IB Physics/Wave Phenomena
11.1 Traveling waves
I assume what we're talking about here is y = A sin(ωt ± kx), the equation in the data book. This can be used to describe a traveling wave as follows.
The amplitude is A (because sine curves range from 1 to -1, multiplying by A makes the displacement range from A to -A). ω is defined as 2πf and k is defined as 2π/λ. The value of t shifts the whole curve to the left or right (assuming the curve is positive in the middle, increasing t moves it to the left, decreasing t to the right).
The period and wavelength of the curve are set by the frequency and wavelength that appear in the equation through ω and k. It's a good idea to play around with this on a graphing calculator to get a feel for it. Anyway, the equation is used for modeling waves.
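To get a feel for how A, ω and k control the shape, here is a minimal Python sketch (the amplitude, frequency and wavelength are arbitrary values chosen only for illustration, and the minus sign is taken) that evaluates y = A sin(ωt - kx) at two different times:

```python
import numpy as np

A = 2.0          # amplitude (m) - example value
f = 5.0          # frequency (Hz)
lam = 0.4        # wavelength (m)

omega = 2 * np.pi * f      # angular frequency, omega = 2*pi*f
k = 2 * np.pi / lam        # wave number, k = 2*pi/lambda

x = np.linspace(0, lam, 9)             # positions across one wavelength
for t in (0.0, 0.05):                  # two snapshots in time
    y = A * np.sin(omega * t - k * x)  # displacement of each point
    print(f"t = {t:.2f} s:", np.round(y, 2))
```

Plotting y against x for increasing t shows the whole pattern sliding along the x-axis, which is the 'shift left or right' described above.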
This is effectively what we already described in the standard level section, but is repeated below for convenience.
Displacement vs time : This graph tracks the movement of a single particle as a wave moves through it, with displacement on the vertical axis and time on the horizontal. The particle moves up and down in a sine-curve pattern. This graph allows us to find both frequency (the number of crests per second) and period (the time between crests), but tells us nothing about the wave speed or wavelength.
Displacement vs position : This is basically a 'snapshot' of the displacement of all the particles in the medium at a given time. Displacement is on the vertical axis, and position (i.e. distance from an arbitrary origin in the material) is on the horizontal. The distance between peaks represents the wavelength. The wave speed can not be calculated directly from this graph, but can be found by combining the information from this and the displacement vs time graph (as described in the next section).
Huygens' principle is a geometrical representation of how waves move through media. Each wave front is assumed to be an infinite number of point sources, each radiating in a circle. After a given period of time, a new wave front is drawn along the edges of these radiated circles, and the process is repeated.
To draw it on paper, start with a wave front, place a number of points on it, and from these draw the waves being emitted as if each were a point source. When unobstructed, this results in a series of circles, but obstructions can change things: waves can be reflected or absorbed by an object, and waves entering a medium of higher optical density will slow down (and so won't travel as far in the same time).
After a given period of time (which depends on the speed of the wave), draw a new wave front running along the edges of these circles as appropriate for the situation. The process is repeated over and over until it gets so boring that you stop. This sort of diagram helps to explain some of the phenomena of waves.
- Diffraction : A very thin slit acts as only a single point source, and so the wave radiates out from it in a circle; in the same way a wave front wraps around the edge of an object, but you really need to draw a diagram to see that.
- Refraction : As the wave enters the more dense medium at an angle, the leading edge slows down, pulling the wave around to a new angle.
This model can be applied to any waves but in questions they will probably be light, water or sound.
Partial reflection occurs whenever light changes media. For example, when light goes from water to air, some light is reflected from the boundary (the same occurs from air to water).
Total internal reflection occurs when light strikes a boundary (from the more dense side) at an angle greater than the critical angle, and all the light is reflected back into the original medium. The critical angle can be found by inserting 90° as the angle of refraction in Snell's law, giving n1 × sin ic = n2. Any wave striking at an angle of incidence above this critical angle will totally internally reflect. When the angle of incidence equals the critical angle, the light runs along the boundary, and below it, refraction (and partial reflection) occurs as usual.
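As a small illustration of the critical-angle calculation, here is a Python sketch (the refractive indices and angles are example values, not taken from the text):

```python
import math

n1 = 1.50   # denser medium (e.g. glass) - example value
n2 = 1.00   # less dense medium (e.g. air) - example value

# Critical angle from n1 * sin(ic) = n2 * sin(90 deg) = n2
ic = math.degrees(math.asin(n2 / n1))
print(f"critical angle = {ic:.1f} degrees")

for angle in (30.0, 45.0):  # angles of incidence inside the denser medium
    if angle > ic:
        print(f"{angle:.0f} deg: total internal reflection")
    else:
        # Snell's law: n1 sin(i) = n2 sin(r)
        r = math.degrees(math.asin(n1 * math.sin(math.radians(angle)) / n2))
        print(f"{angle:.0f} deg: refracts at {r:.1f} deg (plus partial reflection)")
```

For these example indices the critical angle comes out just under 42°, which is why the 45° geometry of the prismatic reflectors described below works.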
Light through optical fibres : This is used both as a communication system, and as a sort of camera in hard to reach places. Light is totally internally reflected along the glass core, which can be bent as long as the bend is not so sharp that light inside strikes the wall at less than the critical angle (see the optics option for more info).
Prismatic reflectors : Glass has a critical angle below 45°, and so it is possible to use a glass, right-angled, triangular prism as a reflector. Light enters the longest side, bounces off one short side at 45° (above the critical angle), off the other, then back out the way it came in... this is more effective than using a mirror because 100% of the light is reflected, whereas mirrors are never 100% efficient. This set-up can also be rearranged to build a periscope (light goes in and out through the two short sides, bouncing off the long one) without mirrors.
Air near hot surfaces : For example, the pavement seems to be wet and shiny on an exceptionally hot day. Since hot air rises, there is a gradient of air temperature above the surface, and since air's refractive index changes with temperature, light travelling through this gradient is refracted progressively, bending from its original (incident) angle, to nearly horizontal, to finally travelling upwards into your eye. Tracing the rays back in a straight line (the virtual rays) creates the illusion that an image of the sky or sun lies on the ground, which is why it looks like a reflecting puddle.
Refractive index depends on the wavelength of the wave, thus different wavelengths of light will be refracted by different amounts at the same boundary. Short wavelength light is refracted more, and long wavelength light less. This means that if white light is shone onto a prism, the light can be separated out into its component colours, red being refracted the least and violet the most.
11.2 Interference and Diffraction
This first bit might seem familiar from the standard level notes.
If, for example, we have two point sources producing waves in circles, they will interfere differently at different points. The easiest way to see this is to draw circles out from each source representing the crests (except now we can call this Huygens' principle). When two of these coincide, constructive interference produces a bigger crest. When two gaps (troughs) coincide, we get a bigger trough; when one crest and one trough coincide, there is destructive interference, and they add to zero. This allows the interference pattern, and the amplitude at each point, to be found.
Also relevant to the discussion of Huygens' principle is the fact that these point sources effectively produce a wave front, since the other parts of the emitted wavelets interfere destructively, thus demonstrating how Huygens' principle can be accounted for (beyond being a geometric representation).
For two sources to be coherent, they must emit waves of identical frequency, in the same phase (i.e. when one emits a crest, so must the other). Path difference is the difference between the distances of a certain point from each source. If the path difference is a whole-number multiple of the wavelength, then constructive interference (an anti-node) is produced; if it is a whole-number multiple plus λ/2, complete destructive interference occurs (producing a node). Points in between have something between a node and an anti-node. The pattern produced is a series of lines pointing away from the point exactly between the sources, alternating constructive-destructive-constructive out from the centre.
Light strikes the two slits, and then produces two coherent point sources next to each other.
- Light striking the centre of the screen has travelled equal distances from both slits (zero path difference), and so produces a bright band on the screen level with the slits (since the light is spread over the smallest area).
- At angles where light from the top slit must travel exactly λ/2 further than light from the bottom slit to reach the screen, the two waves arrive out of phase and destructively interfere, producing a dark spot. Moving further around, the path difference becomes one whole wavelength, the waves reinforce and produce a bright band, and so on, alternating.
This experiment can be described by the equation mλ = d sin θ, where d is the distance between the centres of the two slits and θ is the angle from the slits to the mth bright band. For small angles sin θ ≈ x/D, which gives a bandwidth (distance between consecutive bright bands on the screen) of x = λD/d, where D is the distance to the screen. The small-angle form strictly assumes a curved screen, but it's fine for a flat screen so long as you're not too far from the centre. I don't know if this is really necessary, but see the optics option section for more detail.
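To put some numbers on this, here is a short Python sketch (the wavelength, slit separation and screen distance are made-up illustrative values):

```python
import math

lam = 600e-9   # wavelength (m) - illustrative value
d = 0.25e-3    # slit separation (m)
D = 2.0        # distance to screen (m)

# Bright band (maximum) angles from m * lambda = d * sin(theta)
for m in range(1, 4):
    theta = math.degrees(math.asin(m * lam / d))
    print(f"m = {m}: bright band at {theta:.3f} degrees")

# Small-angle fringe spacing on the screen: x = lambda * D / d
x = lam * D / d
print(f"fringe spacing = {x * 1000:.2f} mm")
```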
These notes come out of the optics section, so there might be too much detail.
Thin films : The classic example of this is a thin layer of oil (assumed to have a lower refractive index than water) floating on top of water. This produces a sort of rainbow effect in the right light conditions, as a result of the following. When light enters the oil, some of it is reflected (with a phase change). The remaining light continues down and some is reflected off the oil-water boundary (again with a phase change, meaning the two phase changes can be ignored. If the film is like a soap bubble, however, only one phase change will occur, and it must be accounted for). This means that if the film is a certain thickness, certain wavelengths will be reinforced while others will destructively interfere. This is how they make those sunglasses which look red from the outside. Note: the light is always assumed to enter and leave vertically; though it is easier to draw at an angle, this should be noted on any diagram. It may be necessary to think about the angle involved if the question wants fringes on the film rather than certain wavelengths being reinforced or cancelled.
Newton's rings : In Newton's rings, there is a flat glass surface with a curved plate (think of the bottom part of a sphere cut off) placed on top of it. This means the gap between the two pieces of glass increases going further out from the centre. Light is reflected off the bottom of the curved plate (with no phase change) and off the top of the base plate (with a phase change). This means that to reinforce, the actual difference between the two distances travelled must be kλ + λ/2, where k is some integer. Note that this means there will be a dark spot at the very centre, not a bright spot (as with the various slit arrangements above).
Since the syllabus says no experimental details will be required, most of that probably isn't necessary.
A diffraction grating is basically a series of many slits, rather than two (as in Young's double slit). These slits produce much more precise lines, because rather than just requiring two beams to coincide, they require many to do so. This produces a much sharper pattern, which is easier to analyse. If white light goes through the diffraction grating, different wavelengths will be diffracted through different angles, and so spectra will be produced. Using this, the component colours of light can be found along with their exact wavelengths (because the wavelength affects the angle at which the bright bands occur). Calculations can be done with mλ = d sin θ, where d is the distance between the centres of two consecutive slits and θ is the angle to the mth bright band; for small angles the bandwidth (distance between consecutive bright bands on a screen a distance D away) is approximately λD/d.
Also relevant here is a quick explanation of the diffraction pattern from each single slit (as this 'defines an envelope on the interference pattern', i.e. it shows what the pattern must fit under). There is a large, wide peak of intensity in the centre, dropping to zero, followed by a series of smaller peaks of half the width of the central one. The minima of this envelope are given by b sin θ = mλ, where b is the width of each slit (using b here to avoid confusion with D, the distance to the screen). I don't know if they really want much detail on this.
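A minimal Python sketch tying the two formulas together (the wavelength, grating spacing and slit width are invented example numbers): it lists the sharp grating maxima from mλ = d sin θ and the single-slit envelope minima from b sin θ = mλ.

```python
import math

lam = 550e-9    # wavelength (m) - example value
d = 2.0e-6      # grating slit spacing (m), i.e. 500 lines/mm
b = 0.8e-6      # width of each individual slit (m)

# Grating maxima: m * lambda = d * sin(theta)
print("grating maxima:")
m = 1
while m * lam / d <= 1:          # only orders with |sin(theta)| <= 1 exist
    theta = math.degrees(math.asin(m * lam / d))
    print(f"  order {m}: {theta:.1f} degrees")
    m += 1

# Single-slit minima (the envelope): b * sin(theta) = m * lambda
print("single-slit envelope minima:")
m = 1
while m * lam / b <= 1:
    theta = math.degrees(math.asin(m * lam / b))
    print(f"  minimum {m}: {theta:.1f} degrees")
    m += 1
```

Running it for several wavelengths shows why white light is spread into spectra: each wavelength gets its own set of angles.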
11.3 Source/detector movement
Shock waves are generally formed when the source of sound waves is travelling above the speed of sound. As the plane (since it's usually a plane) approaches the speed of sound, the sound waves don't really get away from the plane, but rather build up in front of it. Over time, many of these waves constructively interfere, producing what is known as the sound barrier. Once the plane moves faster than this, the sound waves are left behind the plane, creating a shock wave, which trails back from the plane. The angle of the shock wave can be found by taking one point to be the source, then finding where the source would have been 1 second ago. From this point, calculate how far the wave would have travelled in that second, and draw in the circle. A line can then be drawn from the present position of the source to touch the edge of the circle (a tangent). This tangent is at 90 degrees to the radius drawn to the same point, and since two sides of the triangle are known, the angle of the shock wave can be calculated (sin θ = wave speed / source speed).
Doppler effect : This effect is seen in the change in frequency of sound when either the source or the observer are moving. This, therefore, affects the actual number of waves the observer hears per second, and so changes the observed frequency. If the observer and source are moving closer together, then more wave fronts will be observed per second, and so the frequency will be higher. If they are moving apart, then fewer wave fronts will be observed, and so the frequency will be lower.
When the source is at rest, the distance between wave crests is λ. If the frequency is f, then the time (T) between crests is 1/f. If we then assume that the source is moving towards the observer at vs, then in time T the first crest has moved a distance d = vT, where v is the speed of the wave. In the same time, the source has moved ds = vsT in the same direction. At time T, the source emits another crest, so the distance between the two crests will be d - ds; therefore, the new wavelength will be d - ds. This can be expressed as follows.
λ' = d - ds (and since d = λ, and ds = vsT)
λ' = λ - vsT
λ' = λ - vs × λ/v (since T = λ/v)
λ' = λ ( 1 - vs/v )
The new frequency therefore is given by the following
f' = v/λ' = v / ( λ ( 1 - vs/v ) ), and since v/λ = f
f' = f / ( 1 - vs/v ) (which is the equation in the data book). If the motion is away from the observer, then vs will be negative, making the sign in the middle positive; this can be determined as you work out the problem if you know whether the frequency should end up higher or lower.
When the observer is moving towards the source, the problem is slightly different because the wavelength isn't actually changing, but rather the relative velocity of the waves. The speed of the wave relative to the observer is v' = v + vo, where v is the velocity of sound in air. Thus, f' = v'/λ = (v + vo)/λ. Since λ = v/f, we get the following.
f' = ( 1 + vo/v ) f. This is for an observer moving towards the source; a sign change will be necessary if the observer moves away, as above.
These can both be applied as appropriate to solve problems.
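A small Python sketch applying both data-book forms, f' = f/(1 - vs/v) for a moving source and f' = (1 + vo/v)f for a moving observer (the wave speed and emitted frequency are arbitrary illustrative numbers):

```python
v = 340.0      # speed of sound in air (m/s), approximate
f = 440.0      # emitted frequency (Hz) - example value

def moving_source(f, vs, v=v):
    """Source moving towards the observer at vs (use negative vs for moving away)."""
    return f / (1 - vs / v)

def moving_observer(f, vo, v=v):
    """Observer moving towards the source at vo (use negative vo for moving away)."""
    return (1 + vo / v) * f

print(f"source approaching at 30 m/s:   {moving_source(f, 30):.1f} Hz")   # higher
print(f"source receding at 30 m/s:      {moving_source(f, -30):.1f} Hz")  # lower
print(f"observer approaching at 30 m/s: {moving_observer(f, 30):.1f} Hz")
```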
11.4 Standing waves
Unsure about this one. Please fill us in if you know or have somewhere to look it up.
An overall graph of a standing wave will look like a sine curve superimposed over a -sine curve. At any given point in time, though, consecutive anti-nodes will be on opposite sides, so if one is up, the next will be down, then up and so on. The nodes divide the string into equal segments of half a wavelength, so lengths, wavelengths and harmonic numbers can be related by simple counting (the nth harmonic fits n half-wavelengths into the string).
Equation relating fundamental frequency to tension and mass per unit length.
Edward Heddle tells us that we've confused the symbols for tension and period here.
"The formula for the speed of a wave in a string is v = (T/µ)^(1/2), where T is the tension (N) in the string, and the linear density µ = mass/unit length (kg/m). This can be shown with dimensional analysis. This v can be combined with the formula v = fÎ. (N.B. T = 1/f is not the same as tension.) Fiddling around with 2l = Î, gives the fundamental as f(1) = 1/2l x (T/µ)^(1/2)."
(Originally we had the following)
First, I should mention the equation v = √(T/µ). This allows us to calculate the velocity of a wave in a given string based on T, the period and µ, the mass per meter of string. This equation can be equated to v = f x λ. We can then play around with it to get various formulae, for example, 1/µ = f3 x λ2.
As noted previously, an open end in a pipe will have an antinode, and a closed end will have a node. Therefore, a closed-closed pipe will fit half a wavelength, as will an open-open pipe, but an open-closed pipe will fit one quarter. These give the fundamental frequencies; half wavelengths can then be added to get the first, second and so on harmonics. Most of the problems involve relating the length of the pipe to the wavelength or frequency of the sound produced.
Source: http://en.m.wikibooks.org/wiki/IB_Physics/Wave_Phenomena
Chem1 General Chemistry Virtual Textbook → gases → K-M classic
Molecules in motion:
introduction to kinetic-molecular theory
Properties such as temperature, pressure, and volume, together with others that depend on them (density, thermal conductivity, etc.), are known as macroscopic properties of matter; these are properties that can be observed in bulk matter, without reference to its underlying structure or molecular nature.
By the late 19th century the atomic theory of matter was sufficiently well accepted that scientists began to relate these macroscopic properties to the behavior of the individual molecules, which are described by the microscopic properties of matter. The outcome of this effort was the kinetic molecular theory of gases. This theory applies strictly only to a hypothetical substance known as an ideal gas; we will see, however, that under many conditions it describes the behavior of real gases at ordinary temperatures and pressures quite accurately, and serves as the starting point for dealing with more complicated states of matter.
The basic tenets of the kinetic-molecular theory are as follows: (must know!)
The molecules of an ideal gas exert no attractive forces on each other, or on the walls of the container.
The molecules are in constant random motion, and as material bodies, they obey Newton's laws of motion. This means that the molecules move in straight lines (see demo illustration at the left) until they collide with each other or with the walls of the container.
Collisions are perfectly elastic; when two molecules collide, they change their directions and kinetic energies, but the total kinetic energy is conserved. Collisions are not “sticky".
If gases do in fact consist of widely-separated particles, then the observable properties of gases must be explainable in terms of the simple mechanics that govern the motions of the individual molecules.
The kinetic molecular theory makes it easy to see why a gas should exert a pressure on the walls of a container. Any surface in contact with the gas is constantly bombarded by the molecules. At each collision, a molecule moving with momentum mv strikes the surface. Since the collisions are elastic, the molecule bounces back with the same velocity in the opposite direction. This change in velocity ΔV is equivalent to an acceleration a; according to Newton's second law, a force f = ma is thus exerted on the surface of area A exerting a pressure P = f/A.
According to the kinetic molecular theory, the average kinetic energy of an ideal gas is directly proportional to the absolute temperature. Kinetic energy is the energy a body has by virtue of its motion: KE = ½mv². For the molecules of a gas, the average translational kinetic energy works out to (3/2)kT per molecule, which is what links molecular motion to the absolute temperature.
As the temperature of a gas rises, the average velocity of the molecules will increase; because kinetic energy depends on the square of the velocity, doubling the temperature increases this velocity by a factor of √2 (quadrupling the temperature would be needed to double it). Collisions with the walls of the container will transfer more momentum, and thus more kinetic energy, to the walls. If the walls are cooler than the gas, they will get warmer, returning less kinetic energy to the gas, and causing it to cool until thermal equilibrium is reached. Because temperature depends on the average kinetic energy, the concept of temperature only applies to a statistically meaningful sample of molecules. We will have more to say about molecular velocities and kinetic energies farther on.
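To make the temperature-velocity relation concrete, here is a small Python sketch (nitrogen and the two temperatures are arbitrary choices) that computes the rms speed from (1/2)mv² = (3/2)kT:

```python
import math

k = 1.380649e-23        # Boltzmann constant (J/K)
M_N2 = 0.0280           # molar mass of N2 (kg/mol)
N_A = 6.02214076e23     # Avogadro's number (1/mol)
m = M_N2 / N_A          # mass of one N2 molecule (kg)

for T in (300.0, 600.0):                 # doubling the absolute temperature
    v_rms = math.sqrt(3 * k * T / m)     # from (1/2) m v^2 = (3/2) k T
    print(f"T = {T:.0f} K: v_rms = {v_rms:.0f} m/s")
# The second speed is only sqrt(2) (about 1.41) times the first, not 2 or 4 times.
```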
Kinetic molecular theory states that an increase in temperature raises the average kinetic energy of the molecules. If the molecules are moving more rapidly but the pressure remains the same, then the molecules must stay farther apart, so that the increase in the rate at which molecules collide with the surface of the container is compensated for by a corresponding increase in the area of this surface as the gas expands.
If we increase the number of gas molecules in a closed container, more of them will collide with the walls per unit time. If the pressure is to remain constant, the volume must increase in proportion, so that the molecules strike the walls less frequently, and over a larger surface area.
"Every gas is a vacuum to every other gas". This is the way Dalton stated what we now know as his law of partial pressures. It simply means that each gas present in a mixture of gases acts independently of the others. This makes sense because of one of the fundamental tenets of KMT theory that gas molecules have negligible volumes. So Gas A in mixture of A and B acts as if Gas B were not there at all. Each contributes its own pressure to the total pressure within the container, in proportion to the fraction of the molecules it represents.
The molecules of a gas are in a state of perpetual motion in which the velocity (that is, the speed and direction) of each molecule is completely random and independent of that of the other molecules. This fundamental assumption of the kinetic-molecular model helps us understand a wide range of commonly-observed phenomena.
The perfume diffuses away from its source.
Diffusion refers to the transport of matter through a concentration gradient; the rule is that substances move (or tend to move) from regions of higher concentration to those of lower concentration. The diffusion of tea out of a teabag into water, or of perfume from a person, are common examples; we would not expect to see either process happening in reverse!
When the stopcock is opened, random motions cause each gas to diffuse into the other container. After diffusion is complete (bottom), individual molecules of both kinds continue to pass between the flasks in both directions.
It might at first seem strange that the random motions of molecules can lead to a completely predictable drift in their ultimate distribution. The key to this apparent paradox is the distinction between an individual and the population. Although we can say nothing about the fate of an individual molecule, the behavior of a large collection ("population") of molecules is subject to the laws of statistics. This is exactly analogous to the manner in which insurance actuarial tables can accurately predict the average longevity of people at a given age, but provide no information on the fate of any single person.
If a tiny hole is made in the wall of a vessel containing a gas, then the rate at which gas molecules leak out of the container will be proportional to the number of molecules that collide with unit area of the wall per second, and thus with the rms-average velocity of the gas molecules. This process, when carried out under idealized conditions, is known as effusion.
Around 1830, the Scottish chemist Thomas Graham (1805-1869) discovered that the relative rates at which two different gases, at the same temperature and pressure, will effuse through identical openings are inversely proportional to the square roots of their molar masses.
Graham's law, as this relation is known, is a simple consequence of the square-root relation between the velocity of a body and its kinetic energy.
According to the kinetic molecular theory, the molecules of two gases at the same temperature will possess the same average kinetic energy. If v1 and v2 are the average velocities of the two kinds of molecules, then at any given temperature ke1 = ke2 and
½m1v1² = ½m2v2², so v1/v2 = √(m2/m1)
or, in terms of molar masses M,
v1/v2 = √(M2/M1)
Thus the average velocity of the lighter molecules must be greater than those of the heavier molecules, and the ratio of these velocities will be given by the inverse ratio of square roots of the molecular weights.
Although Graham's law applies exactly only when a gas diffuses into a vacuum, the law gives useful estimates of relative diffusion rates under more practical conditions, and it provides insight into a wide range of phenomena that depend on the relative average velocities of molecules of different masses.
The glass tube shown above has cotton plugs inserted at either end. The plug on the left is moistened with a few drops of aqueous ammonia, from which NH3 gas slowly escapes. The plug on the right is similarly moistened with a strong solution of hydrochloric acid, from which gaseous HCl escapes. The gases diffuse in opposite directions within the tube; at the point where they meet, they combine to form solid ammonium chloride, which appears first as a white fog and then begins to coat the inside of the tube.
NH3(g) + HCl(g) → NH4Cl(s)
a) In what part of the tube (left, right, center) will the NH4Cl first be observed?
b) If the distance between the two ends of the tube is 100 cm, how many cm from the left end of the tube will the NH4Cl first form?
a) The lighter ammonia molecules will diffuse more rapidly, so the point where the two gases meet will be somewhere in the right half of the tube.
b) The ratio of the diffusion velocities of ammonia (v1)and hydrogen chloride (v2) can be estimated from Graham's law:
We can therefore assign relative velocities of the two gases as v1 = 1.46 and v2 = 1. Clearly, the meeting point will be directly proportional to v1. It will, in fact, be proportional to the ratio v1/(v1+v2)*, giving 100 cm × 1.46/2.46 ≈ 59 cm from the left end.
*In order to see how this ratio was deduced, consider what would happen in the three special cases in which v1=0, v2=0, and v1=v2, for which the distances (from the left end) would be 0, 50, and 100 cm, respectively. It should be clear that the simpler ratio v1/v2 would lead to absurd results.
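A short Python sketch reproducing this estimate; the molar masses are standard values, and the 100 cm tube length comes from the problem statement:

```python
import math

M_NH3 = 17.03        # molar mass of ammonia (g/mol)
M_HCl = 36.46        # molar mass of hydrogen chloride (g/mol)
tube_length = 100.0  # cm

# Graham's law: v1 / v2 = sqrt(M2 / M1)
v_ratio = math.sqrt(M_HCl / M_NH3)   # NH3 is lighter, so it moves faster
print(f"v(NH3) / v(HCl) = {v_ratio:.2f}")

# Both gases start at the same time, so the meeting point divides the tube
# in proportion to the speeds: distance from the left = L * v1 / (v1 + v2)
meeting_point = tube_length * v_ratio / (v_ratio + 1)
print(f"NH4Cl ring forms about {meeting_point:.0f} cm from the left end")
```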
Note that the above calculation is only an estimate. Graham's law is strictly valid only under special conditions, the most important one being that no other gases are present. Contrary to what is written in some textbooks and is often taught, Graham's law does not accurately predict the relative rates of escape of the different components of a gaseous mixture into the outside air, nor does it give the rates at which two gases will diffuse through another gas such as air. See Misuse of Graham's Laws by Stephen J. Hawkes, J. Chem. Education 1993 70(10) 836-837
One application of this principle that was originally suggested by Graham himself but was not realized on a practical basis until a century later is the separation of isotopes. The most important example is the enrichment of uranium in the production of nuclear fission fuel.
The K-25 Gaseous Diffusion Plant was one of the major sources of enriched uranium during World War II. It was completed in 1945 and employed 12,000 workers. Owing to the secrecy of the Manhattan Project, the women who operated the system were unaware of the purpose of the plant; they were trained to simply watch the gauges and turn the dials for what they were told was a "government project".
Uranium consists mostly of U238, with only 0.7% of the fissionable isotope U235. Uranium is of course a metal, but it reacts with fluorine to form a gaseous hexafluoride, UF6. In the very successful gaseous diffusion process, the UF6 diffuses repeatedly through a porous wall. Each time, the lighter isotope passes through a bit more rapidly than the heavier one, yielding a mixture that is minutely richer in U235. The process must be repeated over a thousand times to achieve the desired degree of enrichment. The development of a large-scale gaseous diffusion plant was a key part of the U.S. development of the first atomic bomb in 1945. This process is now obsolete, having been replaced by other methods.
Diffusion ensures that molecules will quickly distribute themselves throughout the volume occupied by the gas in a thoroughly uniform manner. The chances are virtually zero that sufficiently more molecules might momentarily find themselves near one side of a container than the other to result in an observable temporary density or pressure difference. This is a result of simple statistics. But statistical predictions are only valid when the sample population is large.
Consider what happens in extremely small volumes of space: cubes that are about 10^-7 cm on each side, for example. Such a cell would contain only a few molecules, and at any one instant we would expect to find some containing more or fewer than others, although in time they would average out to the same value. The effect of this statistical behavior is to give rise to random fluctuations in the density of a gas over distances comparable to the dimensions of visible light waves. When light passes through a medium whose density is non-uniform, some of the light is scattered. The kind of scattering due to random density fluctuations is called Rayleigh scattering, and it has the property of affecting (scattering) shorter wavelengths more effectively than longer wavelengths. The clear sky appears blue in color because the blue (shorter wavelength) component of sunlight is scattered more. The longer wavelengths remain in the path of the sunlight, available to delight us at sunrise or sunset.
What we have been discussing is a form of what is known as fluctuation phenomena. As the animation shows, the random fluctuations in pressure of a gas on either side do not always completely cancel when the density of molecules (i.e., pressures) are quite small.
An interesting application involving several aspects of the kinetic molecular behavior of gases is the use of a gas, usually argon, to extend the lifetime of incandescent lamp bulbs. As a light bulb is used, tungsten atoms evaporate from the filament and condense on the cooler inner wall of the bulb, blackening it and reducing light output. As the filament gets thinner in certain spots, the increased electrical resistance results in a higher local power dissipation, more rapid evaporation, and eventually the filament breaks.
The pressure inside a lamp bulb must be sufficiently low for the mean free path of the gas molecules to be fairly long; otherwise heat would be conducted from the filament too rapidly, and the bulb would melt. (Thermal conduction depends on intermolecular collisions, and a longer mean free path means a lower collision frequency). A complete vacuum would minimize heat conduction, but this would result in such a long mean free path that the tungsten atoms would rapidly migrate to the walls, resulting in a very short filament life and extensive bulb blackening.
Around 1910, the General Electric Company hired Irving Langmuir as one of the first chemists to be employed as an industrial scientist in North America. Langmuir quickly saw that bulb blackening was a consequence of the long mean free path of vaporized tungsten atoms, and he showed that the addition of a small amount of argon will reduce the mean free path, increasing the probability that an outward-moving tungsten atom will collide with an argon atom. A certain proportion of these will eventually find their way back to the filament, partially reconstituting it.
Krypton would be a better choice of gas than argon, since its greater mass would be more effective in changing the direction of the rather heavy tungsten atom. Unfortunately, krypton, being a rarer gas, is around 50 times as expensive as argon, so it is used only in “premium” light bulbs. The more recently-developed halogen-cycle lamp is an interesting chemistry-based method of prolonging the life of a tungsten-filament lamp.
Some interesting light-bulb links:
- The Great Internet Light Bulb Book (all about incandescent lamps)
- Langmuir and the gas-filled incandescent lamp
- History of the halogen cycle lamp
Gases, like all fluids, exhibit a resistance to flow, a property known as viscosity. The basic cause of viscosity is the random nature of thermally-induced molecular motion. In order to force a fluid through a pipe or tube, an additional non-random translational motion must be superimposed on the thermal motion.
There is a slight problem, however. Molecules flowing near the center of the pipe collide mostly with molecules moving in the same direction at about the same velocity, but those that happen to find themselves near the wall will experience frequent collisions with the wall. Since the molecules in the wall of the pipe are not moving in the direction of the flow, they will tend to absorb more kinetic energy than they return, with the result that the gas molecules closest to the wall of the pipe lose some of their forward momentum. Their random thermal motion will eventually take them deeper into the stream, where they will collide with other flowing molecules and slow them down. This gives rise to a resistance to flow known as viscosity; this is the reason why long gas transmission pipelines need to have pumping stations every 100 km or so.
|Origin of gas viscosity. This shows the boundary region where random movements of molecules in directions other than the one of the flow (1) move toward the confining surface and temporarily adsorb to it (2). After a short time, thermal energy causes the molecule to be released (4) with most of its velocity not in the flow direction. A rapidly-flowing molecule (3) collides with it (5) and loses some of its flow velocity.|
As you know, liquids such as syrup or honey exhibit smaller viscosities at higher temperatures as the increased thermal energy reduces the influence of intermolecular attractions, thus allowing the molecules to slip around each other more easily. Gases, however, behave in just the opposite way; gas viscosity arises from collision-induced transfer of momentum from rapidly-moving molecules to slow ones that have been released from the boundary layer. The higher the temperature, the more rapidly the molecules move and collide with each other, so the higher the viscosity.
Everyone knows that the air pressure decreases with altitude. This effect is easily understood qualitatively through the kinetic molecular theory. Random thermal motion tends to move gas molecules in all directions equally. In the presence of a gravitational field, however, motions in a downward direction are slightly favored. This causes the concentration, and thus the pressure of a gas to be greater at lower elevations and to decrease without limit at higher elevations.
The pressure at any elevation in a vertical column of a fluid is due to the weight of the fluid above it. This causes the pressure to decrease exponentially with height.
This plot shows how the pressure of air at 25° C decreases with altitude.
Note the constant increment of altitude (6.04 km, the "half height") required to reduce the pressure by half its value. This reflects the special property of an exponential function a = e^y, namely that the derivative da/dy is just e^y itself.
The exact functional relationship between pressure and altitude is known as the barometric distribution law. It is easily derived using first-year calculus. For air at 25°C the pressure Ph at any altitude h (in km) is given by Ph = Po e^(-0.11h), in which Po is the pressure at sea level.
This is a form of the very common exponential decay law which we will encounter in several different contexts in this course. An exponential decay (or growth) law describes any quantity whose rate of change is directly proportional to its current value, such as the amount of money in a compound-interest savings account or the density of a column of gas at any altitude. The most important feature of any quantity described by this law is that the fractional rate of change of the quantity in question (in this case, ΔP/P or in calculus, dP/P) is a constant. This means that the increase in altitude required to reduce the pressure by half is also a constant, about 6 km in the Earth's case.
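A tiny Python sketch of the barometric formula as quoted above (a sea-level pressure of 1 atm is assumed purely for illustration, with h in km):

```python
import math

P0 = 1.0      # sea-level pressure (atm) - assumed for illustration
decay = 0.11  # decay constant per km for air at 25 deg C (from the text)

for h in (0, 6.04, 12.08, 18.12):            # multiples of the "half height"
    P = P0 * math.exp(-decay * h)
    print(f"h = {h:5.2f} km: P = {P:.3f} atm")
# Each additional 6.04 km multiplies the pressure by roughly the same factor
# (about 1/2), which is the signature of exponential decay.
```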
Because heavier molecules will be more strongly affected by gravity, their concentrations will fall off more rapidly with elevation. For this reason the partial pressures of the various components of the atmosphere will tend to vary with altitude. The difference in pressure is also affected by the temperature; at higher temperatures there is more thermal motion, and hence a less rapid fall-off of pressure with altitude. Owing to atmospheric convection and turbulence, these effects are not observed in the lower part of the atmosphere, but in the uppermost parts of the atmosphere the heavier molecules do tend to drift downward.
At very low pressures, mean free paths are sufficiently great that collisions between molecules become rather infrequent. Under these conditions, highly reactive species such as ions, atoms, and molecular fragments that would ordinarily be destroyed on every collision can persist for appreciable periods of time.
The most important example of this occurs at the top of the Earth's atmosphere, at an altitude of 200 km, where the pressure is about 10^-7 atm. Here the mean free path will be 10^7 times its value at 1 atm, or about 1 m. In this part of the atmosphere, known as the thermosphere, the chemistry is dominated by species such as O, O2+ and HO which are formed by the action of intense solar ultraviolet light on the normal atmospheric gases near the top of the stratosphere. The high concentrations of electrically charged species in these regions (sometimes also called the ionosphere) reflect radio waves and are responsible for around-the-world transmission of mid-frequency radio signals.
The ion density in the lower part of the ionosphere (about 80 km altitude) is so great that the radiation from broadcast-band radio stations is absorbed in this region before these waves can reach the reflective high-altitude layers. However, the pressure in this region (known as the D-layer) is great enough that the ions recombine soon after local sunset, causing the D-layer to disappear and allowing the waves to reflect off of the upper (F-layer) part of the ionosphere. This is the reason that distant broadcast stations can only be heard at night.
Make sure you thoroughly understand the essential ideas which have been presented above. It is especially important that you know the principal assumptions of the kinetic-molecular theory. These can be divided into those that refer to the nature of the molecules themselves, and those that describe the nature of their motions (see the tenets listed near the top of this page).
Source: http://www.chem1.com/acad/webtext/gas/gas_4.html
A population cannot grow forever, and will reach some kind of growth-limit eventually. How does this occur? Why do some populations grow quickly, and others only slowly? Why do populations stop growing at different sizes? Answering questions like these will force us to think about the factors influencing birth and death, and how these factors interact to determine the population's density.
Under ideal conditions, any population can grow exponentially (that is to say, with an accelerating rate), growing faster the bigger it gets (like compound interest in a trust fund). Exponential growth may be extremely fast (e.g. bacteria provided with fresh medium), or moderately fast (mice at harvest-time in the fields), or rather slow (humans since about 1700). The rate will be set by the time it takes an individual to become a breeder, by the number of offspring produced by an individual, and by the nature and intensity of mortality factors acting upon the population, among other things. Ideal conditions, however, do not persist. As a population becomes larger, the presence and activities of its members will alter the conditions, for example depleting food faster than it is renewed.
Fast-multiplying organisms (such as a population of bacteria) do not differ much in their individual ability to access resources, and have little or no capacity to use their behaviour to beat out rival individuals. When the population becomes unsustainably large, nearly all individuals are equally (and perhaps fatally) inconvenienced by food shortage, and the density crashes abruptly. The few lucky survivors may then be able to initiate a fresh phase of growth, if the food supply recovers, but their fate will be similar. This “lid on growth”-effect is referred to as limitation, literally suggesting that there is a ceiling or upper limit to density imposed by an outside force and acting indifferently upon the individuals present. The “lid” won’t necessarily always be held in the same position over the growing population, but on average there will be a normal ceiling against which the population will bash itself.
Many factors aside from sudden food-shortage can impose limitation: the onset of bad weather conditions (cold, extreme heat, drought), disturbances like fires or floods, and in some instances predation or disease. A population whose size is set only by limitation will never achieve a stable, near-constant density, and will instead fluctuate through time in a series of spikes and crashes, often referred to as a cycle of “boom and bust”. This is an example of a relatively “r-selected” life-history.
A population in which growth is “limited-only” is in no way controlled by its own actions. The forces which end up limiting such a population would act whether the population was present, or not; consider limitation imposed by the weather – the onset of winter will kill off mosquitoes, but the winter cannot know that it is doing this, and winter will not “hold off” if mosquitoes are absent! But there are factors whose actions are influenced by the presence and state of the population, and their influence may be such as to stabilize the population’s density just as a thermostat can stabilize the temperature of air in a house.
When a factor acts at an intensity set in part by the state of the population it is affecting – i.e. when a factor's strength depends on the size or growth-rate of the controlled population – the factor is referred to as a potentially regulatory factor. If the factor actually does produce a stable, controlled density, and maintains it over time, the process is referred to as regulation. The main difference between regulation and limitation is that regulation imposes both upper and lower limits on population density, not just a ceiling. Regulation can do this because the factor in some way "notices" the density of the population and acts accordingly.
For example: imagine a population regulated by a constant food supply, in which the stable state is that every individual manages to acquire just barely enough food to survive. If a few extra individuals were to be added to this population (by extra births, or the arrival of immigrants), the weakest would fall short of nutritional adequacy and would die, and the remaining individuals would stabilize at a density near the previous stable value. Now imagine the effect of removing a few individuals (either by accidental deaths, or emigration, or an experimenter’s action): the remaining individuals would be able to produce more offspring, because more food would be available, but this would increase density only to the previous stable value, and no further. Similarly, if food supply were to be altered, density would stabilize at the same food-per-individual value as before, either more individuals with increased food-supply or fewer with reduced supply. In this extended example, competition for food is clearly the mechanism of regulation. Other potentially-regulating forces are interspecific competition, predation, and disease – biotic factors generally.
Regulated populations, when they are at low density, grow more or less exponentially, but as they get larger the influence of their own members’ presence, and/or the increasing intensity of extrinsic regulatory forces acting upon them, prevents exponential growth from continuing. The slowing-down of growth causes the initial exponential rise to deflect until the growth-curve levels off, that is to say until the population density achieves a stable, unchanging value. The result on the density-over-time graph is a shape like a flattened S, usually referred to as a pattern of S-shaped or sigmoid growth. This is the form of growth-curve characteristic of K-selected populations, and the stable maximum density achieved at the end of the S is the carrying capacity, or K-value.
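As a numerical illustration of sigmoid growth, here is a minimal Python sketch of a discrete logistic model, ΔN = rN(1 - N/K); the growth rate r and carrying capacity K are arbitrary example values, not taken from the text:

```python
r = 0.5      # intrinsic per-capita growth rate (per time step) - example value
K = 1000.0   # carrying capacity - example value
N = 10.0     # starting population size

history = [N]
for t in range(30):
    N += r * N * (1 - N / K)   # growth term shrinks as N approaches K
    history.append(N)

for t in range(0, 31, 5):
    print(f"t = {t:2d}: N = {history[t]:7.1f}")
```

Early on, the (1 - N/K) factor is close to 1 and growth is nearly exponential; as N approaches K the factor approaches 0, flattening the curve into the S shape described above.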
Just because a factor is biotic, this does not prove that it is regulatory. All regulatory factors are biotic, but not all biotic factors are regulatory. [“All crows are black, but not all black things are crows.”] Biotic factors at least have the potential to alter their action according to the population’s condition, though they do not necessarily do so.
Remember that all populations, whether regulated or not, are subject to limitation. Thus some populations are limited-only, some are mostly limited and a bit regulated, and some are limited and strongly regulated (perhaps by several regulatory factors, not just one).
Another way to describe the action of regulatory factors is to describe them as density-dependent in their action: they differ in intensity in such a way that they stabilize the population. Competition for food acts gently at low density (lots of food, few individuals, thus a weak force to control growth) and strongly at high density (less food relatively speaking, to the point where bare survival becomes difficult, as there are more hungry mouths present). Contagious disease spreads slowly at low density (hard to transmit when individuals are sparse) and easily at high density (perhaps aided and abetted in killing victims by starvation and other high-density problems). A density-dependent (or "DD") mortality factor kills a greater percentage of a large population than it does a small one, and this tends to stabilize density; density-dependent birth makes fewer babies per mother in a large population than in a small one, again stabilizing but from the opposite effect. Some regulated populations are controlled mainly by DD mortality, some mainly by DD birthrates, and some by both; all will of course also be influenced by limiting factors.
Limiting factors in contrast, as described above, act in a manner which must be described as density-independent (“DI”). DI mortality factors certainly cause an increasing number of deaths as density increases, but always in proportion to the size of the population – they act much like sales-tax on items of different cost! (You pay more sales-tax on a $10 item than on a $1 item, but the increase is by a factor of 10 exactly, just like the item-price.) Thus although DI mortality factors can slow the exponential growth of a population, they cannot by themselves change it to sigmoid growth, so they cannot regulate. This is true even when a DI factor like a major catastrophe strikes a population – it may kill 95 or 99% of the individuals present, yet it is limiting only. Very limiting, to be sure, but not likely to lead to any regulated stable density; instead, it will simply set the stage for steady recovery, or it will drive the population extinct – in neither case is there regulation.
It is also possible for factors to act in a manner that varies with density, but not in the mode which promotes stability; consider an example. Imagine a house thermostat which responds to warm air by turning the furnace ON, and responds to cool air by turning the furnace OFF. Such a device, far from creating a stable temperature, would either burn the house down, or allow it to freeze solid! It would be responding differentially to temperature, but “the wrong way around”, promoting instability. In ecological terms this would be analogous to a factor killing a smaller fraction of a large than a small population, or one promoting more births per female in a crowded population than in a sparse one. We refer to such factors as inverse-density-dependent (“IDD”) factors. Such factors obviously cannot regulate, and in fact are counter-regulatory, but they can be consistent with overall regulation as long as their effects are relatively weak.
We can best assess the rates of birth and death of a population as a value “per individual”, or if you prefer the fractional rate: for death, how many individuals die per individual present per unit time? This is the same as saying what fraction of individuals present die per unit time? If ten die of one hundred present in a time period, you can then say “ten per 100 die”, or equivalently “one-tenth (or ten per cent) die”, per time period. Similarly for birth, we say (for instance) twenty offspring are produced per hundred breeding females per unit time, or equivalently “one-fifth of females breed per time period”, or “twenty per cent breed per unit time”, or “on average one-fifth of an offspring is produced by each female in a time period”.
If you graph out these “fractional rates” over density, the shape of the resulting line tells you if the factor is DD or DI. A DI mortality, for example, would always kill the same fraction of individuals irrespective of density – say 10 per cent of a small population, of a medium-sized population, or of a large one. Thus the fraction, 1/10 or 10%, would stay constant with density, and the line would be parallel with the density-axis.
On the other hand, a DD mortality factor would act increasingly strongly at higher densities: it might kill 10% of a small population, 30% of a medium-sized one, and 70% of a large one. Such a line will have a positive slope with density. Of course, since a birth-rate factor has the opposite effect on density to that of a death-rate factor, a DD birth-rate will show a negative slope with density. If a population exhibits birth- and death-rate lines which cross on the graph – and to achieve this, one rate at least must vary with density – there will be a point of density at which the rates are equal, and at this density the population will have zero growth.
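A small numerical sketch of that crossover (the per-capita rates below are invented linear/constant functions of density, purely for illustration): a density-dependent birth rate that falls with N, a density-independent death rate that stays flat, and the equilibrium where the two are equal.

```python
def birth_rate(N):
    # Density-dependent (DD) per-capita birth rate: declines as density rises
    return max(0.0, 0.60 - 0.0005 * N)

def death_rate(N):
    # Density-independent (DI) per-capita death rate: the same at any density
    return 0.20

# Scan densities and see where per-capita birth and death rates are equal
for N in range(0, 1601, 200):
    b, d = birth_rate(N), death_rate(N)
    print(f"N = {N:4d}: birth {b:.3f}, death {d:.3f}, net {b - d:+.3f}")
# Net growth is positive below N = 800 and negative above it, so in this
# DD-dominated example the population is pushed back toward the crossover
# density from both sides.
```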
Be warned! – just because a crossover point exists, this does not necessarily mean the population will be stable at that point. The overall slopes of the birth and death rate curves must sum to a DD state, otherwise there will be no regulation and no stability. When there are DI factors, they will contribute nothing to regulation, and just increase “noise” (variance) in the population-control situation. When there are IDD factors present, they will undercut the strength of DD factors, and must be taken into consideration when evaluating the overall mode of control. Only when all factors of significance are analyzed for their density-relations can we confidently assert that a population’s growth is understood.
Whether a population is regulated or merely limited, lives at a stable consistent density or jerks all over the map in boom-and-bust chaos, this is no indication of the success of the organism! Many stable K-selected organisms like pandas and whales are highly endangered species, while the majority of spike-and-crash r-selected organisms are ineradicable pests like bacteria or mosquitoes – a population's mode of growth and density-control is independent of how well it may do in its environment.
Source: http://www.zoology.ubc.ca/~bio310/121T_files/06S_density.htm
General Astronomy/Protostars and Stellar Nurseries
The Birth of Protostars
Protostars are formed in stellar nurseries called nebulas, which are regions of higher dust and gas density than the surrounding interstellar space. Nebulas are made up of mostly hydrogen and a little helium, although more massive elements and even molecules are also present; these heavier elements and molecules come from previous generations of stars that died and scattered some of their remnants into the cloud. These clouds of gas and dust can span hundreds of light years across, and can form when enough gas and dust come together to become gravitationally bound, or from material thrown off by dying stars (planetary nebulas and supernova remnants).

If the nebula contains a dense enough clump of gas and dust, gravitational contraction may become significant. If gravity becomes strong enough to pull in the surrounding dust and gas, the material trapped by gravity collapses in on itself; this is how a protostar is formed. The collapsing material forms a rotating ball, and this rotation draws in gas and dust from outside it, similar to when the drain of a tub is opened and all the rubber ducks at the other side of the tub start to move towards the drain, getting caught in the whirlpool and finally sucked down the drain. This is how a protostar increases its mass: the protostar starts out as the small rotating ball (think of it as the drain), and the rotation creates an accretion disk (the whirlpool around the drain) that funnels dust and gas from the surrounding nebula onto the protostar. How fast this process happens helps determine the outcome of the new star.

This process stops when the protostar starts nuclear fusion of hydrogen. During the formation of the protostar and the accretion process, the protostar becomes hotter and denser. It becomes denser because the accretion disk keeps adding material, increasing the protostar's gravity and squeezing the gas and dust closer and closer towards the center; as the density increases, so does the temperature. When the temperature at the center reaches roughly 10^7 K (about ten million kelvin), hydrogen fusion begins. This is the start of the proton-proton chain, the main fusion reaction that supports the star, and it marks the birth of a new star. The onset of hydrogen burning drives a strong outflow (stellar wind) that blows the remaining accretion disk away from the star, so no new material is added to the new star.
Simple Model of Protostar Formation
The preceding section described the formation of new stars; we will now look at the early physics used to describe protostar creation. The first person to model protostar formation was Sir James Jeans, who studied the globules and molecular clouds where protostar formation is observed. Jeans asked what conditions are needed in a molecular cloud or globule to make material collapse and form a protostar. During Jeans's lifetime (1877-1946) the computational power of modern computers was not available, so he had to simplify his calculations. The major simplifications Jeans made before beginning his analysis were to assume that the effects of rotation, turbulence, and magnetic fields can be neglected. These assumptions are not strictly true, but Jeans's calculation gives a good starting point. Jeans started with the virial theorem,
2K + U = 0 (equation 1)
This theorem states that, in equilibrium, the magnitude of the total gravitational potential energy (U) is twice the total kinetic energy (K) of the system, so the two terms add to zero. This tells us when the cloud will collapse or expand: if twice the kinetic energy is greater than the magnitude of the potential energy the cloud will expand, and if it is less the cloud will collapse. We can write the potential energy as,
U = -(3/5)(GM^2/R) (equation 2)
And kinetic energy can be written as,
K = (3MkT)/(2μmH) (equation 3)
μ = mean molecular weight
M = mass of the cloud
mH = mass of a hydrogen atom
R = radius of the cloud
G and k = gravitational constant and Boltzmann's constant
rewriting R as,
R = [(3M)/(4πρ)]^(1/3) (equation 4)
ρ = the initial mass density of the cloud, assumed to be constant throughout the cloud
Jeans then substituted the expression for R into the potential energy equation, put both energy expressions into the virial theorem, and solved for the mass to find the minimum mass required for a cloud to collapse; this is called the Jeans mass. He found the corresponding minimum radius by substituting the expression for R into the Jeans mass equation and solving; this is called the Jeans length. If the cloud has a mass or radius larger than these values, the cloud will collapse.
MJ = [(5kT)/(GμmH)]^(3/2) [3/(4πρ)]^(1/2) (equation 5)
RJ = [(15kT)/(4πGμmHρ)]^(1/2) (equation 6)
This was the first theoretical attempt to model the formation of a protostar. These equations give good approximations of which molecular clouds will be able to form protostars, but observations of forming protostars and of molecular clouds show that the equations Jeans developed are not always accurate.
Constraining factors on Sir James Jeans's model
The preceding section explained Jeans's model for the formation of protostars from molecular clouds. Observations of molecular clouds and protostars have shown that this model is flawed. The model predicts that the entire cloud will collapse into the forming protostar, and that any cloud whose mass or radius exceeds the Jeans mass or Jeans length will collapse and form protostars. Astronomers have found molecular clouds and globules that do not follow these predictions: some have many stars forming in them, while others exceed the Jeans mass or Jeans length yet show little protostar activity. Attempts to explain these contradictory observations have found that the simplifications Jeans made cannot be left out; when some of the previously excluded variables are included, the model fits the observations more closely. Among the variables excluded from Jeans's model are cloud rotation, the presence of a magnetic field, temperature changes, mass density changes, external gas pressure, and fragmentation.
Let us look at a diffuse hydrogen cloud. Assume that the temperature is 50 K, that the cloud is composed entirely of hydrogen with a density of 8.4×10^-19 kg/m^3, and take μ to be 1. What, then, is the minimum mass necessary to cause the cloud to collapse? Using equation 5 with the given values, we find that the mass necessary for collapse is roughly 1500 solar masses. A typical diffuse hydrogen cloud ranges in mass from 1-100 solar masses, so the cloud is stable, since the Jeans mass calculated above is greater than the mass of the cloud.
Now let's look at what happens in the center of a dense giant molecular cloud (GMC). The typical temperature for such a cloud is 10 K; we will take the density to be 3×10^-17 kg/m^3 and μ to be 2. Again using equation 5, we find that the Jeans mass is now only about 8 solar masses. Dense GMC cores are roughly 10 solar masses, so we can reason that GMC cores are unstable and will form stars, because the Jeans mass is lower than the mass of the core. This is supported by astronomers' observations of GMCs in our night sky.
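To make these numbers concrete, here is a minimal Python sketch of equation 5. The constant values and the round-number cloud parameters are the ones assumed above; treat it as an illustration, not a research-grade calculation.
```python
import math

k, G = 1.381e-23, 6.674e-11        # Boltzmann constant (J/K), gravitational constant (SI)
m_H, M_sun = 1.674e-27, 1.989e30   # hydrogen atom mass (kg), solar mass (kg)

def jeans_mass(T, mu, rho):
    """Jeans mass from equation 5, in kg."""
    return (5 * k * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5

# Diffuse hydrogen cloud: T = 50 K, mu = 1, rho = 8.4e-19 kg/m^3
print(jeans_mass(50, 1, 8.4e-19) / M_sun)   # roughly 1.5e3 solar masses -> stable
# Dense GMC core: T = 10 K, mu = 2, rho = 3e-17 kg/m^3
print(jeans_mass(10, 2, 3e-17) / M_sun)     # roughly 8 solar masses -> unstable
```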
Students and specialists in the forestry, tree physiology, and forest ecology sectors often have to measure tree growth. To do this, they usually use a dendrometer, or special measuring band. Also known as D-tape, diameter tape is a type of dendrometer used to measure tree diameter.
Diameter measuring tapes are usually made of cloth or metal and are graduated in units of π, approximately 3.14, inches or centimeters, so that wrapping the tape around the trunk gives a direct reading of diameter. They are generally used to measure a tree's diameter at breast height, which is considered to be four and a half feet (137.16 centimeters) above the ground. Measuring the diameter at breast height keeps the measurement away from the flared base of the tree, which would give a diameter that is not representative of the tree as a whole.
Though diameter tape actually measures the circumference of a tree, the reading converts easily to diameter because circumference and diameter are related by π. Diameter tape does not provide completely accurate measurements, however, since it assumes that the tree's cross-sections are perfect circles; any value obtained is therefore considered an approximation.
Before using the diameter tape to measure a tree, the tree stem must be prepared. The surface should be free of knots, branches, and other obstructions, and the band around the trunk where the tape will sit should be smoothed with a file or sandpaper. Care should be taken during this process to avoid damaging the tree.
The measurer should make sure the diameter tape is at breast height. The tape should be perfectly level, without any kinks or twists, to ensure the most accurate reading, and the diameter scale should face up, toward the measurer. When the zero mark aligns with the end of the tape, the diameter can be read directly off the tape.
After taking the measurement, the circumference can be converted to diameter. This is done by dividing the circumference by π, approximately 3.14; mathematicians will recognize this as the circumference equation C = πd solved for the diameter.
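As a quick illustration of that conversion (the tape reading below is a made-up example):
```python
import math

def diameter_from_circumference(circumference):
    """Convert a measured circumference to diameter, assuming the cross-section
    is a perfect circle (the same approximation a diameter tape makes)."""
    return circumference / math.pi

# Hypothetical measurement: a tape wrapped at breast height reads 94.2 cm around.
print(round(diameter_from_circumference(94.2), 1))   # about 30.0 cm
```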
Appraising the diameter of a tree in conjunction with its height is important for estimating the tree's wood volume. When the tree is sold for pulp, lumber, or other purposes, its volume is necessary for proper pricing and use. To estimate the volume of wood in standing trees, tree volume tables are used. These tables list average diameters and heights in a matrix, giving a quick visual display of wood volume.
Using Voltage Dividers
The voltage divider equation is arguably the most important equation for an electrical engineer to know. At the very least, it is one of the most fundamental. Although the voltage divider technique becomes cumbersome when applied to larger circuits, no other method is faster when it comes to finding voltages in smaller circuits. This beefy voltage divider explanation was kindly donated by Ryan Eatinger. The simplest form of a voltage divider circuit is shown in Figure 1. V1 and V2 can be found using the following equations: V1 = Vs·R1/(R1 + R2) and V2 = Vs·R2/(R1 + R2), where Vs is the source voltage and R1 and R2 are the two series resistors.
The voltage divider equation applies to series circuits where the current remains constant throughout the circuit. If current is constant for all resistors, then it can be taken out of the equation. This is the true advantage of the voltage divider. If there is a choice between working with currents and voltages and working only with voltages, the choice becomes obvious. Here’s how it works. Once again, we have the circuit in Figure 1. The current remains constant throughout the circuit, meaning that the current through the source equals the current through R1 equals the current through R2.
Recalling Ohm’s law, write the equation above in terms of voltage and resistance.
Ohm's Law: V = IR, or equivalently I = V/R. Applying this to the source and to each resistor gives Vs/(R1 + R2) = V1/R1 = V2/R2.
These equations can now be used to find V1 and V2.
The general equation for a voltage divider is given below, where Vo is the measured voltage, Vs is the source voltage, Ro is the resistance across which the voltage is measured, and RT is the equivalent resistance of the circuit. Figure 2 shows the corresponding circuit.
General Voltage Divider Equation: Vo = Vs·(Ro/RT)
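As a small sanity check, here is a minimal Python sketch of the general equation. The 10 V source and the 2 kΩ / 3 kΩ values are made up for illustration; they are not taken from Figure 1.
```python
def voltage_divider(v_s, r_o, other_series_resistors):
    """Vo = Vs * Ro / RT, where RT is Ro plus all other series resistances."""
    r_t = r_o + sum(other_series_resistors)
    return v_s * r_o / r_t

# Hypothetical series circuit: 10 V source, R1 = 2 kOhm, R2 = 3 kOhm.
v2 = voltage_divider(10.0, 3e3, [2e3])
print(v2)   # 6.0 V across R2 (the remaining 4.0 V is across R1)
```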
A voltage divider is not always in the simple form shown so far. Recognizing a voltage divider is a skill that takes time to develop. This article introduces some variations on the basic voltage divider circuit that you may encounter. The best way to solve a voltage divider is to simplify it to the basic form shown in Figure 2. Once in this form, apply the general voltage divider equation to find the desired voltage.
Applying the voltage divider equation to a series circuit is a fairly straightforward process. It’s simply a matter of identifying which resistors make up Ro and then adding all the resistors together to find the equivalent resistance.
Example 1: In Figure 3, there are four resistors and you’re trying to find the voltage across one of them. The resistors are all in series, making the equivalent resistance of the circuit 10 kΩ.
Example 2: When analyzing a circuit, pay attention to the orientation and location of the plusses and minuses. Voltages aren’t always across one resistor. In Figure 4, the terminal voltage is across the combination of resistors R2, R3, and R4. Adjust the Ro accordingly to include the equivalent resistance of the three resistors.
Example 3: You should also remember that voltages aren’t always measured to ground. In Figure 4, Vo is measured across resistors R2 and R3 only. Ro only includes the resistance between the plus and minus of Vo.
Circuits with Parallel Resistors:
Voltage dividers apply to resistors in series. If you encounter a circuit with resistors in parallel, you must combine any parallel resistors before applying the voltage divider equation.
Only after combining the parallel resistors will the voltage divider equation work. For this example, the parallel combination of R2, R3, and R4 combines to form the Ro in the general voltage divider equation.
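A minimal sketch of that two-step process, with made-up component values (they are not the ones in the figure):
```python
def parallel(*resistors):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Hypothetical circuit: Vs = 12 V, series R1 = 1 kOhm feeding R2, R3, R4
# (3 kOhm each) in parallel. Combine the parallel group first, then divide.
r_o = parallel(3e3, 3e3, 3e3)        # 1 kOhm equivalent
v_o = 12.0 * r_o / (1e3 + r_o)       # general voltage divider equation
print(r_o, v_o)                      # 1000.0 ohms, 6.0 V
```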
Well, that should just about cover voltage dividers. As always, if you have any questions feel free to make a comment or send me or another one of the admins a message and it will be taken care of! Thanks again to Ryan Eatinger ([email protected]) for the article.
Continuing on with wireless-communications-related subjects: the architecture of a UMTS system is the first thing to understand if you want to undertake an education such as this. It consists of three distinct components:
- The User Equipment (UE)
- The UMTS Terrestrial Radio Access Network (UTRAN)
- The Core Network (CN)
As with any wireless network out there, the main purpose is to provide access to services (data, voice, etc.). The services network is divided into the Public Switched Telephone Network (PSTN), which provides voice and special telephone-related services (look it up on wiki), and the Internet, which provides a wide range of packet data services such as email or access to the World Wide Web. These things probably all sound very familiar to you, and they should, because they are critical in maintaining today's society, and almost everyone living in the modern world uses at least one of these services in their daily life.
The UMTS mobile, also known as the User Equipment (UE), interfaces with the UTRAN via the UMTS physical layer radio interface. In addition to radio access, the UE provides the subscriber with access to services and profile information. For example, the cell phone you carry in your pocket is the UE (user equipment) that interfaces with the cell phone towers that companies like Verizon, AT&T, and Sprint provide for you.
In UMTS, there are two Core Network (CN) configurations, the Circuit Switched CN (CS-CN) and Packet Switched CN (PS-CN). The CS-CN is based on the GSM Public Land Mobile Network (PLMN) and provides functions such as connectivity to the PSTN, circuit telephony services such as voice, and supplementary services such as call forwarding, call waiting, etc. The PS-CN is based on the GSM General Packet Radio System (GPRS) PLMN, which provides access to the Internet and other packet data services.
Both core networks connect to the UMTS Terrestrial Radio Access Network. The UTRAN has two options for its air interface operations. One option is Time Division Duplex (TDD), which makes use of a single 5 MHz carrier for communication between the UE and the UTRAN. The other option is the Frequency Division Duplex (FDD), which provides full duplex operation using 5 MHz of spectrum in each direction to and from the UTRAN. The following articles in the wireless communication systems section focus on the operations and design aspects of the UTRAN in FDD mode.
How do all of these components fit together? Check out the image below.
The UTRAN consists of one or more Radio Network Subsystems (RNS). An RNS consists of one Radio Network Controller (RNC) and several Nodes. The radio network controller and the nodes are two essential components of UTRAN. Apart from these two component types, UTRAN requires Operation Maintenance Centers (OMC) to perform Operation Administration and Maintenance (OA&M) functionality on the nodes and RNCs. Yes, I know the acronyms are getting a little out of hand, but it is essential to learn them if you want to speak the language! Engineers only speak to each other with acronyms and it is very very annoying indeed.
- Radio Network Controller: The Radio Network Controller is the master of UTRAN. It handles all aspects of radio resource management within the radio network subsystem. The UMTS chose to use it instead of the base station controller in order to stress the independence of UTRAN from the Core Networks (CN). It interfaces with the core network components such as the Mobile Switching Center (MSC) and Service GPRS Support Node (SGSN) to route signaling and traffic from the User Equipment (UE). The RNC also interfaces with other RNC’s within UTRAN to provide it with wide mobility (very important!)
- Nodes: Within this network, a node is the radio transmission and reception unit within UTRAN (remember above, I explained that the UE is your cell phone you carry in your pocket). It handles radio transmission and reception for multiple cells within a coverage area. So if you think about the amount of area that a certain cell tower (cell) can transmit its signal with the proper quality of service (QoS), you can get a mental grasp on multiple cells being in one coverage area, say if the towers are close together. The node implements CDMA-Specific functionality such as encoding, interleaving, spreading, scrambling & modulation. The nodes are also what used to be known as the Base Transceiver Subsystems (BTS) in second generation systems.
Hey! This article has a simple purpose: to teach you basic functionality and terminology related to modern UMTS-CDMA digital coding techniques. One swift read of this information-packed kick in the face will leave you trembling at the knees with curiosity for more. Well, maybe not. But the first step in the UMTS-CDMA system is to apply error correction techniques so that any errors at the receiver can be corrected. Various techniques can be employed to prevent errors at the transmitter. You have probably studied one or more of these in your day if you're an electrical or computer engineering student, professor, or professional. One of these techniques is the use of Forward Error Correction (FEC) codes, which are applied to the data before it is transmitted via the physical layer.
It is known that wireless is an inherently error-prone medium in which to operate our delicate signals. Therefore, many error correction techniques are employed. In the UMTS-CDMA systems, due to the large bandwidth available, a variety of coding techniques are employed. The following three error correcting methods come to mind:
Convolutional encoding provides the ability to correct errors at the receiver: the receiver removes errors from the signal by decoding the convolutional code. As a result, lower transmission power can be used; the additional errors this causes can be tolerated because the code recovers them. The convolutional encoder encodes input data bits into output symbols. At each clock cycle a data bit is shifted into the first register and the data bit in the last register is dropped. Bits are tapped at various positions and XORed together to produce the encoded output bits (a minimal sketch of this idea follows the list below). Convolutional coding is typically used for voice and low-data-rate applications. Here are the main points to keep in mind:
- Provides the ability to detect and correct errors at the receiver.
- 10^(-3) BER, typically used for voice and low data rates.
- Uses history of bits to recover from errors.
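As a rough illustration of the shift-register-and-XOR idea, here is a minimal rate-1/2 encoder sketch in Python. The constraint length of 3 and the (7, 5) octal generators are common textbook choices, not the parameters actually used in UMTS.
```python
def conv_encode(bits, taps=((1, 1, 1), (1, 0, 1))):
    """Toy rate-1/2 convolutional encoder (constraint length 3).

    Each input bit is shifted into a register; each output bit is the XOR of
    the register positions selected by one generator tap set.
    """
    state = [0] * (len(taps[0]) - 1)      # shift-register memory
    out = []
    for b in bits:
        window = [b] + state              # newest bit plus register history
        for g in taps:                    # one output bit per generator
            out.append(sum(x & t for x, t in zip(window, g)) % 2)
        state = window[:-1]               # shift: drop the oldest bit
    return out

print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```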
Turbo codes are a new class of error correction codes used in digital communication systems. Turbo codes have been shown to perform better for high-rate data services (which is what we crave) with stringent error rate requirements on the order of a 10^-6 Bit Error Rate (BER). The turbo encoder consists of two constituent convolutional encoders. Both constituent encoders code the same data: the first is fed the data in its original order, while the second uses a permuted form of the input data, the permuting being accomplished by an interleaver, which will be discussed in detail in the following post(s). Again, the main points:
- 10^(-6) BER, suitable for high data rates.
- Uses convolutional encoders in parallel to increase reliability.
- Increased delays but better error correction capabilities.
Block interleaving protects data against fading and bursty errors (a sudden burst in the amplitude of a received signal can saturate the receiver and corrupt a whole run of consecutive bits). This is accomplished by providing time diversity: the bits are separated in time before transmission over the air. Interleaving is typically used together with FEC codes, since FEC codes on their own are not well suited to handling these bursty errors (a minimal sketch follows the list below).
- Method to shuffle bits to prevent errors during deep fade.
- Provides time diversity.
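A minimal block interleaver sketch in Python (the 3 x 4 matrix size is arbitrary, chosen only to keep the example readable):
```python
def block_interleave(bits, rows, cols):
    """Write row by row into a rows x cols matrix, read out column by column.

    Adjacent input bits end up `rows` positions apart on the air interface,
    so a burst of channel errors is spread out before the FEC decoder sees it.
    Assumes len(bits) == rows * cols for simplicity.
    """
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Receiver-side inverse: write by column, read by row."""
    return block_interleave(bits, cols, rows)

data = list(range(12))                       # stand-in for coded bits
tx = block_interleave(data, rows=3, cols=4)
print(tx)                                    # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(block_deinterleave(tx, rows=3, cols=4) == data)   # True
```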
All of these techniques will be described in detail, separately, in the following three articles.
There are several different types of 3G wireless technologies that are defined and planned to be up and working today. There are also several that are on their way. These are successors of course, to the previous 2G technologies that dominated the airwaves.
CDMA2000 is the successor to IS-95 systems. CDMA2000 defines two different options for 3G technology, which differ in the amount of frequency spectrum used. The Spreading Rate 1 (SR1) option operates in a 1.25 MHz band and is known as a 1x system. Another proposal, referred to as 1xEV-DO (1x Evolution for Data Optimized), is a data-only solution that enables a bandwidth of 2 Mbps without any mechanism for voice. This is the kind of data rate we are all familiar with: the 3G 2 Mbps data connection.
The Universal Mobile Telecommunications System (UMTS)
The Universal Mobile Telecommunications System (UMTS) is a successor to GSM/GPRS systems. There are also two options for the UMTS networks. The Frequency Division Duplex (FDD) option uses spectrum bands which are paired together. For example, two different 5 MHz bands are used for uplink and downlink. The Time Division Duplex (TDD) option uses an unpaired band. In other words, the same 5 MHz band is shared between uplink and downlink for TDD.
Universal Wireless Consortium for IS-136 systems
The UWC-136 (Universal Wireless Consortium for IS-136 systems) was originally considered to be the evolution for IS-136 systems. However, the IS-136 system operators eventually decided to follow the path of CDMA2000 or UMTS.
Why did we need 3G Technology?
Back in the late 1990s, when most of the readers out there were still playing in the sandbox, the International Telecommunication Union (ITU) set the requirements for the next generation of wireless networks (that is why they are called Third Generation, or 3G). One of the many requirements is to reach peak data rates of at least 2 Mbps. This is more relevant to the downlink, since the majority of traffic comes from the server to the client in the Internet world.
To meet this new high speed requirement, the 2nd generation wireless networks came up with several different evolutions before eventually being replaced. The GSM evolution includes GPRS and EDGE, which provide packet data services and represent intermediate solutions until a UMTS Release 99 System is deployed. The 1xEV-DO is one possible evolution path from 1xRTT, and HSDPA is a Release 5 feature of UMTS.
So how did UMTS Evolve?
UMTS is the network of choice these days. Yes, UMTS is 3G…If you haven’t caught that yet. For those nerds out there that are curious, the evolution of UMTS has progressed over the years in the following fashion:
UMTS Release 99
- 2 Mbps theoretical peak packet data rates
- 384 kbps (practical)
UMTS Release 5
- HSDPA (14 Mbps downlink theoretical)
- IMS (IP Multimedia Subsystem for multimedia)
- UP UTRAN (for scalability and lower cost)
UMTS Release 6
- HSUPA (up to 5.76 Mbps uplink)
- MBMS (Multimedia Broadcast Multicast Service)
UMTS Release 7
- Multiple Input Multiple Output (MIMO) Antenna Systems
Magic is real, and it all comes from the Fourier Integral. But one doesn’t become a wizard without a little reading first – so, the purpose of this article is to explain the Fourier Integral theoretically and mathematically.
Before reading any further, it is important to first understand this: in mathematics, there is a rule that states that any periodic function of time may be “reconstructed” exactly from the summation of an infinite series of harmonic sine-waves. The generalized theory itself is referred to as a “Fourier Series.” For use with arbitrary electronic time-domain signals of period T0, it may be expressed as:
f(t) = a0 + Σ (n = 1 to ∞) [ an·cos(n·ω0·t) + bn·sin(n·ω0·t) ]
over the range t0 ≤ t ≤ t0 + T0, where:
a0 is the magnitude of the 0th harmonic
an represents the magnitude of the nth harmonic of cosine wave components
bn represents the magnitude of the nth harmonic of sine wave components
ω0 is the fundamental frequency
t is the variable that represents instances in time
n is the variable that represents the specific harmonic, and is always an integer
This monumental discovery was first announced on December 21, 1807 by historic gentleman Baron Jean-Baptiste-Joseph Fourier.
In order to go from the Fourier Integral to the Fourier Transform, it is necessary to express the previous Fourier Series as a series of ever-lasting exponential functions. Using an orthogonal basis set of signals described by e^(jnω0t), each of magnitude Dn, we now write the Fourier Series as:
f(t) = Σ (n = -∞ to ∞) Dn·e^(jnω0t)
where ω0 is 2π/T0.
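To see the exponential series in action, here is a minimal numerical sketch in Python; the square wave, its period, and the number of harmonics retained are arbitrary choices for illustration.
```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integration (avoids depending on np.trapz)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

T0 = 2.0                                   # assumed period, seconds
w0 = 2 * np.pi / T0                        # fundamental angular frequency
t = np.linspace(0, T0, 4001)
f = np.where(t < T0 / 2, 1.0, 0.0)         # one period of a square wave

def D(n):
    """Exponential Fourier series coefficient D_n, approximated numerically."""
    return trapz(f * np.exp(-1j * n * w0 * t), t) / T0

N = 25                                     # keep harmonics -N..N
recon = sum(D(n) * np.exp(1j * n * w0 * t) for n in range(-N, N + 1)).real
print(np.mean(np.abs(recon - f)))          # small, and it shrinks as N grows
                                           # (ripples remain near the jumps: Gibbs)
```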
What is the Fourier Integral?
The Fourier Integral, also referred to as Fourier Transform for electronic signals, is a mathematical method of turning any arbitrary function of time into a corresponding function of frequency. A signal, when transformed into a “function of frequency”, essentially becomes a function that expresses the relative magnitudes of each harmonic of a Fourier Series that would be summed to recreate the original time-domain signal. To see this, observe the following figures:
Figure 1. A Square Wave Pulse, in time
In order to rebuild a square wave with sines and cosines only, it is necessary to determine the magnitudes of each harmonic used in the Fourier Series, or rather, the Fourier Integral (for continuous time-domain signals). The relative magnitudes of these needed harmonics can be displayed graphically as a function of frequency (widely known as a signal’s frequency spectrum):
Figure 2. The Fourier Integral, aka Fourier Transform, of a square pulse is a Sinc function. The Sinc function is also known as the Frequency Spectrum of a Square Pulse.
Though the recreation of a signal using an infinite series of sines and cosines is impossible to achieve in the lab, one may get very close. Close enough that the most advanced lab equipment wouldn’t be able to calculate the error due to tolerance specifications. This allows engineers to use Fourier Analysis to work with time-domain signals, such as radio signals, television signals, satellite signals and just about any signal you can think of. By viewing a signal according to what frequency components are contained within it, electrical engineers may concern themselves with magnitude changes in frequency only, and may no longer worry about the signal’s magnitude-changes through time. Not only is this a very practical concept when working in the lab, it also greatly simplifies the mathematics behind signal conditioning in general. In fact, the entirety of the Communications industry owes its success to the Fourier Transform for not only antenna design, but a plethora of other applications.
The math behind the Fourier Transform
The derivations that follow have been summarized from Chapter 4 of the textbook “Signal Processing and Linear Systems” by B.P. Lathi, a fine book for students of Communication Systems.
We begin by considering some arbitrary, aperiodic time-domain signal f(t). An example of this kind of wave would be the output of a microphone after a man speaks a few words into it. For the actual signal generated by the changes in voltage as the man spoke, we can use Fourier analysis to describe it as a summation of exponential functions if we instead choose to describe a periodic signal f_T0(t) composed of the same voice signal repeating every T0 seconds. For an accurate description, it is important that T0 is long enough that the repeating copies of the arbitrary signal do not overlap. However, if we let T0 approach infinity, then this “periodic” signal is simply just the voice signal (or any general arbitrary function) in time we wanted to describe initially. Mathematically, we express:
f(t) = lim (T0 → ∞) f_T0(t),  with  f_T0(t) = Σ (n = -∞ to ∞) Dn·e^(jnω0t)
where f(t) is the time-domain function we wish to apply the Fourier Transform to (here, the arbitrary “voice” signal). For the above equation to be true, Dn is equal to:
Dn = (1/T0) ∫ (over one period) f_T0(t)·e^(-jnω0t) dt
It is important to note here that in practice, the shape (aka “envelope”) of a signal’s frequency spectrum is what is of main interest, and the magnitude of the components within the spectrum comes secondary. This is because amplifiers and other signal-conditioning circuits may be built to alter the magnitude in any way one wishes, and will not affect signal frequencies (so long as the circuits are LTI systems). Analyzing the envelope of a signal’s Fourier Transform allows one to use intuitive and mathematically-simplified approaches to signal-processing in general, which we shall see later. For this reason (and also because we are letting T0 approach infinity) let:
F(ω) = ∫ (-∞ to ∞) f(t)·e^(-jωt) dt
Notice that F(nω0) is simply Dn without the constant multiplier 1/T0, such that:
Dn = (1/T0)·F(nω0)
which implies that f_T0(t) may be written:
f_T0(t) = Σ (n = -∞ to ∞) [F(nω0)/T0]·e^(jnω0t)
Observation of this fact reveals insight: the shorter the period T0, the larger the magnitude of the coefficients. But, on the other hand, as T0 → ∞, the magnitude of every frequency component approaches 0 – which is why engineers choose to analyze spectrum envelopes. So, instead of visualizing absolute frequency magnitudes, instead consider that the frequency spectrum simply expresses the magnitude-density per unit of bandwidth, aka Hz. And since:
ω0 = 2π/T0, we can write f_T0(t) = (1/2π) Σ (n = -∞ to ∞) F(nω0)·e^(jnω0t)·ω0
In the limit as T0 → ∞ (so that ω0 becomes the infinitesimal dω, nω0 becomes the continuous variable ω, and the sum becomes an integral) we see:
f(t) = (1/2π) ∫ (-∞ to ∞) F(ω)·e^(jωt) dω
which is referred to as the Fourier Integral. F(ω) is referred to as the Fourier Transform of the original aperiodic function f(t), and we express this concept as:
F(ω) = F{f(t)} and f(t) = F⁻¹{F(ω)}, often summarized as f(t) ⇔ F(ω).
A Fourier transform example
This example is from the same textbook as the previous derivation, and can be found on page 239.
Find the Fourier Transform of f(t) = e^(-at)·u(t), where a is an arbitrary constant and u(t) is the unit step function.
To do this, we apply the Fourier Integral to the function as follows:
F(ω) = ∫ (-∞ to ∞) e^(-at)·u(t)·e^(-jωt) dt
Because of the u(t) factor, we only integrate from 0 to ∞. We simplify for:
F(ω) = ∫ (0 to ∞) e^(-(a+jω)t) dt = [-1/(a+jω)]·e^(-(a+jω)t), evaluated from t = 0 to t → ∞
Also, we know that |e^(-jωt)| = 1. So, for a > 0, the term e^(-(a+jω)t) goes to 0 as t → ∞, and we are left with:
F(ω) = 1/(a + jω)
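The result is easy to verify numerically. The sketch below approximates the Fourier integral of e^(-at)u(t) with a = 2 (an arbitrary choice) and compares it with 1/(a + jω):
```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integration (avoids depending on np.trapz)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

a = 2.0                                     # assumed positive constant
w = np.linspace(-10, 10, 101)               # frequencies to check, rad/s
t = np.linspace(0, 10, 100001)              # u(t) limits the integral to t >= 0

f = np.exp(-a * t)                          # f(t) = e^{-a t} for t >= 0
F_num = np.array([trapz(f * np.exp(-1j * wi * t), t) for wi in w])
F_exact = 1.0 / (a + 1j * w)

print(np.max(np.abs(F_num - F_exact)))      # very small: matches F(w) = 1/(a + jw)
```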
Useful Fourier Transform Properties
The relationship between f(t) and F(ω) exhibits a beautiful symmetry that helps one to develop an intuitive approach to signal analysis. Among all the concepts within electrical engineering, the properties relating a time-domain function and its Fourier transform are among the most important to understand. Observe the following properties, which apply for any transform pair f(t) ⇔ F(ω):
1.) Fourier Transform: F(ω) = ∫ (-∞ to ∞) f(t)·e^(-jωt) dt. Gives an equation to solve for the frequency-domain function from f(t).
2.) Inverse Fourier Transform: f(t) = (1/2π) ∫ (-∞ to ∞) F(ω)·e^(jωt) dω. Gives an equation to solve for the time-domain function from F(ω).
3.) Symmetry Property: if f(t) ⇔ F(ω), then F(t) ⇔ 2π·f(-ω). For a given pair of a time-domain signal and its Fourier transform, we note that the time-domain envelope is generally different in shape from the frequency-domain envelope. However, switching the shapes of the two functions with respect to domain (time or frequency) results in the same envelopes, except with different scaling coefficients. For example, a square pulse through time has a frequency spectrum described by a sinc function, and a sinc function through time results in a frequency spectrum described by a square pulse.
4.) Scaling Property: f(at) ⇔ (1/|a|)·F(ω/a). Time-scaling a time-domain signal by a constant a results in a magnitude-and-frequency scaling of the signal's corresponding frequency spectrum. It also signifies that the longer a signal exists through time, the narrower the bandwidth (the collection of frequency components needed to rebuild the signal) of its frequency spectrum.
5.) Time-Shifting Property: f(t - t0) ⇔ F(ω)·e^(-jωt0). Time-shifting (delaying or advancing) a time-domain signal results in a phase shift in each of the ever-lasting frequency components needed to rebuild it. The magnitude spectrum is otherwise unchanged; only the phase of each component is shifted (a numerical check of this property is sketched below).
6.) Frequency-Shifting Property: f(t)·e^(jω0t) ⇔ F(ω - ω0). Multiplying a time-domain signal by a sinusoidal signal of some frequency ω0, a method which begets amplitude and frequency modulation (AM/FM), results in the frequency spectrum remaining unchanged except for a shift of each individual frequency component by ω0.
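Properties like these are easy to check numerically. The sketch below verifies the time-shifting property for the e^(-at)u(t) example above, using arbitrary values a = 2 and t0 = 1.5:
```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integration (avoids depending on np.trapz)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

a, t0 = 2.0, 1.5                            # assumed decay constant and delay
w = np.linspace(-10, 10, 101)
t = np.linspace(0, 20, 200001)

f = np.exp(-a * t)                                         # f(t) = e^{-a t} u(t)
f_delayed = np.where(t >= t0, np.exp(-a * (t - t0)), 0.0)  # f(t - t0)

def fourier(x):
    """Numerical Fourier integral of a sampled signal x(t)."""
    return np.array([trapz(x * np.exp(-1j * wi * t), t) for wi in w])

lhs = fourier(f_delayed)                    # transform of the delayed signal
rhs = fourier(f) * np.exp(-1j * w * t0)     # time-shifting property prediction
print(np.max(np.abs(lhs - rhs)))            # small (limited only by the numerical
                                            # integration): only the phase changed
```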
Lastly, these tables (table 1, table 2) can greatly simplify Fourier analysis when used in signal processing.
Cellular systems have come a long way since their introduction in the 1980s. The evolution progressed from First Generation (1G) systems to Second Generation (2G) systems. Now, Third Generation (3G) systems are being deployed.
1G systems introduced the cellular concept, in which multiple antenna sites are used to serve an area. The coverage of a single antenna site is called a cell. A cell can serve a certain number of users, and higher-system capacity can be achieved by creating more cells with smaller coverage areas. One distinguishing factor of 1G systems is that they make use of analog radio transmissions, so user information, such as voice, is never digitized. As such, they are best suited for voice communications, since data communications can be cumbersome.
The migration of 1G analog technologies toward 2G technologies began in the late 1980s and early 1990s. The primary motivation was increased system capacity. This was achieved by using more efficient digital radio techniques that enabled the transmission of digitized compressed speech signals. These digital radio techniques also supported data services with data rates as high as 14,400 bits per second (14.4 kbps) in some systems. 2G data communication is typically done using circuit-switched techniques, which are not very efficient for sending packet data such as that sent on the Internet. This inefficiency makes the use of wireless data more expensive for the end user.
The next step in the evolution is from 2G to 3G, which started in the year 2000. The new key feature of 3G systems is the support of high-speed data services with data rates as high as 2 million bits per second (2 Mbps). Data can be transferred using packet-switching techniques rather than the circuit-switching approach. Therefore, it is more efficient and less expensive. This opens up the possibility of cost-effective Internet access, access to corporate intranets, and a host of multimedia services.
If you want to read more about the evolution of wireless networks and WCDMA radio networks in general, please stay tuned for the next several editions where I will go into details.
Upcoming topics include, but are not limited to:
- Physical layer functions
- W-CDMA Channels
- Basic call setups
- Data session setups
- Service reconfigurations
- UTRAN mobility management
- Inter-system procedures
- RF design & analysis of UMTS radio networks
- The evolution of UMTS
Ignimbrite (from the Latin for 'fire cloud rock') is a deposit of rock formed by a pyroclastic density current, or pyroclastic flow, and emplaced from a hot suspension of particles and gases.
Sometimes impact melt is mistaken for ignimbrite. And the reason for that common mistake is that at the moment of formation, and emplacement, they are both deposited in a cloud of superheated fragments, and debris. And this results in a similar brecciated, or broken, internal structure. I don’t want to digress into a discussion about volcanoes. But it is important to understand how these kinds of materials move while in a fluid state.
From How Volcanoes Work
The extraordinary velocity of a pyroclastic flow is partly attributed to its fluidization. A moving pyroclastic flow has properties more like those of a liquid than a mass of solid fragments. It’s mobility comes from the disappearance of inter-particle friction. A fluidized flow is best described as a dispersion of large fragments in a medium of fluidized fine fragments. A constant stream of hot, expanding gases keeps the smallest of the fragments (ash and lapilli size particles) in constant suspension. This solid-gas mixture can then support larger fragments that float in the matrix.
Under the present paradigm, it's assumed that only terrestrial volcanism produces ignimbrites, and the word is commonly considered to be synonymous with volcanic tuff produced in an explosive eruption, like the eruption of Mt Vesuvius in 79 AD that destroyed the cities of Pompeii and Herculaneum.
That eruption was described in great detail by Pliny the Younger, whose uncle, Pliny the Elder, was killed by it while trying to rescue some friends. Hence we get the term 'Plinian eruption'.
In such an eruption, all of the rock erupts explosively from below the surface and produces a huge ash column that can rise miles high before collapsing. The rock, as well as the heat and pressure it brings with it, share the same subterranean source. The pressure begins to dissipate very quickly with any distance from the vent, and it does not provide a motive force once the volcanic materials are on the ground. Once an ash cloud has collapsed and the material falls to Earth, the only motive forces left to provide material movement are gravity and momentum.
Such a flow is also known as a pyroclastic density current. In order to be a fast-moving density current, and not a slow and gooey flow of lava, the particles and fragments of melted and semi-melted stone need to be suspended and carried along in the hot gases of the fire cloud. Once they settle out, they quickly lose their momentum and come to rest. So any given piece of ignimbrite can be thought of as a signature of sudden, explosive fluid motion, and the length of time it is in motion is never more than a few seconds.
For the purpose of understanding the patterns of movement that become frozen into the stone at the moment of its emplacement, gravity is the motive force. And it can be thought of as being in front of the material, pulling the density current down slope. Its motion can build up considerable momentum. But continuous atmospheric pressure behind the flow is not a driving force in the formation, and emplacement of a volcanogenic, gravity-driven, density current.
Under the present geological paradigm, terrestrial volcanism is thought to be the only possible source of an ignimbrite producing pyroclastic density current. And gravity is thought to be the only possible motive force to provide material movement.
But in fact, there is another far more violent way to produce a pyroclastic density current. One that is wind-driven, rather than gravity-driven. The source of the material is the ablated surface itself, not a subterranean magma chamber. And the heat, and pressure, to melt it, and get it all moving is not terrestrial volcanism either. This material is produced by a very large, geo-ablative example of the airburst that exploded in 1908 over a remote place in Siberia called Tunguska.
More than 1000 times as powerful as the atomic bomb dropped on Hiroshima, the Tunguska blast was the largest impact event in recorded history. Yet it didn't make a crater. In fact, it didn't do anything the standard model of an impact event might predict. The so-called "full suite" of impact markers is not to be found there. And if there had been no witnesses, the Earth sciences would be in complete denial that the violence that day came from above.
The explosion flattened an estimated 80 million trees covering 2,150 square kilometers, yet its fireball never reached the ground. Only its detonation shockwave did. There is no reason to assume that the Tunguska airburst was unique. There is also no reason to think it was a particularly large example of an airburst event.
Mark Boslough, a scientist at Sandia National Labs, has this to say about airbursts.
“Ongoing simulations of low-altitude airbursts from hypervelocity asteroid impacts have led to a re-evaluation of the impact hazard that accounts for the enhanced damage potential relative to the standard point-source approximations. Computational models demonstrate that the altitude of maximum energy deposition is not a good estimate of the equivalent height of a point explosion, because the center of mass of an exploding projectile maintains a significant fraction of its initial momentum and is transported downward in the form of a high-temperature jet of expanding gas. This “fireball” descends to a depth well beneath the burst altitude before its velocity becomes subsonic. The time scale of this descent is similar to the time scale of the explosion itself, so the jet simultaneously couples both its translational and its radial kinetic energy to the atmosphere. Because of this downward flow, larger blast waves and stronger thermal radiation pulses are experienced at the surface than would be predicted for a nuclear explosion of the same yield at the same burst height. For impacts with a kinetic energy below some threshold value, the hot jet of vaporized projectile loses its momentum before it can make contact with the Earth’s surface. The 1908 Tunguska explosion is the largest observed example of this first type of airburst. For impacts above the threshold, the fireball descends all the way to the ground, where it expands radially, driving supersonic winds and radiating thermal energy at temperatures that can melt silicate surface materials. The Libyan Desert Glass event, 29 million years ago, may be an example of this second, larger, and more destructive type of airburst. The kinetic energy threshold that demarcates these two airburst types depends on asteroid velocity, density, strength, and impact angle.”
~Dr. Mark Boslough Sandia National Laboratory
Dr Boslough has produced a supercomputer-generated simulation of the airburst of a 120-meter stony asteroid, an example of the larger, ablative type of airburst, and it is a must see.
In it, we see the exploding object detonating high in the atmosphere, and becoming a supersonic down draft of thermal impact plasma hotter than the surface of the sun. But watch the sequence closely. And pay particular attention to the post impact updraft at the center of the flow. And to the directions of flow of the airburst vortex at the surface, as the impact plume develops at the center of the vortex. You might want to replay it a few times.
The Tunguska fireball didn’t reach the ground. But Dr Boslough has given us a good idea of what happens if a bigger object produces a fireball that does.
The simulation video is available at http://www.youtube.com/embed/mCCKW_STVqE
“Simulations suggest strong coupling of thermal radiation to the ground, and efficient ablation of the resulting melt by the high-velocity shear flow.”
–Mark Boslough, Sandia Labs.
One of the best kept secrets in science today is just how good the satellite imagery available through Google Earth of the American southwest, and central Mexico has gotten in the past decade. The biggest leaps in quality have happened in the past two years. The imagery has gotten good enough to assign a directional vector to almost every pixel. And to read the directions of the fluid emplacement motions of geo-ablative flows of airburst melt like reading a dance chart.
The area depicted in the slideshow below indicates that Dr Boslough is exactly right about the geo-ablative properties of a very large airburst event. It also shows that once you figure out what to look for instead of craters, the planetary scarring of such events isn't hard to find at all. In fact, ablative airburst geomorphology in pristine condition seems to be rather common in North America.
Hmmm…. There seems to be a new kind of monster in the closet.
It’s called a Geo-ablative Airburst. In that simple sentence, Dr Boslough has, in effect, postulated the existence of a different kind of pyroclastic density current. The target surface becomes the source of the pyroclastic materials, instead of a volcanic system. And the heat, and pressure to melt, and move, it comes from above in the form of a Geo-Ablative Airburst. As you can see from the slideshow above, the planetary scarring of such events is not hard to spot.
All material motion across the surface of the Earth requires a motive force.
During emplacement, a volcanogenic pyroclastic flow relies on gravity for its motive force, and on the explosive force of a powerful eruption to get everything into a superheated atmospheric suspension. Gravity provides the motive force downslope for a volcanic pyroclastic flow, so its patterns of movement and flow will be those of a fluid seeking the lowest points in the terrain.
But geo-ablative melt is the product of a different kind of density current, driven by atmospheric pressure in a large thermal airburst impact event. The particles and fragments suspended in the flow didn't erupt from the ground; they were ablated from the surface itself by the heat and pressure of an airburst. The fluid motions of a pyroclastic flow of geo-ablative melt can be characterized as wind-driven, so they will be seen to have moved from areas of highest pressure to areas of lower pressure, and their patterns of movement and flow will be reminiscent of the debris-laden froth and foam on a storm-tossed beach.
The structure of a fragment of Geo-Ablative melt might be visually indistinguishable from that of ordinary volcanic tuff. But there is a simple way to identify formations of airburst melt. And to scope out good candidate locations for field studies. The different motive forces involved during emplacement, one gravity-driven, the other wind-driven, result in distinct, and easily recognizable patterns of movement, and flow.
So the final test would be in field work, and in detailed chemical analysis of suspected airburst melt.
A very strong case can be made that a region including almost all of north central Mexico and west Texas is, in fact, a single, multiple geo-ablative airburst impact zone, hereafter referred to as the 'Mexican Impact Zone', or MIZ.
I have invested well over 4,000 hours studying satellite imagery of the blast-effected materials of that impact zone.
By the term 'blast-effected materials' I mean any material that owes its chemistry, condition, or position to an explosive event. Whether volcanic tuff or airburst melt, all ignimbrites are blast-effected materials. From a forensic perspective, if you want to understand an explosive event after the fact, you study the condition, position, and chemistry of the blast-effected materials. And from that forensic perspective, the blast-effected materials of the Mexican impact zone describe the single most violent natural disaster on Earth since the impact event that caused the demise of the dinosaurs 65 million years ago.
At first, I couldn't imagine how evidence of the violence of a geologically recent impact event that must have affected the entire continent, and probably the climate of the entire world, could have been missed.
And then I found a paper by R.B. Firestone et al titled Evidence for an extraterrestrial impact 12,900 years ago that contributed to the megafaunal extinctions and the Younger Dryas cooling
A compelling case can be made that The Taurid Progenitor hit the Earth as a stream of tens of thousands of fragments like the Tunguska object. And accompanied by clouds of smaller fragments, and particles, down to the size of dust grains. It probably came out of a daytime sky. And it hit at a low angle of about thirty degrees, coming from the southeast. And at a velocity of about 30 kilometers per second. Only the very first fragments fell into cold atmosphere. The rest fell into already superheated atmosphere, and added to the heat and pressure. The down blasts were almost continuous until the Earth finally moved out of the orbital path of the fragmented comet’s debris stream. And the last of the fragments fell.
The process probably lasted a little more than an hour. And the resulting heat, and pressure, of the intense impact showers ablated vast areas of the surface terrains of North America like wax under a high pressure blowtorch.
The motive force for the resulting density currents of geo-ablative melt was atmospheric pressure driving the flow from behind, like the wind drives the debris laden foam, and froth, on a storm tossed beach. And that driving wind was a more than hurricane force, gusting to supersonic, hyper-thermal blast wind, of rapidly condensing thermal impact plasma. Similar, in a vastly scaled down way, to the aerosol spray of melted droplets of iron, and slag, produced by a cutting torch in a steel fabrication shop.
From a forensic blast analysis point of view, we can say that all ‘ignimbrites’, by their very nature, are a ‘Blast effected material’. It all goes to fluid mechanics, and studying how they moved, and flowed, at the time of their emplacement. Thanks to the completely different energies involved, the different modes of ignimbrite formation, and emplacement, each produces a different kind of density current. And they have distinct, and easily recognizable patterns of movement, and flow. Reading those patterns of movement, and flow is as easy as reading a dance chart. Or following spilled paint back to the can.
I would have thought that the geology of the North American continent was all very well studied. But when you start looking for any detailed research on the emplacement of the pristine sheet ignimbrites in north central Mexico and west Texas, and on the exact nature of the explosive events they were formed in, you'll quickly find that, except for a few prospectors looking for money rocks, they are almost completely unstudied. There is much untested speculation as to their origin, but they are almost completely unmapped, and no formal study of their fluid emplacement motions has ever been done.
Those hundreds of thousands of cubic miles of pristine ignimbrites are very clearly the blast effected materials of an explosive natural disaster far more violent than anything in many millions of years. And arguably one of the most violent events in the history of this continent. You could completely empty the Yellowstone super caldera in Wyoming, and you would have only a very small fraction of the volume of perfectly pristine ignimbrites in the Chihuahuan desert of north central Mexico. What little mention you read of them is pure speculation as to their origin. And there hasn’t been a single formal study of their emplacement.
They’re not even mapped!
After digging a little in the old literature to see what was known about them, as opposed to what was assumed, I came to the realization that, in fact, very little is known about them at all. Almost all of the geophysical research in the region is funded by mining companies prospecting for mineral resources. Basic geophysical research is something for universities to worry about I guess. But, as a result, except for about 100 kilometers, or so, along the Chihuahua City – El Paso highway, they are almost completely unmapped. And no one can provide a single map showing the location of one of those supposed rifts. That would’ve had to open, and close without a trace, like magic. Much less one that can be reconciled with the patterns of movement, and flow, at the time of emplacement, which are clearly, and legibly visible in high altitude aerial images.
There is a more than 50,000 square kilometer mega-flood of high-speed, random-colliding, inter-flowing rivers of melted stone in central Mexico, and up into west Texas, that is as pristine as if it had just cooled last year. When you find these flows described on a map, they are defined as volcanic tuff. But of more than 350,000 cubic miles, less than 15% can be attributed to a volcano.
Waiting a lifetime for those Geologists on the ground was not an option. And, in frustration because I couldn’t get my hands on any decent research papers on the subject, I set out to work out the patterns of movement, and flow, for myself to get a better understanding of the explosive events they formed in.
That they were emplaced as a fluidized density current of blast melted stone is an empirical fact. The structure of a rock formed, and emplaced in a pyroclastic density current is not hard to recognize. But this is where the standard model for a density current gets into trouble.
According to that model, the only motive force for a density current is gravity. So, in the world according to the uniformitarian model, a slope must exist for the material to flow down. That model also assumes that a volcanic vent must exist at the top of that slope. And terrestrial volcanism is thought to be the only source of melted stone, or of the violently explosive heat and pressure required to get it into atmospheric suspension for a while. So theorists of the past had to come up with a plausible way to get so much blast-melted stone up in the air into a pyroclastic density current at the same time.
Until the world was given a wake-up call by watching the fragments of Shoemaker-Levy 9 slam into Jupiter, no one could imagine that such destruction could come from above.
Their answer, and the model that most geologists have come to agree with, is a theoretical kind of super-giant volcanic eruption called an "ignimbrite flare-up", in which fault-grabens are thought to have transformed into vast rifts that opened up in the middle of the continent, spewed a few thousand cubic miles of ignimbrites, and then closed again without a trace. But the mantle physics required for the giant, trap-door rift vents they propose just doesn't work in the real world.
And to date, there is not a single shred of tomographic, seismic, aeromagnetic, ground-penetrating radar, or any other evidence that confirms the existence of such a rifting vent. There has never been a model for the mantle physics required for a fault-graben to suddenly turn into a rifting vent that opens and closes without a trace. Nor has any data ever even hinted at the location of a magma chamber under Mexico big enough to account for a few hundred thousand cubic miles of eruptive material.
The sheet ignimbrites of the Chihuahuan Desert, extending all the way up into west Texas and New Mexico, are on top of everything else in perfect condition. They are the pristine capstone of the geologic column, and with the exception of the occasional sagebrush here and there, they did not look much different when they were still hot and smoking.
If you are looking for the sources of such mystery materials and you can't find a volcanic vent, being able to see how the material was flowing, and in which direction, just before it came to rest can reveal the answer. But don't expect the truth you read in those rocks to agree with the standard uniformitarian model. It doesn't.
If you want to understand an explosive event after the fact, you study the condition, and patterns of movement, in the blast effected materials.
The sudden, unimaginably violent events of their formation can be understood to an amazing, and extraordinary, level of detail if one simply studies how they moved during emplacement. We need only to get enough altitude to see the actual patterns of movement, and flow, to determine the true points of origin, of a sheet of geo-ablative melt. So the science of Fluid Mechanics has the trump card.
It doesn't get any easier than when the materials are in pristine condition and exposed on the surface. And in a high-resolution satellite image, the motions of the ignimbrites in north central Mexico, and those in west Texas, are as easy to read as the patterns of movement and flow in splashes of mud or spilled paint. You can look at a flow and easily see which direction it was moving at any particular point.
The movements of an unconstrained fluid are defined by the forces moving it.
For our purposes we'll need to refine that profoundly simple observation a little more and say that there are two fundamental forces to consider: gravity and pressure.
Take a droplet of paint, put it on a level surface, and then blow it around with a straw. That's a pressure-driven fluid. Its characteristic patterns of movement and flow are the result of the motive force being behind the flow and pushing it. It piles up at the low-pressure areas on the periphery, where the pressure is no longer strong enough to move it.
Next, tip the surface a bit and let the paint flow downhill. That’ll be a gravity attracted fluid. Its patterns of movement, and flow, are consistent with the motive force being in front of the flow, and pulling it down hill. It doesn’t work on level ground.
The lines of flow in an unconstrained, and wind-driven, fluid will always be away from the driving force. Even if that fluid is melted stone being driven up hill. And when those lines of flow are frozen into a pyroclastic river of melted stone they become a permanent, reliable record of the nature of the forces that melted, and moved it.
Cover a surface with about an inch of wet, slightly sticky, grainy, mud the consistency of thin, wet, concrete. Hit it with short bursts of compressed air coming down from above to simulate the patterns of movement, and flow, in a pressure driven flow of geo-ablative melt.
A fun variation, if you want to involve children, is to use runny oatmeal spread out on a cookie sheet. If you have the kids surround the cookie sheet and blow the oatmeal around with short, random puffs of air through a straw, you get the same flow patterns.
The point of the second version of that experiment was two fold. I wanted to see some more examples of wind swept, pressure driven flows. And I wanted to establish that my approach to reading the material movements of wind driven fluids could be taught to others.
We are all descended from the most successful hunter-gatherers of all time. Those who couldn’t see which way their dinner went, didn’t eat. As a result, good pattern recognition skills are an innate human trait. All of the kids easily learned to recognize, and even to direct, and control, the flow patterns of a pressure driven fluid. I was quickly able to determine that it’s difficult to do focused, and objective, observation with oatmeal in your ear. And that, at the heart of this hypothesis, is the fact that this really is so simple that a child could learn to read the motions of the flowing, wind-swept, rivers of melt.
If it takes months or years to map a few miles along a highway from the ground, it's time to bring the work into the twenty-first century, use the satellites our tax money paid for, and do it from space, or it'll never be finished. Thanks to NASA, Landsat, and Google, anyone can produce their own image map of any given area on the continent, in full-spectrum color, with resolution down to about 1 meter per pixel if need be. Computer memory is the only constraint on size.
I have a couple I’ve had printed professionally that cover a whole wall. If you look at a specific location anywhere in those flows, it is very easy to see which way it was flowing at any given point. And backtrack it to its source location. A sheet of clear plastic, and a handful of markers, and you have a large area, hi-resolution flow map. Complete with little directional arrows. Fluid motions that would have taken a lifetime to determine, and map, by traditional ground based surveying techniques can now be read at a glance by almost anyone with a good PC, and a copy of Google Earth.
Either that material is the geologically recent result of the largest super eruption since primates first came down out of the trees. And most of central Mexico is one giant, explosive, caldera that no one ever noticed as such. And all of the missing vents will be found… someday. (And never mind that all of the emplacement motions of the simultaneous, inter-flowing, density currents of melted stone describe a sudden, virtually instantaneous, event.) Or all of the melt is the result of the most violent ET encounter in 65 million years. And it, and its ground effects, are different from anything ever studied before.
Both are pretty extraordinary possibilities. The visual evidence is more supportive of the latter though. Because, when viewed from high altitude, it is profoundly obvious the heat and pressure to melt, and move, all that material came from above. But, no matter what the source of the heat, and pressure was, the more than 40,000 sq km simultaneously random-colliding, and interflowing, mega-flood of blast-generated ignimbrites, at the very pinnacle of the stratigraphic column describes a geologically recent explosive event that was arguably the single most violent natural disaster in all of human existence. Yet, with the exception of a few prospectors looking for money rocks, it’s almost completely unstudied.
The blast-affected materials of North Central Mexico describe a highly fragmented, and loosely grouped, cluster about 500 km wide, like a giant, flying gravel pile. The thing would have looked like a sister to the images of the fragments of Comet LINEAR seen here. It came in at very high velocity, and at a low angle of approach from the southeast. And almost all of the fragments exploded above ground like Tunguska. Except that, in Mexico, only the very first of the fragments on the leading edge fell into cold atmosphere. The rest fell into already superheated impact plasma, and just added to the heat.
The primary impact zone is a 500 by 1300 km oval that covers most of north central Mexico. And extends well up into west Texas, and New Mexico. The other impact zone is a little smaller in the great lakes region. And it extends from northern Minnesota, well up into Canada.
The patterns of movement, and flow, are dramatically obvious in high resolution, high altitude aerial photos. And a careful study of the fluid motions of the sheet ignimbrites of north central Mexico, and west Texas, quickly reveals, in exquisite detail, a thermal explosive process of simultaneous ablation and emplacement which is completely inconceivable from the strictly ground-based, standard-theory viewpoint of the past: The melt was pressure driven from behind, like frothing whitewater waves on a stormy beach, by atmospheric forces alone. It did not come out of its source locations in an eruptive event. It was flash melted, and blown off of its source locations. And ‘Fire cloud rock’ is a bit of an understatement. But it’s still an excellent descriptive.
Much of the literature assumes multiple eruptive events for the ignimbrites of the Chihuahuan Desert. But again high altitude imagery shouts the truth. No matter what the source of heat, and pressure that melted, and moved them; whether volcanogenic, or the result of a thermal ablative airburst event, all ignimbrites are a fluid in motion at the time of their emplacement. And they solidify very quickly upon coming to rest. We can know that any given fragment of ignimbrite was only in a fluid state for a few violent seconds at most. So if two flows of melted stone are representative of two separate events, even a separation of only a few seconds, then one of them will be seen to be over-topping the other, already solidified one. But if they were both melted, and flowing at the same time, the interaction between the two will be a fluid convergence. i.e. They will inter-finger. Or they will come together like two rivers flowing into one.
Everywhere, in all of the tens of thousands of square kilometers of random, colliding flows of pristine surface ignimbrites, you’ll note that, without exception, the patterns of movement in all of the material are consistent with very fast, and sudden, motion like ejecta.
Every interaction between colliding flows can be described as a fluid convergence, such as two rivers flowing into one. There is not one, single, over-topping flow. The inescapable conclusion is that, contrary to the old literature, all of the pristine, wind-driven ignimbrites in the Chihuahuan Desert were in rapid, fluid motion at the very same time.
All of that blast melted stone describes an intricate, almost infinite, dance of violent fluid motions. And all of those turbulent, inter-flowing, motions describe the very same moment.
A gravity attracted fluid will always flow to the lowest elevation. The motive force is in front of it, pulling it down slope. But an unconfined fluid which is driven across a fairly level surface by a wind blowing over it from behind will behave differently. It moves from the area of highest pressure to the lowest. With distance from its pressure source, its motion slows, and it piles up at the areas of lowest pressure. In the case of ignimbrites born of a thermal ablative airburst event, identifying the source locations for the materials is easy. You simply backtrack the flows to the bare places behind them where there are no ignimbrites.
At the source locations, the orogenies, and other blasted landforms, of the region have been assumed to be very ancient because they appear to be heavily eroded. The trouble with that standard assumption is a serious lack of alluvium covering the ignimbrites, which are in almost perfect condition except for the occasional sagebrush, or cactus, growing in the cracks. But no one could have imagined that these landforms are in fact heavily ablated by a thermal ablative process that produced, and emplaced, the ignimbrites in seconds. The landforms of north central Mexico, and West Texas, aren’t heavily eroded. They are heavily ablated.
And there’s the other problem with the standard model. The estimated age of the terrains of central Mexico, and much of the American Southwest is based on assumptions of slow, and gradual erosion. After all, it should take millions of years to wear those landforms down to a nubbin like that… Or should it?
By the standard model’s thinking, the ignimbrite “flare-up” happened somewhere around 25 million years ago, in the mid-Tertiary. And the landforms arising from them have been eroding, and weathering, since that time. But that scenario requires the ignimbrites to be under the alluvium that washed down over the eons as the forces of weather eroded those landforms so much.
And after so much time, where exposed on the surface, the ignimbrites themselves should be every bit as weathered as the landforms rising among them. Instead, they are on the surface, in pristine condition. And the alluvium they should be buried under does not exist. If we want to say the ignimbrites were emplaced 25 million years ago, and that the landforms in this image have been eroding slowly for all the time since, then we are going to need to account for the missing alluvium. And we are going to have to give a plausible explanation of how the ignimbrites have survived unchanged for so long on the surface.
Every square inch of the area within the oval is a blast-affected material of an explosive event.
Below is a small part of the Mexican impact zone. A close study of the motions of the ignimbrites gives one an understanding of why I have called them a ‘Rosetta Stone’. The mountains, and orogenies, among them have been described as heavily eroded. And the perceived amount of erosion is the basis for their estimated ancient age. The problem with that estimate is the almost complete lack of alluvial byproducts of hydrologic decomposition.
These next three images are all found in the image above. The Lat, and Lon, are in the info bar. I hope you take a closer look for yourself with Google Earth. The image quality is as good as it gets. And you can zoom in and study even the tiniest details.
All the way around the mountain above, the sheets of melt were blown away from it like the froth on a storm-tossed beach. And with all the speed of an ejecta curtain. But look closely along the left side about a quarter of the way in, and you can see where two of the comet’s fragments have detonated at ground level into the still moving melt. | http://craterhunter.wordpress.com/notes-on-ignimbrite-emplacement/ | 13 |
103 | As noted in the first section of this chapter there are two kinds of integrals and to this point we’ve looked at indefinite integrals. It is now time to start thinking about the second kind of integral: Definite Integrals.
However, before we do that we’re going to take a look at the Area
Problem. The area problem is to definite
integrals what the tangent and rate of change problems are to derivatives.
The area problem will give us one of the interpretations of
a definite integral and it will lead us to the definition of the definite integral.
To start off we are going to assume that we’ve got a
function that is positive on some interval [a,b].
What we want to do is determine the area of the region between the function
and the x-axis.
It’s probably easiest to see how we do this with an
example. So let’s determine the area between the graph of some positive function f(x) and the x-axis on [0,2].
In other words, we want to determine the area of the shaded region
Now, at this point, we can’t do this exactly. However, we can estimate the area. We will estimate the area by dividing up the
interval into n subintervals each of width Δx.
Then in each interval we can form a rectangle whose height
is given by the function value at a specific point in the interval. We can then find the area of each of these
rectangles, add them up and this will be an estimate of the area.
It’s probably easier to see this with a sketch of the
situation. So, let’s divide up the interval
into 4 subintervals and use the function value at the right endpoint of each
interval to define the height of the rectangle.
Note that by choosing the height as we did each of the
rectangles will over estimate the area since each rectangle takes in more area
than the graph each time. Now let’s
estimate the area. First, the width of each of the rectangles is 2/4 = 1/2, since the interval [0,2] has been split into 4 equal pieces. The height of each rectangle is determined by the function value at the right endpoint and so the height of each rectangle is nothing more than the function value at the right endpoint. The estimated area is therefore

A ≈ (1/2) f(1/2) + (1/2) f(1) + (1/2) f(3/2) + (1/2) f(2)
Of course taking the rectangle heights to be the function
value at the right endpoint is not our only option. We could have taken the rectangle heights to
be the function value at the left endpoint.
Using the left endpoints as the heights of the rectangles will give the
following graph and estimated area.
In this case we can see that the estimation will be an
underestimation since each rectangle misses some of the area each time.
There is one more common point for getting the heights of
the rectangles that is often more accurate.
Instead of using the right or left endpoints of each subinterval we could use the function value at the midpoint of each subinterval as the height of each rectangle. Here is the graph for this case.
So, it looks like each rectangle will both over and under estimate the area. This means that the approximation this time should be much better than the previous two choices of points. Here is the estimation for this case.
We’ve now got three estimates. For comparison’s sake the exact area is
So, both the right and left endpoint estimation did not do
all that great of a job at the estimation.
The midpoint estimation however did quite well.
Be careful to not draw any conclusion about how choosing
each of the points will affect our estimation.
In this case, because we are working with an increasing function
choosing the right endpoints will overestimate and choosing left endpoints will underestimate.
If we were to work with a decreasing function we would get
the opposite results. For decreasing
functions the right endpoints will underestimate and the left endpoints will overestimate.
Also, if we had a function that both increased and decreased
in the interval we would, in all likelihood, not even be able to determine if
we would get an overestimation or underestimation.
Now, let’s suppose that we want a better estimation, because
none of the estimations above really did all that great of a job at estimating
the area. We could try to find a
different point to use for the height of each rectangle but that would be cumbersome
and there wouldn’t be any guarantee that the estimation would in fact be
better. Also, we would like a method for
getting better approximations that would work for any function we would choose to work with, and if we just pick new points that may not work for other functions.
The easiest way to get a better approximation is to take
more rectangles (i.e. increase n).
Let’s double the number of rectangles that we used and see what
happens. Here are the graphs showing the
eight rectangles and the estimations for each of the three choices for
rectangle heights that we used above.
Here are the area estimations for each of these cases.
So, increasing the number of rectangles did improve the
accuracy of the estimation as we’d guessed that it would.
Let’s work a slightly more complicated example.
Example 1 Estimate the area between a given function and the x-axis over a given interval, using n subintervals and all three cases above for the heights of each rectangle.
First, let’s get the graph to make sure that the function is positive.
So, the graph is positive and the width of each
subinterval will be,
This means that the endpoints of the subintervals are,
Let’s first look at using the right endpoints for the function height. Here is the graph for this case.
Notice that, unlike the first area we looked at, choosing the right endpoints here will both over and underestimate the area depending on where we are on the curve. This will often be the case with a more general curve than the one we initially looked at. The area estimation using the right endpoints of each interval for the rectangle height is,
Now let’s take a look at left endpoints for the function
height. Here is the graph.
The area estimation using the left endpoints of each
interval for the rectangle height is,
Finally, let’s take a look at the midpoints for the heights of each rectangle. Here is the graph.
The area estimation using the midpoints is then,
For comparison purposes the exact area is,
So, again the midpoint did a better job than the other two. While this will be the case more often than not, it won’t always be the case, and so don’t expect this to always happen.
Now, let’s move on to the general case. Let’s start out with a function f(x) that is positive on [a,b] and we’ll divide the interval into n subintervals, each of length

Δx = (b − a)/n
Note that the subintervals don’t have to be equal length,
but it will make our work significantly easier.
The endpoints of the subintervals are

x_0 = a, x_1 = a + Δx, x_2 = a + 2Δx, ..., x_i = a + iΔx, ..., x_n = b

Next, in each interval [x_(i-1), x_i] we choose a point x_i*. These points will define the height of the
rectangle in each subinterval. Note as
well that these points do not have to occur at the same point in each subinterval.
Here is a sketch of this situation.
The area under the curve on the given interval is then approximately

A ≈ f(x_1*) Δx + f(x_2*) Δx + ... + f(x_n*) Δx
We will use summation
notation or sigma notation at
this point to simplify up our notation a little. If you need a refresher on summation notation
check out the section devoted to this in
the Extras chapter.
Using summation notation the area estimation is

A ≈ Σ_{i=1}^{n} f(x_i*) Δx
The summation in the above equation is called a Riemann Sum.
To get a better estimation we will take n larger and larger. In fact,
if we let n go out to infinity we
will get the exact area. In other words,

A = lim_{n→∞} Σ_{i=1}^{n} f(x_i*) Δx
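The sum itself is easy to compute numerically. The following Racket sketch is not part of the original notes; the placeholder function f, the interval [0,2], and the choice of 4 rectangles are assumptions used only for illustration. It computes a right-endpoint estimate; left-endpoint and midpoint estimates work the same way, with a different sample point in each subinterval.

; f is the function under the curve; as a placeholder we use f(x) = x^2 + 1
(define (f x) (+ (* x x) 1))

; adds up f(a + 1*dx)*dx + f(a + 2*dx)*dx + ... + f(a + n*dx)*dx,
; i.e. a right-endpoint estimate with n rectangles of width dx starting at a
(define (right-riemann-sum a dx n)
  (if (= n 0)
      0
      (+ (* (f (+ a (* n dx))) dx)
         (right-riemann-sum a dx (- n 1)))))

; estimate for [0,2] split into 4 rectangles of width 1/2
(right-riemann-sum 0 1/2 4)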
Before leaving this section let’s address one more
issue. To this point we’ve required the
function to be positive in our work.
Many functions are not positive however.
Consider the case of one such function on [0,2]. If we use midpoints for the rectangle heights we
get the following graph,
In this case let’s notice that the function lies completely
below the x-axis and hence is always
negative. If we ignore the fact that the
function is always negative and use the same ideas above to estimate the area
between the graph and the x-axis we get,
Our answer is negative as we might have expected given that
all the function evaluations are negative.
So, using the technique in this section it looks like if the
function is above the x-axis we will
get a positive area and if the function is below the x-axis we will get a negative area.
Now, what about a function that is both positive and negative in the
interval? For example, consider such a function on [0,2]. Using midpoints for the rectangle heights, the graph is,
Some of the rectangles are below the x-axis and so will give negative areas while some are above the x-axis and will give positive
areas. Since more rectangles are below
the x-axis than above it looks like
we should probably get a negative area estimation for this case. In fact that is correct. Here the area estimation for this case.
In cases where the function is both above and below the x-axis the technique given in the
section will give the net area
between the function and the x-axis
with areas below the x-axis negative
and areas above the x-axis
positive. So, if the net area is
negative then there is more area under the x-axis
than above while a positive net area will mean that more of the area is above the x-axis than below it. | http://tutorial.math.lamar.edu/Classes/CalcI/AreaProblem.aspx | 13
658 | When a programmer studies a new language, the first item of business is the language’s “arithmetic,” meaning its basic forms of data and the operations that a program can perform on this data. At the same time, we need to learn how to express data and how to express operations on data.
write down "(",
write down the name of a primitive operation op,
write down the arguments, separated by some space, and
write down ")".
(+ 1 2)
It is not necessary to read and understand the entire chapter in order to make progress. As soon as you sense that this chapter is slowing you down, move on to the next one. Keep in mind, though, that you may wish to return here and find out more about the basic forms of data in BSL when the going gets rough.
The rest of this chapter introduces four forms of data: numbers, strings, images, and Boolean values. It also illustrates how these forms of data are manipulated with primitive operations, often called built-in operations or primitive functions. In many cases, these manipulations involve more than one form of data.
Most people think “numbers” and “operations on numbers” when they hear “arithmetic.” “Operations on numbers” means adding two numbers to yield a third; subtracting one number from another; or even determining the greatest common divisor of two numbers. If we don’t take arithmetic too literally, we may even include the sine of an angle, rounding a real number to the closest integer, and so on.
The BSL language supports Numbers and arithmetic in all these forms. As discussed in the Prologue, an arithmetic operation such as + is used like this:
(+ 3 4)
i.e., in prefix notation form. Here are some of the operations on numbers that our language provides: +, -, *, /, abs, add1, ceiling, denominator, exact->inexact, expt, floor, gcd, log, max, numerator, quotient, random, remainder, sqr, and tan. We picked our way through the alphabet, just to show the variety of operations. Explore what these do in the interactions area, and then find out how many more there are and what they do.
If you need an operation on numbers that you know from grade school or high school, chances are that BSL knows about it, too. Guess its name and experiment in the interaction area. Say you need to compute the sin of some angle; try
> (sin 0)
0
When it comes to numbers, BSL programs may use natural numbers, integers, rational numbers, real numbers, and complex numbers. We assume that you have heard of the first four. The last one may have been mentioned in your high school. If not, don’t worry; while complex numbers are useful for all kinds of calculations, a novice doesn’t have to know about them.
A truly important distinction concerns the precision of numbers. For now, it is important to understand that BSL distinguishes exact numbers and inexact numbers. When it calculates with exact numbers, BSL preserves this precision whenever possible. For example, (/ 4 6) produces the precise fraction 2/3, which DrRacket can render as a proper fraction, an improper fraction, or as a mixed decimal. Play with your computer’s mouse to find the menu that changes the fraction into decimal expansion and other presentations.
Some of BSL’s numeric operations cannot produce an exact result. For example, using the sqrt operation on 2 produces an irrational number that cannot be described with a finite number of digits. Because computers are of finite size and BSL must somehow fit such numbers into the computer, it chooses an approximation: #i1.4142135623730951. As mentioned in the Prologue, the #i prefix warns novice programmers of this lack of precision. While most programming languages choose to reduce precision in this manner, few advertise it and fewer even warn programmers.
Exercise 1: The direct goal of this exercise is to create an expression that computes the distance of some specific Cartesian point (x,y) from the origin (0,0). The indirect goal is to introduce some basic programming habits, especially the use of the interactions area to develop expressions. The values for x and y are given as definitions in the definitions area (top half) of DrRacket. The expected result for these values is 5, but your expression should produce the correct result even after you change these definitions. Just in case you have not taken geometry courses, or in case you forgot the formula that you encountered there, the point (x,y) has the distance √(x² + y²) from the origin. After all, we are teaching you how to design programs, not how to be a geometer. To develop the desired expression, it is best to hit RUN and to experiment in the interactions area. The RUN action tells DrRacket what the current values of x and y are so that you can experiment with expressions that involve x and y. Once you have the expression that produces the correct result, copy it from the interactions area to the definitions area, right below the two variable definitions.
To confirm that the expression works properly, change the two definitions so that x represents 12 and y stands for 5. If you click RUN now, the result should be 13.
Your mathematics teacher would say that you defined a distance function in a naive manner. To use the function, you need to open DrRacket, edit the definitions of x and y to the desired coordinates, and click RUN. We will soon show you the right way to define functions. For now, we use this kind of exercise to remind you of the idea and to prepare you for programming with functions.
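A sketch of the kind of expression the exercise asks for, assuming x and y are defined in the definitions area as described above (sqrt and sqr are built-in BSL operations):

(sqrt (+ (sqr x) (sqr y)))

With the exercise’s original values this evaluates to 5, and to 13 after the change to 12 and 5.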
A wide-spread prejudice about computers concerns their innards. Many believe that it is all about bits and bytes.
Programming languages are about calculating with information, and information comes in all shapes and forms. For example, a program may deal with colors, names, business letters, or conversations between people. Even though we could encode this kind of information as numbers, it would be a horrible idea. Just imagine remembering large tables of codes, such as 0 means “red” and 1 means “hello,” etc.
Instead most programming languages provide at least one kind of data that deals with such symbolic information. For now, we use BSL’s strings. Generally speaking, a String is a sequence of the characters that you can enter on the keyboard enclosed in double quotes, plus a few others, about which we aren’t concerned just yet. In Prologue: How to Program, we have seen a number of BSL strings: "hello", "world", "blue", "red", etc. The first two are words that may show up in a conversation or in a letter; the others are names of colors that we may wish to use.
> (string-append "what a " "lovely " "day" " for learning BSL")
"what a lovely day for learning BSL"
Then create an expression using string primitives that concatenates prefix and suffix and adds "_" between them. So the result for these two definitions should be "hello_world".
See exercise 1 for how to create expressions using DrRacket.
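A sketch of one possible solution; the two definitions were not reproduced above, so the values here are assumptions chosen to match the expected result:

(define prefix "hello")
(define suffix "world")

(string-append prefix "_" suffix)   ; evaluates to "hello_world"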
> (string-length 42)
string-length: expects a string, given 42
Then create an expression using string primitives that adds "_" at position i. In general this means the resulting string is longer than the original one; here the expected result is "hello_world". Position means i characters from the left of the string—
but computer scientists start counting at 0. Thus, the 5th letter in this example is "w", because the 0th letter is "h". Hint: when you encounter such “counting problems” you may wish to add a string of digits below str to help with counting:
See exercise 1 for how to create expressions in DrRacket.
Exercise 4: Use the same setup as in exercise 3. Then create an expression that deletes the ith position from str. Clearly this expression creates a shorter string than the given one; contemplate which values you may choose for i.
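Sketches for exercises 3 and 4, using the built-in substring operation; the definitions of str and i are assumptions chosen to match the expected result:

(define str "helloworld")
(define i 5)

; exercise 3: insert "_" at position i, producing "hello_world"
(string-append (substring str 0 i) "_" (substring str i (string-length str)))

; exercise 4: delete the ith position, producing "helloorld" for these values
(string-append (substring str 0 i) (substring str (+ i 1) (string-length str)))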
Images represent symbolic data somewhat like strings. To work with images, use the "2htdp/image" teachpack. Like strings, you can use DrRacket to insert images wherever you would insert an expression into your program, because images are values just like numbers and strings.
circle produces a circle image from a radius, a mode string, and a color string;
ellipse produces an ellipse from two radii, a mode string, and a color string;
line produces a line from two points and a color string;
rectangle produces a rectangle from a width, a height, a mode string, and a color string;
text produces a text image from a string, a font size, and a color string; and
triangle produces an upward-pointing equilateral triangle from a size, a mode string, and a color string.
> (star 12 "solid" "green")
A proper understanding of the third kind of image primitives, those that combine several images into one, requires one more idea: anchor points.
overlay places all the images to which it is applied on top of each other, using the default anchor point for each.
overlay/xy is like overlay but accepts two numbers—
x and y— between two image arguments. It shifts the second image by x pixels to the right and y pixels down — all with respect to the images’ anchor points. Of course, the image is shifted left for a negative x and up for a negative y.
empty-scene creates a framed rectangle of a specified width and height;
place-image places an image into a scene at a specified position. If the image doesn’t fit into the given scene, it is appropriately cropped.
add-line consumes a scene, four numbers, and a color to draw a line of that color into the given image. Again, experiment with it to find out how the four arguments work together; a small sketch follows this list.
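Here is a small sketch that combines several of these operations; the particular sizes, positions, and colors are arbitrary choices for illustration:

; a 100 by 100 scene with a solid red circle placed at position (50,50)
(place-image (circle 10 "solid" "red") 50 50 (empty-scene 100 100))

; the same scene with a blue line added from the top-left corner to (50,50)
(add-line
 (place-image (circle 10 "solid" "red") 50 50 (empty-scene 100 100))
 0 0 50 50 "blue")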
(define cat )
Create an expression that computes the area of the image. See exercise 1 for how to create expressions in DrRacket.
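One way to express the area, assuming the "2htdp/image" teachpack is loaded: its image-width and image-height operations report an image’s dimensions in pixels, so their product is the area:

(* (image-width cat) (image-height cat))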
We need one last kind of primitive data before we can design programs: Boolean values. There are only two kinds of Boolean values: true and false. Programs use Boolean values for representing decisions or the status of switches.
BSL provides basic operations on Boolean values: and, which checks whether both of two Boolean values are true; or, which checks whether at least one of two Boolean values is true; and not, which always picks the Boolean that isn’t given.
Create an expression that computes whether b1 is false or b2 is true. So in this particular case, the answer is false. (Why?)
(define x 2)
(define x 0)
Strings aren’t compared with = and its relatives. Instead, you must use string=? or string<=? or string>=? if you are ever in a position where you need to compare strings. While it is obvious that string=? checks whether the two given strings are equal, the other two primitives are open to interpretation. Look up their documentation, or experiment with them, guess, and then check in the documentation whether you guessed right.
The next few chapters introduce better expressions than if to express conditional computations and, most importantly, systematic ways for designing them.
(define cat )
Create an expression that computes whether the image is "tall" or "wide". An image should be labeled "tall" if its height is larger or equal to its width; otherwise it is "wide". See exercise 1 for how to create expressions in DrRacket; as you experiment, replace the image of the cat with rectangles of your choice to ensure you know the expected answer.
Now try the following modification. Create an expression that computes whether a picture is "tall", "wide", or "square".
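Sketches for both versions, using if expressions together with image-width and image-height; the names are the ones from the exercise:

; "tall" or "wide"
(if (>= (image-height cat) (image-width cat)) "tall" "wide")

; "tall", "wide", or "square"
(if (= (image-height cat) (image-width cat))
    "square"
    (if (> (image-height cat) (image-width cat)) "tall" "wide"))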
Furthermore, programming languages classify numbers just as mathematics teachers do. In BSL, numbers are classified in two different directions. The first direction you may know from middle school or high school: integer?, rational?, real?, and complex?, even if you don’t know the last one. Evaluate (sqrt -1) in the interactions area and take a close look at the result. Your mathematics teacher may have told you that one doesn’t compute the square root of negative numbers. Truth is that in mathematics and in BSL it is acceptable to do so, and the result is a so-called complex number. Don’t worry, though: a novice doesn’t need to know much about complex numbers.
(define in "hello")Then create an expression that converts whatever in represents to a number. For a string, it determines how long the string is; for an image, it uses the area; for a number, it decrements the number, unless it is already 0 or negative; for true it uses 10 and for false 20.
See exercise 1 for how to create expressions in DrRacket.
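One possible sketch uses BSL’s cond form together with the predicates string?, image?, number?, and boolean? (image? comes from the "2htdp/image" teachpack); the case analysis follows the exercise statement:

(cond
  [(string? in)  (string-length in)]
  [(image? in)   (* (image-width in) (image-height in))]
  [(number? in)  (if (> in 0) (- in 1) in)]
  [(boolean? in) (if in 10 20)])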
As far as programming is concerned, arithmetic is half the game. The other half is “algebra.” Of course, our notion of “algebra” relates to the school notion of algebra just as much as the notion of “arithmetic” from the preceding chapter relates to the ordinary notion of grade-school arithmetic. What we do mean is that the creation of interesting programs involves variables and functions.
From a high-level perspective, a program is a function. A program, like a function in mathematics, consumes inputs, and it produces outputs. In contrast to mathematical functions, programs work with a whole variety of data: numbers, strings, images, and so on. Furthermore, programs may not consume all of the data at once; instead a program may incrementally request more data or not, depending on what the computation needs. Last but not least, programs are triggered by external events. For example, a scheduling program in an operating system may launch a monthly payroll program on the last day of every month. Or, a spreadsheet program may react to certain events on the keyboard with filling some cells with numbers.
Definitions: While many programming languages obscure the relationship between programs and functions, BSL brings it to the fore. Every BSL program consists of definitions, usually followed by an expression that involves those definitions. There are two kinds of definitions:
constant definitions, of the shape (define AVariable AnExpression), which we encountered in the preceding chapter; and
function definitions, which come in many flavors, one of which we used in the Prologue.
write “(define (”,
write down the name of the function,
... followed by one or more variables, separated by space and ending in “)”,
write down an expression,
write down “)”.
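Definitions of this shape, with made-up names and bodies purely for illustration, look like this:

(define (f x) 1)
(define (g x y) (+ 1 1))
(define (h x y z) (+ (* 2 2) 3))

Note that the bodies of these particular definitions never mention their variables; the next paragraphs explain why that makes them poor, indeed silly, examples.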
Before we explain why these examples are silly, we need to explain what function definitions mean. Roughly speaking, a function definition introduces a new operation on data; put differently, it adds an operation to our vocabulary if we think of the primitive operations as the ones that are always available. Like a primitive operation, a defined operation consumes inputs. The number of variables determines how many inputs a function consumes.
The examples are silly because the expressions inside the functions do not involve the variables. Since variables are about inputs, not mentioning them in the expressions means that the function’s output is independent of their input. We don’t need to write functions or programs if the output is always the same.
(define x 3)
For now, the only remaining question is how a function obtains its inputs. And to this end, we need to turn to the notion of applying a function.
write down "(",
write down the name of a defined function f,
write down as many arguments as f consumes, separated by some space, and
write down “)”.
> (f 1)
> (f 2)
> (f "hello world")
> (f true)
> (f)
f: expects 1 argument, but found none
> (f 1 2 3 4 5)
f: expects only 1 argument, but found 5
> (+)
+: expects at least 2 arguments, but found none
Evaluating a function application proceeds in three steps. First, DrRacket determines the values of the argument expressions. Second, it checks that the number of arguments and the number of function parameters (inputs) are the same. If not, it signals an error. Finally, if the number of actual inputs is the number of expected inputs, DrRacket computes the value of the body of the function, with all parameters replaced by the corresponding argument values. The value of this computation is the value of the function application.
> (opening "Matthew" "Krishnamurthi")
To summarize, this section introduces the notation for function definitions and function applications.
In exercise 1 you developed the right-hand side for this function. All you really need to do is add a function header. Remember this idea in case you are ever stuck with a function. Use the recipe of exercise 1 to develop the expression in the interactions area, and then write down the function definition.
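A sketch of such a definition, with an assumed function name:

(define (distance-to-origin x y)
  (sqrt (+ (sqr x) (sqr y))))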
Exercise 17: Define the function bool-imply. It consumes two Boolean values, call them b1 and b2. The answer of the function is true if b1 is false or b2 is true. Note: Logicians call this imply and often they use the symbol => for this purpose. While BSL could define a function with this name, we avoid the name because it is too close to the comparison operations for numbers <= and >=, and it would thus easily be confused. See exercise 9.
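A sketch that translates the exercise statement directly into BSL:

(define (bool-imply b1 b2)
  (or (not b1) b2))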
Exercise 18: Define the function image-area, which computes the area of a given image. Note: The area is also the number of pixels in the picture. See exercise 5 for ideas.
Exercise 19: Define the function image-classify, which consumes an image and produces "tall" if the image is taller than it is wide, "wide" if it is wider than it is tall, or "square" if its width and height are the same. See exercise 10 for ideas.
Exercise 20: Define the function string-join, which consumes two strings and appends them with "_" in the middle. See exercise 2 for ideas.
Exercise 21: Define the function string-insert, which consumes a string and a number i and which inserts "_" at the ith position of the string. Assume i is a number between 0 and the length of the given string (inclusive). See exercise 3 for ideas. Also ponder the question whether string-insert should deal with empty strings.
Exercise 22: Define the function string-delete, which consumes a string and a number i and which deletes the ith position from str. Assume i is a number between 0 (inclusive) and the length of the given string (exclusive). See exercise 4 for ideas. Again consider the question whether string-delete can deal with empty strings.
A program rarely consists of a single function definition and an application of that function. Instead, a typical program consists of a “main” function or a small collection of “main event handlers.” All of these use other functions, both built-in primitives and functions that you define yourself.
(define (letter fst lst signature-name)
  (string-append
   (opening fst)
   "\n"
   (body fst lst)
   "\n"
   (closing signature-name)))

(define (opening fst)
  (string-append "Dear " fst ","))

(define (body fst lst)
  (string-append
   "we have discovered that all people with the last name " "\n"
   lst " have won our lottery. So, " fst ", " "\n"
   "hurry and pick up your prize."))

(define (closing signature-name)
  (string-append
   "Sincerely,"
   "\n"
   signature-name))
> (letter "Matthew" "Krishnamurthi" "Felleisen")
"Dear Matthew,\nwe have discovered that all people with the last name \nKrishnamurthi have won our lottery. So, Matthew, hurry \nand pick up your prize.\nSincerely,\nFelleisen"
In general, when a problem refers to distinct tasks of computation, a program should consist of one function per task and a main function that puts it all together. We formulate this idea as a simple slogan:
Define one function per task.
The advantage of following this slogan is that you get reasonably small functions, each of which is easy to comprehend, and whose composition is easy to understand. Later, we see that creating small functions that work correctly is much easier than creating one large function. Better yet, if you ever need to change a part of the program due to some change to the problem statement, it tends to be much easier to find the relevant program parts when it is organized as a collection of small functions.
Sample Problem: Imagine the owner of a movie theater who has complete freedom in setting ticket prices. The more he charges, the fewer the people who can afford tickets. In a recent experiment the owner determined a precise relationship between the price of a ticket and average attendance. At a price of $5.00 per ticket, 120 people attend a performance. Decreasing the price by a dime ($.10) increases attendance by 15. Unfortunately, the increased attendance also comes at an increased cost. Every performance costs the owner $180. Each attendee costs another four cents ($0.04). The owner would like to know the exact relationship between profit and ticket price so that he can determine the price at which he can make the highest profit.
The problem statement also specifies how the number of attendees depends on the ticket price. Computing this number is clearly a separate task and thus deserves its own function definition:
The revenue is exclusively generated by the sale of tickets, meaning it is exactly the product of ticket price and number of attendees:
The costs consist of two parts: a fixed part ($180) and a variable part that depends on the number of attendees. Given that the number of attendees is a function of the ticket price, a function for computing the cost of a show also consumes the price of a ticket and uses it to compute the number of tickets sold with attendees:
Finally, profit is the difference between revenue and costs:
Even the definition of profit suggests that we use the functions revenue and cost. Hence, the profit function must consume the price of a ticket and hand this number to the two functions it uses.
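Sketches of the four functions, reconstructed from the numbers in the problem statement (the function names are assumptions):

; how many people attend at a given ticket price: 120 attendees at $5.00,
; plus 15 more for every $0.10 the price drops
(define (attendees ticket-price)
  (+ 120 (* (/ 15 0.1) (- 5.0 ticket-price))))

; revenue is the ticket price times the number of attendees
(define (revenue ticket-price)
  (* ticket-price (attendees ticket-price)))

; costs are a fixed $180 per performance plus $0.04 per attendee
(define (cost ticket-price)
  (+ 180 (* 0.04 (attendees ticket-price))))

; profit is the difference between revenue and cost
(define (profit ticket-price)
  (- (revenue ticket-price) (cost ticket-price)))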
Exercise 23: Our solution to the sample problem contains several constants in the middle of functions. As One Program, Many Definitions already points out, it is best to give names to such constants so that future readers understand where these numbers come from. Collect all definitions in DrRacket’s definitions area and change them so that all magic numbers are refactored into constant definitions.
Exercise 24: Determine the potential profit for the following ticket prices: $1, $2, $3, $4, and $5. Which price should the owner of the movie theater choose to maximize his profits? Determine the best ticket price down to a dime.
Exercise 25: After studying the costs of a show, the owner discovered several ways of lowering the cost. As a result of his improvements, he no longer has a fixed cost. He now simply pays $1.50 per attendee.
Modify both programs to reflect this change. When the programs are modified, test them again with ticket prices of $3, $4, and $5 and compare the results.
batch programs, which consist of one main function, which uses auxiliary functions, which in turn use additional auxiliary functions, and so on. To launch a batch program means to call the main function on some inputs and to wait for its output.
interactive programs, which consist of several main functions, and an expression that informs the computer which of the functions takes care of which input and which of the functions produces output. Naturally, all of these functions may use auxiliary functions.
In this section we present some simple examples of both batch programs and interactive programs. Before we do so, however, we need one more ingredient: constant definitions.
write “(define ”,
write down the name of the variable,
... followed by a space and an expression,
write down “)”.
; temperature (in deg F) when water freezes:
(define FREEZING 32)

; useful to compute the area of a disk:
(define ALMOST-PI 3.14)

; a blank line:
(define NL "\n")

; an empty scene:
(define MT (empty-scene 100 100))
Batch Programs: As mentioned, a batch program consists of one main function, which performs all the computations. On rare occasions, a program is just this one function. Most of the time, though, the main function employs numerous auxiliary functions, which in turn may also use other functions.
> (letter "Robby" "Flatt" "Felleisen")
"Dear Robby,\nwe have discovered that all people with the last name \nFlatt have won our lottery. So, Robby, hurry \nand pick up your prize.\nSincerely,\nFelleisen"
> (letter "Christopher" "Columbus" "Felleisen")
"Dear Christopher,\nwe have discovered that all people with the last name \nColumbus have won our lottery. So, Christopher, hurry \nand pick up your prize.\nSincerely,\nFelleisen"
> (letter "ZC" "Krishnamurthi" "Felleisen")
"Dear ZC,\nwe have discovered that all people with the last name \nKrishnamurthi have won our lottery. So, ZC, hurry \nand pick up your prize.\nSincerely,\nFelleisen"
Programs are even more useful if you can retrieve the input from some file on your computer and deliver the output to some other file. The name batch program originates from programs in the early days of computing when a program read an entire file and created some other file, without any other intervention.
(write-file "Matthew-Krishnamurthi.txt" (letter "Matthew" "Krishnamurthi" "Felleisen"))
Dear Matthew,
we have discovered that all people with the last name
Krishnamurthi have won our lottery. So, Matthew, hurry
and pick up your prize.
Sincerely,
Felleisen
(define (main fst last signature-name)
  (write-file
   (string-append fst "-" last ".txt")
   (letter fst last signature-name)))
This first batch program requires users to actually open DrRacket and to apply the function main to three strings. With read-file, we can do even better, namely we can construct batch programs that do not rely on any DrRacket knowledge from their users.
Let us illustrate the idea with a simple program just to see how things work. Suppose we wish to create a program that converts a temperature measured on the Fahrenheit thermometer into a Celsius temperature. Don’t worry, this question isn’t a test about your physics knowledge (though you should know where to find this kind of knowledge); here is the conversion formula:

c = 5/9 · (f − 32)

Naturally in this formula f is the Fahrenheit temperature and c is the Celsius temperature. Translating this into BSL is straightforward:

(define (f2c f)
  (* 5/9 (- f 32)))
Recall that 5/9 is a number, a rational fraction to be precise, and more importantly, that c depends on the given f, which is what the function notation expresses.
> (f2c 32)
0
> (f2c 212)
100
> (f2c -40)
-40
the function convert consumes two filenames: in for the file where the Fahrenheit temperature is found and out for where we want the Celsius result;
(read-file in) retrieves the content of the file called in as a string;
string->number turns it into a number;
f2c interprets the number as a Fahrenheit temperature and converts it into a Celsius temperature;
number->string consumes this Celsius temperature and turns it into a string; and
write-file places this string into the file named out.
> (convert "sample.dat" "out.dat")
(define (convert in out)
  (write-file out
              (number->string
               (f2c
                (string->number
                 (read-file in))))))

(define (f2c f)
  (* 5/9 (- f 32)))

(convert "sample.dat" "out.dat")
In addition to running the batch program, you should also step through the computation. Make sure that the file "sample.dat" exists and contains just a number, then click the STEP button. Doing so opens another window in which you can peruse the computational process that the call to the main function of a batch program triggers. In this case, the process follows the above outline, and it is quite instructive to see this process in action.
With the choice of a menu entry, DrRacket can also produce a so-called executable, a stand-alone program like DrRacket itself. Specifically, choose the entry Create Executable from the Racket menu, and DrRacket will place a package for the program on your computer.
Interactive Programs: No matter how you look at it, batch programs are old-fashioned and somewhat boring. Even if businesses have used them for decades to automate useful tasks, interactive programs are what people are used to and prefer over batch programs. Indeed, in this day and age, people mostly interact with programs via a keyboard and a mouse, that is, events such as key presses or mouse clicks. Furthermore, interactive programs can also react to computer-driven events, e.g., the fact that the clock has ticked or that a message has arrived from some other computer.
Launching interactive programs requires more work than launching a batch program. Specifically, an interactive program designates some function as the one that takes care of keyboard events, another function as the one that presents pictures, a third function for dealing with clock ticks, etc. Put differently, there isn’t a main function that is launched; instead there is an expression that tells the computer how to handle interaction events and the evaluation of this expression starts the program, which then computes in response to user events or computer events.
In BSL, the "universe" teachpack provides the mechanisms for specifying connections between the computer’s devices and the functions you have written. The most important mechanism is the big-bang expression. It consists of one required subexpression, which must evaluate to some piece of data, and a number of optional clauses, which determine which function deals with which event.
Copy this definition of render and the third big-bang example into the definitions area of DrRacket. Then click RUN, and observe a separate window that counts down from 100 to 0. At that point, the evaluation stops and a 0 appears in the interactions area.
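The render definition and the big-bang example are not shown above, so here is a sketch that matches the described behavior; it assumes the "universe" and "2htdp/image" teachpacks and uses the function names mentioned in the surrounding text:

(define (tock cw)
  (- cw 1))

(define (render cw)
  (place-image (text (number->string cw) 22 "red") 50 50 (empty-scene 100 100)))

(define (end? cw)
  (= cw 0))

(big-bang 100
  (on-tick tock)
  (to-draw render)
  (stop-when end?))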
An explanation of such a big-bang expression must start with its first,
required subexpression. The value of this first expression is installed as
a world, specifically the current world. Furthermore,
this big-bang expression tells the computer to apply the function
tock to the current world whenever the clock ticks. The result of this application becomes the new current world (cw).
Each current world is turned into an image with an application of render and this series of images is displayed in a separate window. Finally, the function end? is used to inspect each current world. If the result is true, the evaluation of the big-bang expression is stopped; otherwise it continues.
Coming up with big-bang expressions for interactive programs demands a different skill, namely, the skill of systematically designing a program. Indeed, you may already feel that these first two chapters are somewhat overwhelming and that they introduced just too many new concepts. To overcome this feeling, the next chapter takes a step back and explains how to design programs from scratch, especially interactive programs.
The first few chapters of this book show that learning to program requires some mastery of many concepts. On the one hand, programming needs some language, a notation for communicating what we wish to compute. The languages for formulating programs are artificial constructions, though acquiring a programming language shares some elements with acquiring a natural language: we need to understand the vocabulary of the programming language; we need to figure out its grammar; and we must know what “phrases” mean.
On the other hand, when we are learning to program, it is critical to learn how to get from a problem statement to a program. We need to determine what is relevant in the problem statement and what we can ignore. We need to understand what the program consumes, what it produces, and how it relates inputs to outputs. We must know, or find out, whether the chosen language and its libraries provide certain basic operations for the data that our program is to process. If not, we might have to develop auxiliary functions that implement these operations. Finally, once we have a program, we must check whether it actually performs the intended computation. And this might reveal all kinds of errors, which we need to be able to understand and fix.
In his book “The Mythical Man-Month” Fred Brooks describes and contrasts these forms of programming on the first pages. In addition to “garage programming” and a “programming product,” he also recognizes “component programming” and “systems programming.” This book is about the “programming products;” our next two books will cover “components” and “systems” design. All this sounds rather complex and you might wonder why we don’t just muddle our way through, experimenting here and there, and leaving it all alone when the results look decent. This approach to programming, often dubbed “garage programming,” is common and succeeds on many occasions; on some it is even the foundation for a start-up company. Nevertheless, the company cannot sell the results of the “garage effort” because they are usable only by the programmers themselves. These programs are like the first two batch programs we wrote in the preceding chapter.
In practice, a good program must come with a short write-up that explains what it does, what inputs it expects, and what it produces. Ideally, it also comes with some assurance that it actually works. Best of all the program should be connected to the problem statement in such a way that a small change to the problem statement is easy to translate into a small change to the program. Software engineers call this a “programming product.”
The word “other” also includes older versions of the programmer who usually cannot recall all the thinking that the younger version put into the production of the program.
All this extra work is necessary because programmers don’t create programs for themselves. Programmers write programs for other programmers to read, and on occasion, people run these programs to get work done. The reason is that most programs are large, complex collections of collaborating functions, and nobody can write all these functions in a day. So programmers join projects, write code, leave projects, and others take over their work. One part of the problem is that the programmer’s customers tend to change their mind about what problem they really want solved. They usually have it almost right, but more often than not, they get some details wrong. Worse, complex logical constructions such as programs almost always suffer from human errors; in short, programmers make mistakes. Eventually someone discovers these errors and programmers must fix them. They need to re-read the programs from a month ago, a year ago, or twenty years ago and change them.
In this book, we present a design recipe that integrates a step-by-step process with a way of organizing programs around problem data. For the readers who don’t like to stare at blank screens for a long time, this design recipe offers a way to make progress in a systematic manner. For those of you who teach others to design programs, the design recipe is a device for diagnosing a novice’s difficulties. For yet others, the design recipe may just be something that they can apply to other areas, say medicine, journalism, or engineering, because program design isn’t the right choice for their careers. Then again, for those of you who wish to become real programmers, the design recipe also offers a way to understand and work on existing programs.
Information and Data: The purpose of a program is to describe a computation, a process that leads from one collection of information to another. In a sense, a program is like the instruction a mathematics teacher gives to grade school students. Unlike a student, however, a program works with more than numbers; it calculates with navigation information, looks up a person’s address, turns on switches, or processes the state of a video game. All this information comes from a part of the real world, often called the program’s domain.
One insight from this concise description is that information plays a central role. Think of information as facts about the program’s domain. For a program that deals with a furniture catalog, a “table with five legs” or a “square table of two by two meters” are pieces of information. A game program deals with a different kind of domain, where “five” might refer to the number of pixels per clock tick that some object travels on its way from one part of the screen to another. Or, a payroll program is likely to deal with “five deductions” and similar phrases.
For a program to process information, it must turn it into some form of data, i.e., values in the programming language; then it processes the data; and once it is finished, it turns the resulting data into information again. A program may even intermingle these steps, acquiring more information from the world as needed and delivering information in between. You should recall that we apply the adjective “batch” to the plain programs and the others are called “interactive.”
We use BSL and DrRacket so that you do not have to worry about the translation of information into data. In DrRacket’s BSL you can apply a function directly to data and observe what it produces. As a result, we avoid the serious chicken-and-egg problem of writing functions that convert information into data and vice versa. For simple kinds of information, designing such program pieces is trivial; for anything other than trivial information, you should know about parsing.
BSL and DrRacket cleanly separate these tasks so that you can focus on designing the “core” of programs and, when you have enough expertise with that, you can learn to design the rest. Indeed, real software engineers have come up with the same idea and have a fancy name for it, model-view-controller (MVC), meaning a program should separate its information processing view from the data processing model. Of course, if you really wish to make your programs process information, you can always use the "batch-io" teachpack to produce complete batch programs or the "universe" teachpack to produce complete interactive programs. As a matter of fact, to give you a sense of how complete programs are designed, this book and even this chapter provide a design recipe for such programs.
Given the central role of information and data, program design must clearly
start with the connection between them. Specifically, we must decide how to represent the relevant pieces of information as data and how to interpret data as information. Depending on the program’s domain, the same number can mean very different things; for example:
42 may refer to the number of pixels from the top margin in the domain of images;
42 may denote the number of pixels per clock tick that a simulation or game object moves;
42 may mean a temperature, on the Fahrenheit, Celsius, or Kelvin scale for the domain of physics;
42 may specify the size of some table if the domain of the program is a furniture catalog; or
42 could just count the number of chars a batch program has read.
The word “class” is a popular computer science substitute for the word “set.” In analogy to set theory in mathematics, we also say a value is an element of a class.
Since this knowledge is so important for everyone who reads the program, we often write it down in the form of comments, which we call data definitions. The purpose of a data definition is two-fold. On one hand, it names a class or a collection of data, typically using a meaningful word. On the other hand, it informs readers how to create elements of this class of data and how to decide whether some random piece of data is an element of this collection.
; Temperature is a Number.
; interp. degrees Celsius
If you happen to know that the lowest possible temperature is approximately -274, you may wonder whether it is possible to express this knowledge in a data definition. Since data definitions in BSL are really just English descriptions of classes, you may indeed define the class of temperatures in a much more accurate manner than shown above. In this book, we use a stylized form of English for such data definitions, and the next chapter introduces the style for imposing constraints such as “larger than -274.”
At this point, you have encountered the names of some data: Number, String, Image, and Boolean values. With what you know right now, formulating a new data definition means nothing more than introducing a new name for an existing form of data, e.g., “temperature” for numbers. Even this limited knowledge, though, suffices to explain the outline of our design process.
- Articulate how you wish to represent information as data. A one-line comment suffices, e.g.,
; We use plain numbers to represent temperatures.
Formulate a data definition, like the one for Temperature above, if you consider this class of data a critical matter for the success of your program.
Starting with the next section, the first step is to formulate a true data definition, because we begin to use complex forms of data to represent information.
Write down a signature, a purpose statement, and a function header.
A function signature (but always shortened to signature here) is a BSL comment that tells the readers of your design how many inputs your function consumes, from what collection of data they are drawn, and what kind of output data it produces. Here are three examples:
A purpose statement is a BSL comment that summarizes the purpose of the function in a single line. If you are ever in doubt about a purpose statement, write down the shortest possible answer to the question
- for a function that consumes one string and produces a number:
  ; String -> Number
- for a function that consumes a temperature and that produces a string:
  ; Temperature -> String
As this signature points out, introducing a data definition as an alias for an existing form of data makes it easy to read the intention behind signatures.
Nevertheless, we recommend to stay away from aliasing data definitions for now. A proliferation of such names can cause quite some confusion. It takes practice to balance the need for new names and the readability of programs, and there are more important ideas to understand for now.
- for a function that consumes a number, a string, and an image and that produces an image:
  ; Number String Image -> Image
what does the function compute?Every reader of your program should understand what your functions compute without having to read the function itself.
A multi-function program should also come with a purpose statement. Indeed, good programmers write two purpose statements: one for the reader who may have to modify the code and another one for the person who wishes to use the program but not read it.Finally, a header is a simplistic function definition, also called a stub. Pick a parameter per input data class in the signature; the body of the function can be any piece of data from the output class. The following three function headers match the above three signatures:Our parameter names somehow reflect what kind of data the parameter represents. In other cases, you may wish to use names that suggest the purpose of the parameter.When you formulate a purpose statement, it is often useful to employ the parameter names to clarify what is computed. For example,
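As a sketch of what such stubs might look like (the names f1 and f2 and the dummy bodies are our choices; add-image matches the example developed later in this section):

; String -> Number
(define (f1 str) 0)

; Temperature -> String
(define (f2 t) "cold")

; Number String Image -> Image
(define (add-image y s img) (empty-scene 100 100))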
At this point, you can click the RUN button and experiment with the function. Of course, the result is always the same value, which makes these experiments quite boring.
Illustrate the signature and the purpose statement with some functional examples. To construct a functional example, pick one piece of data from each input class from the signature and determine what you expect back.
Suppose you are designing a function that computes the area of a square. Clearly this function consumes the length of the square’s side and that is best represented with a (positive) number. The first process step should have produced something like this:
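A sketch of what those first steps might have produced (the parameter name len and the purpose wording are our choices):

; Number -> Number
; compute the area of a square with side len
(define (area-of-square len) 0)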
Add the examples between the purpose statement and the function header:
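A sketch of the result, with the examples recorded as comments (the wording of the example lines is ours):

; Number -> Number
; compute the area of a square with side len
; given: 2, expect: 4
; given: 7, expect: 49
(define (area-of-square len) 0)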
The third step is to take inventory, i.e., to understand what is given and what we need to compute. For the simple functions we are considering right now, we know that they are given data via parameters. While parameters are placeholders for values that we don’t know yet, we do know that it is from this unknown data that the function must compute its result. To remind ourselves of this fact, we replace the function’s body with a template. For now, the template contains just the parameters, as in the sketch below. The dots remind you that this isn’t a complete function, but a template, a suggestion for an organization.
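For area-of-square, such a template might look like this (a sketch; the dots are BSL's placeholder for work still to be done):

; Number -> Number
; compute the area of a square with side len
(define (area-of-square len)
  (... len ...))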
The templates of this section look boring. Later, when we introduce complex forms of data, templates become interesting, too.
It is now time to code. In general, to code means to program, though often in the narrowest possible way, namely, to write executable expressions and function definitions. To us, coding means to replace the body of the function with an expression that attempts to compute, from the pieces in the template, what the purpose statement promises. Here is the complete definition for area-of-square, sketched just below; completing the add-image function, shown after it, takes a bit more work:
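A definition consistent with the examples above (2 gives 4, 7 gives 49):

; Number -> Number
; compute the area of a square with side len
(define (area-of-square len)
  (* len len))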
; Number String Image -> Image
; add s to img, y pixels from top, 10 pixels to the left
; given:
;   5 for y,
;   "hello" for s, and
;   (empty-scene 100 100) for img
; expected:
;   (place-image (text "hello" 10 "red") 10 5 (empty-scene 100 100))
(define (add-image y s img)
  (place-image (text s 10 "red") 10 y img))

In particular, the function needs to turn the given string s into an image, which is then placed into the given scene.
- The last step of a proper design is to test the function on the examples that you worked out before. For now, click the RUN button and enter function applications that match the examples in the interactions area:
> (area-of-square 2)
4
> (area-of-square 7)
49

The results must match the output that you expect; that is, you must inspect each result and make sure it is equal to what is written down in the example portion of the design. If the result doesn’t match the expected output, consider the following three possibilities:
- You miscalculated and determined the wrong expected output for some of the examples.
- Alternatively, the function definition computes the wrong result. When this is the case, you have a logical error in your program, also known as a bug.
- Both the examples and the function definition are wrong.

When you do encounter a mismatch between expected results and actual values, we recommend that you first reassure yourself that the expected result is correct. If so, assume that the mistake is in the function definition. Otherwise, fix the example and then run the tests again. If you are still encountering problems, you may have run into the third, rather rare situation.
The first few of the following exercises are almost copies of earlier exercises. The difference is that this time they use the word “design,” not “define,” meaning you should use the design recipe to create these functions and your solutions should include the relevant pieces. (Skip the template; it is useless here.) Finally, as the title of the section says, these are practice exercises, meant to help you internalize the process. Until you have internalized the design process, you should never skip a step; doing so leads to easily avoided errors and unproductive searches for their causes. There is plenty of room left in programming for complex errors; we have no need to waste our time on silly ones.
- Knowledge from external domains, such as mathematics, music, biology, civil engineering, art, and so on. Because programmers cannot know all of the application domains of computing, they must be prepared to understand the language of a variety of application areas so that they can discuss problems with domain experts. This language is often that of mathematics, but in some cases, the programmers must create a language as they work through problems with domain experts.
- Knowledge about the library functions in the chosen language. When your task is to translate a mathematical formula involving the tangent function, you need to know or guess that your chosen language comes with a function such as BSL’s tan. When, however, you need to use BSL to design image-producing functions, you should understand the possibilities of the "2htdp/image" teachpack.
You can recognize problems that demand domain knowledge from the data definitions that you work out. As long as the data definitions use the data classes that exist in the chosen programming language, the definition of the function body (and program) mostly relies on expertise in the domain. Later, when the book introduces complex forms of data, the design of functions demands deep knowledge in computer science.
Not all programs consist of a single function definition. Some require several functions; for many, you also want to use constant definitions. No matter what, it is always important to design each function of a program systematically, though both global constants and the presence of auxiliary functions change the design process a bit.
When you have defined global constants, your functions may use those global constants to compute the results from the given data. In some sense, you should add these global constants to your template, because they belong to the inventory of things that may contribute to the definition. Adding global constants to templates, however, can quickly make those templates look messy. In short, keep global constants in mind when you define functions.
The issue with multi-function programs is complex. On one hand, the design of interactive functions automatically demands the design of several functions. On the other hand, even the design of batch programs may require dealing with several different tasks. Sometimes the problem statement itself suggests different tasks; other times you will discover the need for auxiliary functions as you are in the middle of designing some function.
For all cases, we recommend keeping around a list of “desired functions,” or a wish list. (The term “wish list” in this context is due to Dr. John Stone.) Each entry on a wish list should consist of three things: a meaningful name for the function, a signature, and a purpose statement. For the design of a batch program, put the main function on the wish list and start designing it. For the design of an interactive program, you can put the event handlers, the stop-when function, and the scene-rendering function on the list. As long as the list isn’t empty, pick a wish and design the function. If you discover during the design that you need another function, put it on the list. When the list is empty, you are done.
Testing quickly becomes a labor-intensive chore. While it is easy to run tests and discover syntax errors (clicking the RUN button does this) and run-time errors (the application of a primitive operation to the wrong kind of data), comparing the result of an interaction with the expected result is tiresome. For complex programs, you will tend to write lots of examples and tests, and you will have to compare complex (large) values. If you don’t think so yet, you soon will, and burdensome chores invite sloppy comparisons.
At the same time, testing is a major step to discover flaws in a program. Sloppy testing quickly leads to functions with hidden problems, also known as bugs. Buggy functions then stand in the way of progress on large systems that use these functions, often in multiple ways.
For these reasons, BSL and DrRacket support automated testing with check-expect specifications such as

(check-expect (f2c -40) -40)
; Number -> Number
; convert Fahrenheit temperatures to Celsius temperatures
(check-expect (f2c -40) -40)
(check-expect (f2c 32) 0)
(check-expect (f2c 212) 100)
(define (f2c f)
  (* 5/9 (- f 32)))
You can place check-expect specifications above or below the function definition that they test. When you click RUN, DrRacket collects all check-expect specifications and evaluates them after all function definitions have been added to the “vocabulary” of operations. The above figure shows how to exploit this freedom to combine the example and test steps. Instead of writing down the examples as comments, you can translate them directly into tests. When you’re all done with the design of the function, clicking RUN performs the tests. And if you ever change the function for some reason, clicking RUN reruns the tests and immediately tells you whether the change broke any of them. For example, the tests for the car-animation render function read like this:
(check-expect (render 50) (place-image CAR 50 Y-CAR BACKGROUND))
(check-expect (render 200) (place-image CAR 200 Y-CAR BACKGROUND))
Because it is so useful to have DrRacket conduct the tests rather than checking everything yourself, we immediately switch to this style of testing for the rest of the book. This form of testing is dubbed unit testing, and BSL’s unit-testing framework is especially tuned for novice programmers. One day you will switch to some other programming language, and one of your first tasks will be to figure out its unit-testing framework.
The "universe" teachpack supports the construction of some interactive programs. Specifically, you can use the "universe" teachpack to construct so-called world programs, i.e., interactive programs that deal with clock ticks, mouse clicks, and key strokes. In order to interact with people, world programs also create images that DrRacket displays in a graphical canvas.
While the previous chapter introduces the "universe" teachpack in an ad hoc way, this section demonstrates how the design recipe helps you create world programs. The first section provides some basic knowledge about big-bang, the major construct for wiring up world programs. The second section extends this knowledge to deal with mouse clicks and key strokes. Once you have digested this terminology, you are ready to design world programs. The last section is the beginning of a series of exercises, which run through a couple of chapters in this book; take a close look and create your own favorite virtual pet.
Describing Worlds: A raw computer is a nearly useless piece of physical equipment, often called hardware because you can touch it. This equipment becomes useful once you install software, and the first piece of software usually installed on a computer is an operating system. It has the task of managing the computer for you, including connected devices such as the monitor, the keyboard, the mouse, the speakers, and so on. When you press a key on the keyboard, the operating system runs a function that processes the key stroke. We say that the key stroke is a key event, and the function is an event handler. Similarly, when the clock ticks, the operating system runs an event handler for clock ticks and, when you perform some action with the mouse, the operating system launches the event handler for mouse clicks.
Naturally, different programs have different needs. One program may interpret key strokes as signals to control a nuclear reactor; another passes them to a word processor. To make one and the same computer work on these radically different tasks, programs install event handlers. That is, they tell the operating system to use certain functions for dealing with clock ticks and other functions for dealing with mouse clicks. If a program doesn’t need to handle key strokes, the program says nothing about key events and the operating system ignores them.
The key question is what arguments DrRacket supplies to your event handlers for key strokes, mouse clicks, and clock ticks and what kind of results it expects from these event handlers. Like a real operating system, DrRacket gives these functions access to the current state of the world. For key events and mouse events, DrRacket also supplies information about these events.
The initial state is the value of w0, because our big-bang expression says so. It also is the state that DrRacket hands to the first event handling function that it uses. DrRacket expects that this event handling function produces a new state of the world, call it w1. The point is that DrRacket keeps this result around until the second event happens. Here is a table that describes this relationship among worlds and event handling functions:
                 e1              e2              e3              e4
current world    w0              w1              w2              w3
on clock tick    (cth w0)        (cth w1)        (cth w2)        (cth w3)
on key event     (keh w0 ...)    (keh w1 ...)    (keh w2 ...)    (keh w3 ...)
on mouse event   (meh w0 ...)    (meh w1 ...)    (meh w2 ...)    (meh w3 ...)
w1 is the result of (keh w0 "a"), i.e., the fourth cell in the e1 column;
w2 is the result of (cth w1), i.e., the third cell in the e2 column;
w3 is the result of (cth w2), i.e., again the third cell in the e3 column; and
w4 is the result of (meh w3 90 100 "button-down").
(define w4 (meh (cth (cth (keh w0 "a"))) 90 100 "button-down"))
In short, the sequence of events determines in which order you traverse the above tables of possible worlds to arrive at the one and only one current world for each time slot. Note that DrRacket does not touch the current world; it merely safeguards it and passes it to event handling functions when needed.
Designing Worlds: Now that you understand how big-bang works, you can focus on the truly important problem of designing world programs. As you might guess, the design starts with the data definition for the states of the “world.” To this end we assume that you have a problem statement and that you are able to imagine what the world program may display in various situations.
Sample Problem: Design a program that moves a car across the world canvas, from left to right, at the rate of three pixels per clock tick.
- For all those properties of the world that remain the same, introduce constant definitions. In BSL, we capture constants via global constant definitions. For the purpose of designing worlds, we distinguish between two kinds of constants:
“physical” constants, which describe general attributes of objects in the domain, such as the speed or velocity of an object, its color, its height, its width, its radius, and so on. Of course these constants don’t really refer to physical facts, but many are analogous to physical aspects of the real world. In the context of the car animation from the previous section, WHEEL-RADIUS is a “physical” constant, and its definition may look like the sketch below; note how some constants are computed from others.
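A sketch of such definitions (the radius of 5 matches the definition quoted later in this section; the factor used for WHEEL-DISTANCE is our guess):

(define WHEEL-RADIUS 5)
; the distance between the wheels, derived from the radius
(define WHEEL-DISTANCE (* WHEEL-RADIUS 5))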
graphical constants, which are images that the program uses to create the scenes that appear on the canvas. Here are some graphical constant definitions:
(define WHL (circle WHEEL-RADIUS "solid" "black"))
(define BDY
  (above (rectangle (/ BODY-LENGTH 2) (/ BODY-HEIGHT 2) "solid" "red")
         (rectangle BODY-LENGTH BODY-HEIGHT "solid" "red")))
(define SPC (rectangle WHEEL-DISTANCE 1 "solid" "white"))
(define WH* (beside WHL SPC WHL))
(define CAR (underlay/xy BDY WHEEL-RADIUS BODY-HEIGHT WH*))

Graphical constants are usually computed, and the computations tend to involve the physical constants. To create good looking images, you need to experiment. But, keep in mind that good images are not important to understand this book; if you have fun creating them, feel free to spend time on the task. We are happy with simple images.
Those properties that change over time or in reaction to other events make up the current state of the world. Your task is to represent the possible states of the world with data, i.e., to develop a data definition that describes all possible states of the world. As before, you must equip this data definition with a comment that tells readers how to represent world information as data and how to interpret data as information in the world.
For the running example of an animated car, it should be obvious that the only thing that changes is its distance to the left (or right) border of the canvas. A distance is measured in numbers, so the data definition sketched below is adequate for this example. (An alternative is of course to count the number of clock ticks that have passed and to use this number as the state of the world; we leave this design variant to an exercise.)
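A sketch of the data definition (the name CarState is the one used below; the interpretation wording is ours):

; A CarState is a Number.
; interp. the number of pixels between the left border of the scene and the car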
Once you have a data representation for the state of the world, you need to decide which kinds of interactions you wish to use for which kinds of transitions from one world state to another. Depending on what you decide, you need to design several or all of the following four functions (wish-list sketches follow the list):
- if your world should react to clock ticks:
- if your world should react to key strokes:
- if your world should react to mouse clicks:
- if you want the world to be rendered to the canvas:
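For the four cases just listed, the wish-list entries might read as follows (a sketch under the assumption that the class of world states is called WorldState; the function names are placeholders):

; WorldState -> WorldState
; tock: deal with the passing of one clock tick

; WorldState String -> WorldState
; key-handler: deal with one key stroke, represented as a string

; WorldState Number Number String -> WorldState
; mouse-handler: deal with one mouse event at (x,y)

; WorldState -> Image
; render: translate the current world state into an image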
Last but not least, you need to define a main function that puts it all together. Unlike all other functions, a main function for world programs doesn’t demand design. As a matter of fact, it doesn’t require testing. Its sole reason for existing is that you can run your world program conveniently once all tests for the event handling functions are completed. Here is our proposed main function for the sample problem, which assumes that you have named the clock tick handler tock and the draw function render:
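A sketch of main, using the CarState data definition from above:

; CarState -> CarState
; launch the program from some initial state
(define (main ws)
  (big-bang ws
    [on-tick tock]
    [to-draw render]))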
In other words, the desire to design an interactive program dictates several initial entries for your wish list. Later we introduce additional entries so that you can also design world programs that deal with key presses and mouse events. After you have designed the event handling functions, you can launch the interactive program with a big-bang expression like the one inside main above.
A note on names: Naturally, you don’t have to use the name “CarState” for the class of data that represents the states of the world; any name will do as long as you use it consistently for the signatures of the big-bang functions. Also, you don’t have to use the names tock, render, or end?; you can name these functions whatever you like, as long as you use the same names when you write down the clauses of the big-bang expression.
A note on design: Even after settling on the data definition, a careful programmer shouldn’t be completely happy. The image of the car (and a car itself) isn’t just a mathematical point without width and height. Thus, to write “the number of pixels from the left margin” is an ambiguous interpretation statement. Does this statement measure the distance between the left margin and the left end of the car? Its center point? Or even its right end? While this kind of reflection may seem far-fetched, it becomes highly relevant and on occasion life-critical in some programming domains. We ignore this issue for now and leave it to BSL’s image primitives to make the decision for us.
Good programs establish a single point of control for all aspects, not just the graphical constants. Several chapters deal with this issue.
Exercise 32: Good programmers ensure that an image such as CAR can be enlarged or reduced via a single change to a constant definition. We started the development of our car image with a single plain definition:
(define WHEEL-RADIUS 5)

All other dimensions of the car and its pieces are based on the wheel’s radius. Changing WHEEL-RADIUS from 5 to 10 “doubles” the size of the car image, and setting it to 3 reduces it. This kind of program organization is dubbed single point of control, and good design employs single point of control as much as possible.
Now develop your favorite image of a car; name the image CAR. Remember to experiment and make sure you can re-size the image easily.
The rest of this section demonstrates how to apply the third design step to our sample problem. Since the car is supposed to move continuously across the canvas, and since the problem statement doesn’t mention any other action, we have two functions on our wish list: tock for dealing with a clock tick and render for creating an image from the state of the world.
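A sketch of tock, where the three-pixel step comes straight from the problem statement:

; CarState -> CarState
; move the car by 3 pixels per clock tick
(check-expect (tock 20) 23)
(check-expect (tock 78) 81)
(define (tock ws)
  (+ ws 3))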
> (tock 20)
23
> (tock 78)
81
; CarState -> Image
; place the car into a scene, according to the given world state
(define (render ws)
  (empty-scene 300 50))
; CarState -> Image
; place the car into a scene, according to the given world state
(define (render ws)
  (place-image CAR ws Y-CAR BACKGROUND))
And then you just need to test, which means evaluating expressions such as (render 50), (render 100), (render 150), and (render 200) and making sure that each resulting image is what you want. Naturally, this is somewhat more difficult than checking that a number is what you want. For now, you need to rely on your eyes, which is why doing so is called an eyeball test. In the penultimate subsection, we return to the testing issue in general and the one for images in particular.
Exercise 33: Finish the sample exercise and get the program to run. That is, assuming that you have solved the exercise of creating the image of a car, define the constants Y-CAR and BACKGROUND. Then assemble the appropriate big-bang expression. When your program runs to your satisfaction, add a tree to the scenery. We used a simple combination of image primitives to create a tree-like shape. Also add a clause to the big-bang expression that stops the animation when the car has disappeared on the right side of the canvas.
Consider the alternative mentioned above: count the number of clock ticks that have passed and use this count as the state of the world. Like the original data definition, this one also equates the states of the world with the class of numbers. Its interpretation, however, explains that the number means something entirely different.
Design functions tock and render and develop a big-bang expression so that you get once again an animation of a car traveling from left to right across the world’s canvas.
Of Mice and Characters: Before you design world programs that deal with key strokes and mouse events, it is a good idea to practice with small, nearly trivial examples to understand what the event handlers can and cannot compute. We start with a simple problem concerning key strokes:
Sample Problem: Design a program that keeps track of all key strokes. The program should display the accumulated key strokes as a red text in the 11 point font.
; AllKeys is a String.
; interp. the sequence of keys pressed since
;   big-bang created the canvas
; physical constants:
(define WIDTH 100)
(define HEIGHT 50)

; graphical constant:
(define MT (empty-scene WIDTH HEIGHT))
- the function remember to manage key strokes:
- the function show to display the current state of the world:
; AllKeys String -> AllKeys
; add ke to ak, the state of the world
(check-expect (remember "hello" " ") "hello ")
(check-expect (remember "hello " "w") "hello w")
(define (remember ak ke) ak)
; AllKeys String -> AllKeys
; add ke to ak, the state of the world
(check-expect (remember "hello" " ") "hello ")
(check-expect (remember "hello " "w") "hello w")
(define (remember ak ke)
  (string-append ak ke))
; AllKeys -> Image
; render the string as a text and place it into MT
(check-expect (show "hello")
              (place-image (text "hello" 11 "red") 10 20 MT))
(check-expect (show "mark")
              (place-image (text "mark" 11 "red") 10 20 MT))
(define (show ak)
  (place-image (text ak 11 "red") 10 20 MT))
Exercise 36: Key event handlers are also applied to strings such as "\t" (the tab key) and "\r" (the return key). Appearances are deceiving, however: these strings consist of a single character, so remember adds them to the end of the current world state. Read the documentation of on-key and change remember so that it ignores such special one-character strings.
Let us look at a program that interacts with the mouse. The figure below displays the simplest such program, i.e., an interactive program that just records where mouse events occur via small dots. It is acceptable to break the rule of separating the data representation from the image rendering for such experimental programs, whose purpose is to determine how newly introduced things work. The program ignores what kind of mouse event occurs, and it also ignores the guideline about separating the state representation from its image. Instead it uses images as the state of the world. Specifically, the state of the world is an image that contains red dots where mouse events occurred. When another event is signaled, the clack function just paints another dot into the current state of the world.
; AllMouseEvts is an element of Image.

; graphical constants
(define MT (empty-scene 100 100))

; clack : AllMouseEvts Number Number String -> AllMouseEvts
; add a dot at (x,y) to ws
(check-expect (clack MT 10 20 "something mousy")
              (place-image (circle 1 "solid" "red") 10 20 MT))
(check-expect (clack (place-image (circle 1 "solid" "red") 1 2 MT) 3 3 "")
              (place-image (circle 1 "solid" "red") 3 3
                           (place-image (circle 1 "solid" "red") 1 2 MT)))
(define (clack ws x y action)
  (place-image (circle 1 "solid" "red") x y ws))

; show : AllMouseEvts -> AllMouseEvts
; just reveal the current world state
(check-expect (show MT) MT)
(define (show ws) ws)
Normal interactive programs don’t ignore the kind of mouse event that takes place. Just like the key event tracker above, they inspect the string and compute different results depending on what kind of string they received. Designing such programs requires a bit more knowledge about BSL and a bit more insight into design than we have presented so far; the next chapter introduces all of this.
The purpose of this exercise section is to create the first two elements of a virtual pet game. It starts with just a display of a cat that keeps walking across the screen. Of course, all the walking makes the cat unhappy, and its unhappiness shows. As with all pets, you can try petting it, which helps some, or you can try feeding it, which helps a lot more.
So let’s start with an image of our favorite cat:
Copy the cat image and paste it into DrRacket, then give the image a name with define.
Exercise 37: Design a “virtual cat” world program that continuously moves the cat from left to right, by three pixels at a time. Whenever the cat disappears on the right it should re-appear on the left.
Adjust the rendering function so that it uses one cat image or the other, based on whether the x-coordinate is odd. Read up on odd? in DrRacket's Help Desk.
Exercise 39: Our virtual pet game will need a gauge to show how happy the cat is. If you ignore the cat, it becomes less happy. If you pet the cat, it becomes happier. If you feed the cat, it becomes much, much happier. We feed the cat by pressing the down arrow key, and we pet it by pressing the up arrow key.
This program is separate from the cat world program of the previous exercises; do not integrate the two, because you don’t know enough yet. If you think you want to make the cat and the happiness gauge play together, read the next section. Design a world program that maintains and displays a “happiness gauge” over time. With each clock tick, happiness decreases by 0.1, starting from 100, the maximum score; it never falls below 0, the minimum happiness score.
To show the level of happiness, we use a scene with a solid, red rectangle with a black frame. For a happiness level of 0, the red bar should be gone; for a happiness level of 100, the bar should go all the way across the scene.
Thus far you have four choices for data representation: numbers, strings, images, and Boolean values. For many problems this is enough, but there are many more for which these four collections of data in BSL (or different ones in different programming languages) don’t suffice. Put differently, programming with just the built-in collections of data is often clumsy and therefore error prone.
At a minimum, good programmers must learn to design programs that use restricted versions of these built-in collections. One way to restrict a collection is to enumerate a bunch of its elements and to say that these are the only ones that will be used for some problem. Enumerating elements works only when there is a finite number of them. To accommodate collections with “infinitely” many elements, we introduce intervals, which are collections of elements that satisfy a specific property.
Infinite may just mean “so large that enumerating the elements is entirely impractical.”
Defining enumerations and intervals means distinguishing among different kinds of elements. To distinguish among them in code requires conditional functions, i.e., functions that choose different ways of computing results depending on the value of some argument. Both Many Ways to Compute and Mixing It Up with Booleans illustrate with examples how to write such functions. In both cases, however, there is no design system; all you have is some new construct in your favorite programming language (that’s BSL) and some examples of how to use it.
In this chapter, we introduce enumerations and intervals and discuss a general design strategy for these forms of input data. We start with a second look at the cond expression. Then we go through three different scenarios of distinct subclasses of data: enumerations, intervals, and itemizations, which mix the first two. The chapter ends with a section on the general design strategy for such situations.
A note on pragmatics: Contrast cond expressions with if expressions from Mixing It Up with Booleans. The latter distinguish one situation from all others. As such, if expressions are just much less suited for multi-situation contexts; they are best used when all we wish to say is "one or the other." We therefore always use cond for situations when we wish to remind the reader of our code that some distinct situations come directly from data definitions, i.e., our first analysis of problem statements. For other pieces of code, we use whatever construct is most convenient.
Imagine designing a function that, as part of a game-playing program, computes some good-bye sentence at the end of the game. You might come up with a definition like this one:
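A sketch of such a definition (the name, the thresholds, and the sentences are illustrative, not the book's):

; Number -> String
; compute a good-bye sentence from the player's final score
(define (good-bye score)
  (cond
    [(< score 10) "better luck next time"]
    [(< score 20) "good game"]
    [else "you are a master player"]))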
If you just look at the cond expression, it is impossible to know which of the three cond clauses is going to be used. And that, of course, is the point of a function: the function deals with many different inputs, e.g., 2, 3, 7, 18, 29, and so on. For each of these inputs, it may have to proceed in a different manner. Distinguishing among the different classes of inputs is the purpose of the cond expression.
Exercise 41: A cond expression is really just an expression and may therefore show up in the middle of another expression. Evaluate such an expression for two distinct values of y: 100 and 210. Nesting cond expressions can eliminate common expressions. Recall the following function definition from Prologue: How to Program:
(define (create-rocket-scene.v5 h)
  (cond
    [(<= h ROCKET-CENTER-TO-BOTTOM)
     (place-image ROCKET 50 h MTSCN)]
    [(> h ROCKET-CENTER-TO-BOTTOM)
     (place-image ROCKET 50 ROCKET-CENTER-TO-BOTTOM MTSCN)]))

As you can see, both branches of the cond expression have the following shape:

  (place-image ROCKET 50 ... MTSCN)

with the dots marking the only place where the two branches differ.
; A TrafficLight shows one of three colors:
; – "red"
; – "green"
; – "yellow"
; interp. each element of TrafficLight represents which colored
;   bulb is currently turned on
; TrafficLight -> TrafficLight
; given state s, determine the next state of the traffic light
(check-expect (traffic-light-next "red") "green")
(define (traffic-light-next s)
  (cond
    [(string=? "red" s) "green"]
    [(string=? "green" s) "yellow"]
    [(string=? "yellow" s) "red"]))
Exercise 42: If you copy and paste the above function definition into the definitions area of DrRacket and click RUN, DrRacket highlights two of the three cond lines. This coloring tells you that your test cases do not cover all possible cases. Add enough cases to make DrRacket happy.
Exercise 43: Design a function that renders the state of a traffic light as a solid circle of the appropriate color. When you have tested this function sufficiently, enter a big-bang expression that displays your traffic light and that changes its state on every clock tick.
The main ingredient of an enumeration is that it defines a collection of data as one of a number of pieces of data. Each item of the enumeration explicitly spells out which piece of data belongs to the class of data being named. Usually, the piece of data is just shown as is; on some occasions, an item of an enumeration is an English sentence that describes a finite number of pieces of data with a single phrase.
; A 1String is one of:
; – "q"
; – "w"
; – "e"
; – "r"
; – "t"
; – "y"
; – ...
; Position is a Number.
; interp. distance between the left periphery and the ball

; Position KeyEvent -> Position
; compute the next location of the ball
(check-expect (nxt 13 "left") 8)
(check-expect (nxt 13 "right") 18)
(check-expect (nxt 13 "a") 13)
When programs rely on data definitions that are defined by a programming language (such as BSL) or its libraries (such as the "universe" teachpack), it is common that they use only a part of the enumeration. To illustrate this point, let us look at a representative problem.
Sample Problem: Design an interactive program that moves a red dot left or right on a horizontal line in response to keystrokes on the "left" or "right" arrow key.
Figure 10 presents two solutions to this problem. The function on the left is organized according to the basic idea of using one cond line per clause in the data definition of the input, here KeyEvent. In contrast, the right-hand side displays a function that uses the three essential lines: two for the keys that matter and one for everything else. The re-ordering is appropriate because only two of the cond-lines are relevant and they can be cleanly separated from other lines. Naturally, this kind of re-arrangement is done after the function is designed properly.
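A sketch of the condensed, right-hand-side style of handler, using the name nxt and the step size of 5 from the examples above:

; Position KeyEvent -> Position
; compute the next location of the dot
(define (nxt p k)
  (cond
    [(string=? "left" k) (- p 5)]
    [(string=? "right" k) (+ p 5)]
    [else p]))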
Sample Problem: Design a program that simulates the landing of a UFO.
; WorldState is a Number
; interp. height of UFO (from top)

; constants:
(define WIDTH 300)
(define HEIGHT 100)
(define CLOSE (/ HEIGHT 3))

; visual constants:
(define MT (empty-scene WIDTH HEIGHT))
(define UFO
  (overlay (circle 10 "solid" "green")
           (rectangle 40 2 "solid" "green")))

; WorldState -> WorldState
; compute next location of UFO
(check-expect (nxt 11) 14)
(define (nxt y)
  (+ y 3))

; WorldState -> Image
; place UFO at given height into the center of MT
(check-expect (render 11) (place-image UFO (/ WIDTH 2) 11 MT))
(define (render y)
  (place-image UFO (/ WIDTH 2) y MT))

; run program run
; WorldState -> WorldState
(define (main y0)
  (big-bang y0
    (on-tick nxt)
    (to-draw render)))
Sample Problem: The status line should say "descending" when the UFO’s height is above one third of the height of the canvas. It should switch to "closing in" below that. And finally, when the UFO has reached the bottom of the canvas, the status should notify the player that the UFO has "landed."
In this case, we don’t have a finite enumeration of distinct elements or distinct subclasses of data. After all, conceptually, the interval between 0 and HEIGHT (for any HEIGHT greater than 0) contains infinitely many numbers and a large number of integers. Therefore we use intervals to superimpose some organization on the generic data definition, which just uses “numbers” to describe the class of coordinates.
An interval is a description of a class of (real or rational or integer) numbers via boundaries. The simplest interval has two boundaries: left and right. If the left boundary is to be included in the interval, we say the interval is closed on the left. Similarly, a right-closed interval includes its right boundary. Finally, if an interval does not include a boundary, it is said to be open at that boundary.
Pictures of, and notations for, intervals use brackets for closed boundaries and parentheses for open boundaries. Here are four simple intervals:

- [3,5] is a closed interval: it contains 3, 5, and every number in between;
- (3,5] is a left-open interval: it contains 5 and every number between 3 and 5, but not 3 itself;
- [3,5) is a right-open interval: it contains 3 and every number between 3 and 5, but not 5 itself; and
- (3,5) is an open interval: it contains only the numbers strictly between 3 and 5.
For the UFO example, it is natural to think of the WorldState numbers as three intervals on a number line:

- the upper interval goes from 0 to CLOSE;
- the middle one starts at CLOSE and reaches HEIGHT; and
- the lower, invisible interval is just a single line at HEIGHT.
Visualizing the data definition in this manner helps with the design of functions in many ways. First, it immediately suggests how to pick examples. Clearly we want the function to work inside of all the intervals and we want the function to work properly at the ends of each interval. Second, the image as well as the data definition tell us that we need to formulate a condition that determines whether or not some "point" is within one of the intervals.
Putting the two together also raises a question, namely, how exactly the function should deal with the end points. In the context of our example, two points on the number line belong to two intervals: CLOSE belongs to the upper interval and the middle one, while HEIGHT seems to fall into the middle one and the lower one. Such overlaps usually imply problems for programs, and they should be avoided.
; WorldState -> WorldState
(define (f y)
  (cond
    [(<= 0 y CLOSE) ...]
    [(<= CLOSE y HEIGHT) ...]
    [(>= y HEIGHT) ...]))
; WorldState -> Image
; add a status line to the scene created by render
(check-expect (render/status 10)
              (place-image (text "descending" 11 "green")
                           10 10
                           (render 10)))
(define (render/status y)
  (cond
    [(<= 0 y CLOSE)
     (place-image (text "descending" 11 "green") 10 10 (render y))]
    [(and (< CLOSE y) (<= y HEIGHT))
     (place-image (text "closing in" 11 "orange") 10 10 (render y))]
    [(> y HEIGHT)
     (place-image (text "landed" 11 "red") 10 10 (render y))]))
Sample Problem: The status line, positioned at (20,20), should say "descending" when the UFO’s height is above one third of the height of the canvas. It should switch to "closing in" below that. And finally, when the UFO has reached the bottom of the canvas, the status should notify the player that the UFO has "landed."
; WorldState -> Image
; add a status line to the scene created by render
(check-expect (render/status 10)
              (place-image (text "descending" 11 "green")
                           10 10
                           (render 10)))
(define (render/status y)
  (place-image (cond
                 [(<= 0 y CLOSE) (text "descending" 11 "green")]
                 [(and (< CLOSE y) (<= y HEIGHT)) (text "closing in" 11 "orange")]
                 [(> y HEIGHT) (text "landed" 11 "red")])
               10 10
               (render y)))
An interval distinguishes different subclasses of numbers; an enumeration spells out item for item the useful elements of an existing class of data. Data definitions that use itemizations generalize intervals and enumerations. They allow the combination of any existing data classes (defined elsewhere) with each other and with individual pieces of data.
; A KeyEvent is one of:
; – 1String
; – "left"
; – "right"
; – "up"
; – "down"
; – ...
; string->number : String -> NorF
; converts the given string into a number;
; produces false if impossible
; A NorF is one of:
; – false
; – a Number
; NorF -> Number
; add 3 to the given number; 3 otherwise
(check-expect (add3 false) 3)
(check-expect (add3 0.12) 3.12)
(define (add3 x)
  (cond
    [(boolean? x) 3]
    [else (+ x 3)]))
Let’s solve a somewhat more purposeful design task:
Sample Problem: Design a program that launches a rocket when the user of your program presses the space bar and displays the rising rocket. The rocket should move upward at a rate of three pixels per clock tick.
- the word “height” could refer to the distance between the ground and the rocket’s point of reference, say, its center; or
- it might mean the distance between the top of the canvas and the reference point.
Exercise 45: The design recipe for world programs demands that you translate information into data and vice versa to ensure a complete understanding of the data definition. It is often best to draw some world scenarios and to represent them with data and, conversely, to pick some data examples and to draw pictures that match them. Do so for the LR definition, including at least HEIGHT and 0 as examples.
In reality, rocket launches come with count-downs:
Sample Problem: Design a program that launches a rocket when the user presses the space bar. At that point, the simulation starts a count-down for three ticks, before it displays the scenery of a rising rocket. The rocket should move upward at a rate of three pixels per clock tick.
Now that we have the data definition, we write down the obvious physical and graphical constants:
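A sketch of such constants; the names match those used below, but the concrete values and the stand-in rocket image are our choices:

; physical constants:
(define HEIGHT 300) ; height of the canvas, in pixels
(define WIDTH 100)  ; width of the canvas, in pixels
(define YDELTA 3)   ; how far the rocket rises per clock tick

; graphical constants:
(define BACKG (empty-scene WIDTH HEIGHT))
(define ROCKET (rectangle 5 30 "solid" "red")) ; stand-in for the book's rocket image
(define ROCKET-CENTER (/ (image-height ROCKET) 2))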
- one for displaying the current state of the world as an image:
- one for reacting to the user’s key presses, the space bar in this problem:
- and one for reacting to clock ticks once the simulation is launched:
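The resulting wish-list entries might read like this (a sketch; the names show, launch, and fly match the definitions that follow, but the purpose wording is ours):

; show : LRCD -> Image
; render the current state of the launch as an image

; launch : LRCD KeyEvent -> LRCD
; start the count-down when the user presses the space bar

; fly : LRCD -> LRCD
; raise the rocket by YDELTA per clock tick once it is moving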
(check-expect (show "resting") (place-image ROCKET 10 (- HEIGHT (/ (image-height ROCKET) 2)) BACKG)) (check-expect (show -2) (place-image (text "-2" 20 "red") 10 (* 3/4 WIDTH) (place-image ROCKET 10 (- HEIGHT (/ (image-height ROCKET) 2)) BACKG))) (check-expect (show 53) (place-image ROCKET 10 53 BACKG))
A second look at the examples reveals that making examples also means making choices. Nothing in the problem statement actually says how exactly the rocket is displayed before it is launched, but showing it resting on the ground is natural. Similarly, nothing says to display a number during the count-down, but doing so adds a nice touch. Last but not least, if you solved exercise 45, you also know that HEIGHT and 0 are special points for the third clause of the data definition.
Clearly, (show -3) and (show -1) should produce images like the one for (show -2). After all, the rocket still rests on the ground, even if the count down numbers differ.
- The case for (show HEIGHT) is different. According to our agreement, HEIGHT represents the state when the rocket has just been launched. Pictorially this means the rocket should still rest on the ground. Based on the last test case above, it is easy to write down a test case that expresses this insight. Except that if you evaluate the “expected value” expression by itself in DrRacket’s interactions area, you see that the rocket is halfway underground. This shouldn’t be the case, of course, meaning we need to adjust this test case and the ones above; one possible adjustment is sketched below.
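One possible adjustment, writing ROCKET-CENTER for (/ (image-height ROCKET) 2) and consistent with the definition of show that follows:

(check-expect (show "resting")
              (place-image ROCKET 10 (- HEIGHT ROCKET-CENTER) BACKG))
(check-expect (show HEIGHT)
              (place-image ROCKET 10 (- HEIGHT ROCKET-CENTER) BACKG))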
Last but not least, you should determine the result you now expect from (show 0). It is a simple but revealing exercise.
Exercise 46: Why would it be incorrect to formulate the first condition as (string=? "resting" x)? Conversely, formulate a completely accurate condition, i.e., a Boolean expression that evaluates to true precisely when x belongs to the first subclass of LRCD. Similarly, what is a completely accurate condition for the third clause?
(define (show x)
  (cond
    [(string? x)
     (place-image ROCKET 10 (- HEIGHT ROCKET-CENTER) BACKG)]
    [(<= -3 x -1)
     (place-image (text (number->string x) 20 "red")
                  10 (* 3/4 WIDTH)
                  (place-image ROCKET 10 (- HEIGHT ROCKET-CENTER) BACKG))]
    [(>= x 0)
     (place-image ROCKET 10 (- x ROCKET-CENTER) BACKG)]))
Exercise 47: An expression of the shape (place-image ROCKET 10 ... BACKG) appears three different times in the function: twice to draw a resting rocket and once to draw a flying rocket. Define an auxiliary function that performs this work and thus shorten show. Why is this a good idea? We discussed this idea in the Prologue.
(check-expect (launch "resting" " ") -3) (check-expect (launch "resting" "a") "resting") (check-expect (launch -3 " ") -3) (check-expect (launch -1 " ") -1) (check-expect (launch 33 " ") 33) (check-expect (launch 33 "a") 33)
; LRCD -> LRCD
; raise the rocket by YDELTA if it is moving already
(check-expect (fly "resting") "resting")
(check-expect (fly -3) -2)
(check-expect (fly -2) -1)
(check-expect (fly -1) HEIGHT)
(check-expect (fly 10) (- 10 YDELTA))
(check-expect (fly 22) (- 22 YDELTA))
(define (fly x)
  (cond
    [(string? x) x]
    [(<= -3 x -1) (if (= x -1) HEIGHT (+ x 1))]
    [(>= x 0) (- x YDELTA)]))
The design of fly, the clock-tick handler, again exploits the organization of the LRCD data definition: one cond clause per subclass, with the count-down clause switching to the height HEIGHT when it expires.
Exercise 48: Define main2 so that you can launch the rocket and watch it lift off. Read up on the on-tick clause and specify that one tick is actually one second.
If you watch the entire launch, you will notice that once the rocket reaches the top, something curious happens. Explain what happens. Add a stop-when clause to main2 so that the simulation of the lift-off stops gracefully when the rocket is out of sight.
Solving the exercise demonstrates that you now have a complete, working world program.
What the preceding three sections have clarified is that the design of functions can and must exploit the organization of the data definition. Specifically, if a data definition singles out certain pieces of data or specifies ranges of data, then the creation of examples and the organization of the function reflects these cases and ranges.
Sample Problem: The state of Tax Land has created a three-stage sales tax to cope with its budget deficit. Inexpensive items, those costing $1,000 or less, are not taxed. Luxury items, with a price of $10,000 or more, are taxed at the rate of eight (8) percent. Everything in between comes with a five (5) percent sales tax.
Design a function for a cash register that computes the sales tax for each item. That is, design a function that, given the price of an item, computes the amount of tax to be charged.
When the problem statement distinguishes different classes of input information, you need carefully formulated data definitions.
A data definition should explicitly enumerate different subclasses of data or, in some cases, just individual pieces of data. Each of those subclasses represents a subclass of information. The key is that each subclass of data is distinct from every other subclass, so that your function can distinguish the subclasses, too. Example: Our sample problem deals with prices and taxes, which are usually positive numbers. It also clearly distinguishes three ranges of positive numbers, sketched below. Make sure you understand how these three ranges relate to the original problem.
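A sketch of a data definition with three such ranges (how the boundary prices are assigned is settled later in this section):

; A Price falls into one of three intervals:
; – 0 through 1000
; – 1000 through 10000
; – 10000 and above
; interp. the price of an item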
As far as the signature, purpose statement, and function header are concerned, nothing changes. Example: Here is the material for our running example:
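A sketch of the signature, purpose statement, and header (the name sales-tax matches the tests below; the purpose wording is ours):

; Price -> Number
; compute the amount of tax charged for a given price p
(define (sales-tax p) 0)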
For functional examples, however, it is imperative that you pick at least one example per subclass in the data definition. Also, if a subclass is a finite range, be sure to pick examples from the boundaries of the range and from the interior.
Example: Since our data definition involves three distinct intervals, let us pick all boundary examples and one price from inside each interval and determine the amount of tax for each: 0, 537, 1000, 1282, 10000, and 12017. Before you read on, try to calculate the tax for each of these prices. Here is our first attempt:

  price:  0     537    1000   1282    10000   12017
  tax:    0     0      ???    64.10   ???     961.36

Stop for a moment and think about the table entries with question marks.
The point of these question marks is to point out that the problem statement uses the somewhat vague phrase “those costing $1,000 or less” and “$10,000 or more” to specify the tax table. While a programmer may immediately jump to the conclusion that these words mean “strictly less” or “strictly more,” a lawmaker or tax accountant may have meant to say “less or equal” or “more or equal,” respectively. Being skeptical, we decide here that Tax Land legislators always want more money to spend, so the tax rate for $1,000 is 5% and the rate for $10,000 is 8%. A programmer in a company would have to ask the tax-law specialist in the company or the state’s tax office.
Once you have figured out how the boundaries are supposed to be interpreted in the domain, you should revise the data definition accordingly; modify the data definition above to do so. Before we go on, let us turn some of the examples into test cases:
(check-expect (sales-tax 537) 0)
(check-expect (sales-tax 1000) (* 0.05 1000))
(check-expect (sales-tax 12017) (* 0.08 12017))

Take a close look. Instead of just writing down the expected result, we write down how to compute the expected result. This makes it easier later to formulate the function definition.
The biggest novelty is the conditional template. In general,
the template mirrors the organization of subclasses with a cond. This slogan means two concrete things to you. First, the function’s body must be a conditional expression with as many clauses as there are distinct subclasses in the data definition. If the data definition mentions three distinct subclasses of input data, you need three cond clauses; if it is about 17 subclasses, the cond expression contains 17 clauses. Second, you must formulate one condition expression per cond clause. Each expression involves the function parameter and identifies one of the subclasses of data in the data definition.
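For sales-tax, the template therefore looks roughly like this, with the boundaries assigned according to the Tax Land decision above (the dots are placeholders for work still to be done):

(define (sales-tax p)
  (cond
    [(and (<= 0 p) (< p 1000)) ...]
    [(and (<= 1000 p) (< p 10000)) ...]
    [(>= p 10000) ...]))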
The fifth step is to define the function. Given that the function body already contains a schematic cond expression, it is natural to start from the various cond lines. For each cond line, you assume that the input parameter meets the condition and you take a look at a corresponding example. To formulate the corresponding result expression, you write down the computation for this example as an expression that involves the function parameter. Ignore all other possible kinds of input data when you work on one line; the other cond clauses take care of those.
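Carrying this out for sales-tax yields a definition along these lines (a sketch consistent with the tests above):

(define (sales-tax p)
  (cond
    [(and (<= 0 p) (< p 1000)) 0]
    [(and (<= 1000 p) (< p 10000)) (* 0.05 p)]
    [(>= p 10000) (* 0.08 p)]))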
Last but not least, run the tests and make sure that the tests cover all cond clauses in the function.
Let us exploit our knowledge to create a world that simulates a traffic light. Just as a reminder, a traffic light in the US functions as follows. When the light is green and it is time to stop the traffic, the light turns yellow and after that it turns red. Conversely, when the light is red and it is time to get the traffic going, the light switches to green. There are a few other modes for a traffic light, but we ignore those here.
The figure above summarizes this description as a state transition diagram. Such a diagram consists of states and arrows that connect these states. Each state is one possible configuration. Here the states depict a traffic light in one particular state: red, yellow, or green. Each arrow shows how the world can change, from which state it can transition to which other state. Our sample diagram contains three arrows, because there are three possible ways in which the traffic light can change. Labels on the arrows indicate the reason for changes. A traffic light transitions from one state to another as time passes by.
In many cases, state transition diagrams have only a finite number of states and arrows. Computer scientists call such diagrams finite state machines or finite state automata, FSA for short. While FSAs look simple at first glance, they play an important role in computer science.
To create a world program for an FSA, we must first pick a data representation for the possible “states of the world,” which, according to Designing World Programs, represents those aspects of the world that may change in some ways, as opposed to those that remain the same. In the case of our traffic light, what changes is the color of the light, i.e., which bulb is turned on. The size of the bulbs, their arrangement (horizontal or vertical), and other aspects don’t change. Since there are only three states, the three strings of the TrafficLight data definition above are a natural choice of representation.
Our second figure shows how to interpret the three elements of TrafficLight. Like the original figure, it consists of three states, arranged in such a way that it is trivial to interpret each data element as a representation of a concrete “world state” and vice versa. Also, the arrows are now labeled with tick, suggesting that our world program uses the passing of time as the trigger that changes the state of the traffic light. Alternatively, we could use keystrokes or mouse events to switch the light, which would be especially appropriate if we wanted a simulation of the manual operation of a traffic light.
; TrafficLight -> TrafficLight
; determine the next state of the traffic light, given current-state
(define (tl-next current-state) current-state)

; TrafficLight -> Image
; render the current state of the traffic light as an image
(define (tl-render current-state) (empty-scene 100 30))
Exercise 50: The goal of this exercise is to finish the design of a world program that simulates the traffic light FSA. Here is the main function:
; TrafficLight -> TrafficLight
; simulate a traffic light that changes with each tick
(define (traffic-light-simulation initial-state)
  (big-bang initial-state
    [to-draw tl-render]
    [on-tick tl-next 1]))

The function uses its argument as the initial state for the big-bang expression. It tells DrRacket to re-draw the state of the world with tl-render and to react to clock ticks with tl-next. Also note it informs the computer that the clock should tick once per second (how?).
In short, you have two design tasks to complete: tl-render and tl-next.
Hint 1: Create a DrRacket buffer that includes the data definition for TrafficLight and the function definitions of tl-next and tl-render. For the design of the latter, we include some test cases:
(check-expect (tl-render "red") ) (check-expect (tl-render "yellow") ) (check-expect (tl-render "green") )Hint 2: We started from the following graphical constants:and introduced additional constants for the diameter, the width, the height, etc. You may also find this auxiliary function helpful:
; TrafficLight TrafficLight -> Image
; render the c colored bulb of the traffic light,
; when on is the current state
(define (bulb on c)
  (if (light=? on c)
      (circle RAD "solid" c)
      (circle RAD "outline" c)))
Hint 3: Look up the image primitive place-image, because it simplifies the task quite a bit.
Here is another finite-state problem that introduces a few additional complications:
Sample Problem: Design a world program that simulates the working of a door with an automatic door closer. If this kind of door is locked, you can unlock it with a key. An unlocked door is still closed but pushing at the door opens it. Once you have passed through the door and you let go, the automatic door closer takes over and closes the door again. When a door is closed, you can lock it again.
To tease out the essential elements, we again draw a transition diagram; see the left-hand side of the figure. Like the traffic light, the door has three distinct states: locked, closed, and open. Locking and unlocking are the activities that cause the door to transition from the locked to the closed state and vice versa. As for opening an unlocked door, we say that you need to push the door open. The remaining transition is unlike the others, because it doesn’t require any activities on your side. Instead, the door closes automatically over time. The corresponding transition arrow is labeled with *time* to emphasize this bit.
The next step of a world design demands that we translate the actions in our domain, the domain of a door, into interactions with the computer that the "universe" teachpack can deal with. This suggests three functions:
door-closer, which closes the door during one tick;
door-actions, which manipulates the door in response to pressing a key; and
door-render, which translates the current state of the door into an image.
(check-expect (door-closer "locked") "locked") (check-expect (door-closer "closed") "closed") (check-expect (door-closer "open") "closed")
The second function, door-actions, takes care of the remaining three arrows of the diagram. Functions that deal with keyboard events consume both a world and a key event, meaning the signature is as follows:

; DoorState KeyEvent -> DoorState
; compute the next door state from the current state s and the given key event k
(check-expect (door-actions "locked" "u") "closed") (check-expect (door-actions "closed" "l") "locked") (check-expect (door-actions "closed" " ") "open") (check-expect (door-actions "open" "a") "open") (check-expect (door-actions "closed" "a") "closed") (define (door-actions s k) (cond [(and (string=? "locked" s) (string=? "u" k)) "closed"] [(and (string=? "closed" s) (string=? "l" k)) "locked"] [(and (string=? "closed" s) (string=? " " k)) "open"] [else s]))
; A DoorState is one of:
; – "locked"
; – "closed"
; – "open"

; — — — — — — — — — — — — — — — — — — — — — — — — — —
; DoorState -> DoorState
; closes an open door over the period of one tick
(check-expect (door-closer "locked") "locked")
(check-expect (door-closer "closed") "closed")
(check-expect (door-closer "open") "closed")
(define (door-closer state-of-door)
  (cond
    [(string=? "locked" state-of-door) "locked"]
    [(string=? "closed" state-of-door) "closed"]
    [(string=? "open" state-of-door) "closed"]))

; — — — — — — — — — — — — — — — — — — — — — — — — — —
; DoorState KeyEvent -> DoorState
; three key events simulate actions on the door
(check-expect (door-actions "locked" "u") "closed")
(check-expect (door-actions "closed" "l") "locked")
(check-expect (door-actions "closed" " ") "open")
(check-expect (door-actions "open" "a") "open")
(check-expect (door-actions "closed" "a") "closed")
(define (door-actions s k)
  (cond
    [(and (string=? "locked" s) (string=? "u" k)) "closed"]
    [(and (string=? "closed" s) (string=? "l" k)) "locked"]
    [(and (string=? "closed" s) (string=? " " k)) "open"]
    [else s]))

; — — — — — — — — — — — — — — — — — — — — — — — — — —
; DoorState -> Image
; the current state of the door as a large red text
(check-expect (door-render "closed") (text "closed" 40 "red"))
(define (door-render s)
  (text s 40 "red"))

; — — — — — — — — — — — — — — — — — — — — — — — — — —
; DoorState -> DoorState
; simulate a door with an automatic door closer
(define (door-simulation initial-state)
  (big-bang initial-state
    (on-tick door-closer)
    (on-key door-actions)
    (to-draw door-render)))
Suppose you want to design a world program that simulates a ball bouncing back and forth between two of the four walls. For simplicity, assume that it always moves two pixels per clock tick. If you follow the design recipe, your first focus is a data representation for all those things that change over time. A bouncing ball with constant speed still has two always-changing properties: the location of the ball and the direction of its movement. The problem is that the "universe" teachpack keeps track of only one value for you, and so the question arises how one piece of data can represent two changing quantities of information. Well, we could play some mathematical tricks that would "merge" two numbers into a single number in such a way that we could later extract them again. While these tricks are well-known to trained computer scientists, it should be clear to every budding programmer that such coding tricks obscure the true intentions behind a program. We therefore don't play this kind of game.
Here is another scenario that raises the same question. Your cell phone is mostly a few million lines of software with some plastic attached to them. Among other things, it administrates your list of contacts. Before we even consider a representation for your ever-growing list of phone numbers and friends, let us ponder the question of how to represent the information about a single contact, assuming each contact comes with a name, a phone number, and an email address. Especially in a context where you have lots and lots of contacts, it is important to “glue” together all the information that belongs to one contact; otherwise the various pieces could get scrambled by accident.
Every programming language provides some mechanism (possibly several) for combining several pieces of data into one piece and, conversely, for retrieving those values later. BSL is no exception; it offers structure type definitions as the fundamental mechanism for combining several values into one piece of data. More generally, a structure type definition introduces many different functions, including one for creating structure instances, or structures for short, and several for extracting values from instances (keys for opening the compartments of a box and retrieving their contents). The chapter starts with the mechanics of defining structure types and the idea of creating instances of these structure types, and then discusses the entire universe of BSL data. After presenting a design recipe for functions that consume structures, we end the chapter with a look at the use of structures in world programs.
A First Look: A location on a world canvas is uniquely identified by two pieces of data: the distance from the left margin and the distance from the top margin. The first is called the x-coordinate and the second one the y-coordinate. You may have encountered Cartesian points in your mathematics courses in school; they are closely related, though their y coordinates mean something slightly different from the y coordinates of posns.
(check-expect (distance-to-0 (make-posn 3 4)) 5)
(check-expect (distance-to-0 (make-posn 8 6)) 10)
(check-expect (distance-to-0 (make-posn 5 12)) 13)
Next we can turn our attention to the definition of the function. The examples imply that the design of distance-to-0 doesn't need to distinguish between different situations. Still, we are stuck, because distance-to-0 has a single parameter that represents the entire pixel location, but we need the two coordinates to compute the distance. Put differently, we know how to combine two numbers into a posn structure using make-posn, but we don't know how to extract these numbers from a posn structure.
An alternative terminology is "to access the fields of a record." We prefer to think of structure values as containers from which we can extract other values. Of course, BSL provides operations for extracting values from structures. For posn structures, there are two such operations, one per coordinate: posn-x and posn-y. The former operation extracts the x coordinate; the latter extracts the y coordinate.
The function squares (posn-x a-posn) and (posn-y a-posn), which represent the x and y coordinates, sums up the results, and takes the square root. With DrRacket, we can also quickly check that our new function produces the proper results for our examples.
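Spelled out, that description corresponds to the following definition (a sketch; the book's own listing is not reproduced in this excerpt):

; Posn -> Number
; computes the distance of a-posn to the origin
(define (distance-to-0 a-posn)
  (sqrt
    (+ (sqr (posn-x a-posn))
       (sqr (posn-y a-posn)))))

With this definition, the three check-expects above pass.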
Evaluate one of the applications of distance-to-0 from the examples above by hand. Show all steps. Assume that sqr performs its computation in a single step. Check the results with DrRacket's stepper.
When a point is placed in the context of a city grid, you cannot walk a straight path from it to the origin; instead you must follow the grid pattern. For a point such as (3,4), a local might tell you "go three blocks this way, turn right, and then go four blocks" to give you directions to get to the origin of the grid.
Design the function manhattan-distance, which measures the Manhattan distance of the given posn structure to the origin.
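One possible solution sketch, not necessarily the book's (the abs calls guard against negative coordinates, which cannot occur on a canvas anyway):

; Posn -> Number
; measures the Manhattan distance of p to the origin
(check-expect (manhattan-distance (make-posn 3 4)) 7)
(define (manhattan-distance p)
  (+ (abs (posn-x p)) (abs (posn-y p))))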
Defining a Structure: Unlike numbers or Boolean values, structures such as posn usually don’t come with a programming language. Only the mechanism to define structure types is provided; the rest is left up to the programmer. This is also true for BSL.
Specifically, a structure type definition introduces three kinds of operations: one constructor, a function that creates structure instances from as many values as there are fields; as mentioned, structure is short for structure instance. The phrase structure type is a generic name for the collection of all possible instances.
one selector per field, which extracts the value of the field from a structure instance; and
one structure predicate, which like ordinary predicates distinguishes instances from all other kinds of values.
One curious aspect of a structure type definition is that it makes up names for the various new operations it creates. Specifically, for the name of the constructor, it prefixes the structure name with "make-" and for the names of the selectors it postfixes the structure name with the field names. Finally, the predicate is just the structure name with "?" added; we pronounce this question mark as "huh" when we read program fragments aloud.
(define-struct entry (name phone email))
the constructor make-entry, which consumes three values and creates an instance of entry;
the selectors entry-name, entry-phone, and entry-email, each of which consumes an instance of entry and produces the value of the corresponding field; and
the predicate entry?.
(make-entry "Sarah Lee" "666-7771" "[email protected]")
(make-entry "Tara Harper" "666-7770" "[email protected]")
> (entry-name pl)
> (entry-name bh)
> (entry-name (make-posn 42 5))
entry-name: expects an entry, given (posn 42 5)
> (entry-email pl)
> (entry-phone pl)
Exercise 54: Write down the names of the functions (constructors, selectors, and predicates) that the following structure type definitions define. Make sensible guesses as to what kind of values go with which fields and create at least one instance per structure type definition. Then draw box representations for each of them.
A positive number means the ball moves in one direction.
A negative number means it moves in the opposite direction.
(define-struct ball (location velocity))
Interpret this program fragment in terms of a "world scenario" and then create other instances of ball.
Objects in games and simulations don’t always move along vertical or horizontal lines. They move in some “diagonal” manner across the screen. Describing both the location and the velocity of a ball that moves across a 2-dimensional world canvas demands two numbers each: one per direction. For the location part, the two numbers represent the x and y coordinates. For the velocity part they are the changes in the x and y direction; in other words, these “change numbers” must be added to the respective coordinates if we wish to find out where the object is next.
(define-struct vel (deltax deltay))
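The interactions below mention an instance called ball1 whose definition is not included in this excerpt. Here is one plausible definition of a 2-dimensional ball (the numbers are made-up assumptions):

(define ball1
  (make-ball (make-posn 30 40)   ; location: 30 pixels from the left, 40 from the top
             (make-vel -10 5)))  ; velocity: 10 pixels to the left and 5 down per tick

With such a definition, (ball-velocity ball1) evaluates to (make-vel -10 5), (vel-deltax (ball-velocity ball1)) evaluates to -10, and (posn-x (ball-velocity ball1)) signals the error shown below.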
> (ball-velocity ball1)
> (vel-deltax (ball-velocity ball1))
> (posn-x (ball-velocity ball1))
posn-x: expects a posn, given (vel ...)
(define-struct ballf (x y deltax deltay))

Create an instance of ballf that is interpreted in the same way as ball1.
(define-struct centry (name home office cell))
(define-struct phone (area number))

(make-centry "Shriram Fisler"
             (make-phone 207 "363-2421")
             (make-phone 101 "776-1099")
             (make-phone 208 "112-9981"))
In sum, arranging information in a hierarchical manner is natural. The best way to represent such information with data is to mirror the nesting with nested structure instances. Doing so makes it easy to interpret the data in the application domain of the program, and it is also straightforward to go from examples of information to data. Of course, it is really the task of data definitions to facilitate this going back and forth between information and data. We have ignored data definitions so far, but we are going to catch up with this in the next section.
Data in Structures: Up to this point, data definitions have been rather boring. We either used built-in collections of data to represent information (numbers, Boolean values, strings) or we specified an itemization (interval or enumeration), which just restricts an existing collection. The introduction of structures adds a bit of complexity to this seemingly simple step.
(define-struct posn (x y))
; A Posn is a structure:
;   (make-posn Number Number)
; interp. the number of pixels from left and from top

(define-struct entry (name phone email))
; An Entry is a structure:
;   (make-entry String String String)
; interp. name, 7-digit phone number, and email address of a contact

(define-struct ball (location velocity))
; A Ball-1d is a structure:
;   (make-ball Number Number)
; interp. 1: the position from top and the velocity
; interp. 2: the position from left and the velocity

; A Ball-2d is a structure:
;   (make-ball Posn Vel)
; interp. 2-dimensional position with a 2-dimensional velocity

(define-struct vel (deltax deltay))
; A Vel is a structure:
;   (make-vel Number Number)
; interp. velocity in number of pixels per clock tick for each direction
Note also that Ball-2d refers to another one of our data definitions, namely, the one for Vel. While all other data definitions have thus far referred to built-in data collections (numbers, Boolean values, strings), it is perfectly acceptable and indeed common that one of your data definitions refers to another. Later, when you design programs, such connections provide some guidance for the organization of programs. Of course, at this point, it should really raise the question of what data definitions really mean, and this is what the next section deals with.
Next formulate a data definition for phone numbers using this structure type definition:
(define-struct phone# (area switch phone))

Use numbers to describe the content of the three fields but be as precise as possible.
In the first subsection, we developed distance-to-0, a function that consumed a structure and produced a number. To conclude this section, we look at a few more such functions before we formulate some general principles in the next section.
Sample Problem: Your team is designing a program that keeps track of the last mouse click on a 100 x 100 canvas. Together you chose Posn as the data representation for representing the x and y coordinate of the mouse click. Design a function that consumes a mouse click and a 100 x 100 scene and adds a red spot to the latter where the former occurred.
(check-expect (scene+dot (make-posn 10 20) MTS)
              (place-image DOT 10 20 MTS))
(check-expect (scene+dot (make-posn 88 73) BLU)
              (place-image DOT 88 73 BLU))
; visual constants
(define MTS (empty-scene 100 100))
(define BLU
  (place-image (rectangle 25 25 "solid" "blue") 50 50 MTS))
(define DOT (circle 3 "solid" "red"))

; Posn Image -> Image
; adds a red spot to s at p
(check-expect (scene+dot (make-posn 10 20) MTS)
              (place-image DOT 10 20 MTS))
(check-expect (scene+dot (make-posn 88 73) BLU)
              (place-image DOT 88 73 BLU))
(define (scene+dot p s)
  (place-image DOT (posn-x p) (posn-y p) s))
Sample Problem: Your team is designing a game program that keeps track of an object that moves across the canvas at changing speed. The chosen data representation is a structure that combines the object's current location with its current velocity. (UFO abbreviates "unidentified flying object"; in the 1950s, people called these things "flying saucers" and similar names.)
(define-struct velocity (dx dy))
; A Velocity is a structure:
;   (make-velocity Number Number)
; interp. (make-velocity d e) means that the object moves d steps
;   along the horizontal and e steps along the vertical per tick

(define-struct ufo (loc vel))
; A UFO is a structure:
;   (make-ufo Posn Velocity)
; interp. (make-ufo p v) is at location p, moving at velocity v

Design the function ufo-move, which moves some given UFO for one tick of the clock.
; UFO -> UFO
; move the ufo u, i.e., compute its new location in one clock
; tick from now and leave the velocity as is
(check-expect (ufo-move u1) u3)
(check-expect (ufo-move u2) (make-ufo (make-posn 17 77) v2))
(define (ufo-move u) u)
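The tests mention sample values u1, u2, u3, and v2, whose definitions are not part of this excerpt. Here is one set of definitions that is consistent with both tests (all numbers are assumptions):

(define v1 (make-velocity 8 -3))
(define v2 (make-velocity -5 -3))
(define u1 (make-ufo (make-posn 22 80) v1))
(define u2 (make-ufo (make-posn 22 80) v2))
(define u3 (make-ufo (make-posn 30 77) v1))  ; u1 after one clock tick

With the completed definition below, both check-expects pass.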
(define (ufo-move u)
  (make-ufo (posn+ (ufo-loc u) (ufo-vel u))
            (ufo-vel u)))
Try it out. Enter these definitions and their test cases into the definitions area of DrRacket and make sure they work. It is the first time that we made a “wish” and you need to make sure you understand how the two functions work together.
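The wished-for helper posn+ adds a Velocity to a Posn, componentwise. Here is one way to grant the wish, consistent with the structure definitions above:

; Posn Velocity -> Posn
; adds the velocity v to the position p, componentwise
(check-expect (posn+ (make-posn 22 80) (make-velocity 8 -3))
              (make-posn 30 77))
(define (posn+ p v)
  (make-posn (+ (posn-x p) (velocity-dx v))
             (+ (posn-y p) (velocity-dy v))))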
Every language comes with a universe of data. These are the "things" that programs manipulate and the way in which information from and about the external world is represented. This universe of data is a collection; in mathematics, such collections are called sets. In particular, the universe contains all pieces of data from the built-in collections.
The figure shows one way to imagine the universe of BSL. Since there is an infinite number of numbers and an infinite number of strings, the collection of all data is of course also infinite. We indicate “infinity” in the figure with “...” but a real definition would have to avoid this imprecision.
Neither programs nor individual functions in programs deal with the entire universe of data. It is the purpose of a data definition to describe parts of this universe and to name these parts so that we can refer to them concisely. Put differently, a data definition is a description of a collection of data, and the name is then usable in other data definitions and in function signatures. For the latter, the name specifies what data a function will deal with and, implicitly, which part of the universe of data it won’t deal with.
; A BS is one of:
; – "hello"
; – "world"
; – pi
The introduction of structure types creates an entirely new picture. When a programmer defines a structure type, the universe expands with all possible structure instances. For example, the addition of a posn structure type means that instances of posn with all possible values in the two fields appear. The second “universe bubble” in figure 20 depicts the addition of those values, showing things such as (make-posn "hello" 0) and even (make-posn (make-posn 0 1) 2). And yes, it is indeed possible to construct all these values in a BSL program.
(define-struct ball (location velocity))
The “result” of a data definition for structures is again a collection of data, i.e., those instances to be used with functions. In other words, the data definition for Posns identifies the region shaded with gray stripes in figure 21, which includes all those posns whose two fields contain numbers. At the same time, it is perfectly possible to construct an instance of posn that doesn’t satisfy this requirement, e.g., (make-posn (make-posn 1 1) "hello"), which contains a posn in the x field and a string in the y field.
As above, make sensible assumptions as to what kind of values go with which fields.
Programmers not only write data definitions, they also read them, for example, in order to make up examples of the data that they describe. Here is how to generate examples from a data definition (a sample set of examples follows this list):
for a built-in collection of data (number, string, Boolean, images), choose your favorite examples;
Note: on occasion, people use qualifiers on built-in data collections, e.g., NegativeNumber, OneLetterString, etc. You shouldn't use them unless they are unambiguous; when someone else used them and you don't understand, you must ask to clarify the issue, and you must formulate an improved data definition so that others don't run into this problem.
for an enumeration, use several of the items of the enumeration;
for intervals, use the end points (if they are included) and some interior point;
for itemizations, deal with each part as suggested for the respective piece here; and
for data definitions for structures, follow the English, i.e., use the constructor and pick an example from the data collection named for each field.
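For instance, reading the data definitions from earlier in this chapter yields examples such as these (the specific values are arbitrary):

(make-entry "Sarah Lee" "666-7771" "[email protected]")  ; an Entry
(make-ball 80 -4)                                    ; a Ball-1d
(make-vel 5 -2)                                      ; a Vel
(make-ball (make-posn 10 20) (make-vel 5 -2))        ; a Ball-2d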
Note: DrRacket recognizes many more strings as colors.
(define-struct person (fst lst male?)) ; Person is (make-person String String Boolean)
(define-struct dog (owner name age happiness))
; Dog is (make-dog Person String PositiveInteger H)

The last definition is an unusual itemization, using both built-in data and a structure type definition. The next chapter deals with this kind of data definition in depth.
The introduction of structure types reinforces that the process of creating functions has (at least) six steps, something that Designing with Itemizations already stated. It no longer suffices to rely on built-in data collections as the representation of information; it is now clear that programmers must create data definitions for each and every problem.
When a problem calls for the representation of pieces of information that belong together or that together describe a natural whole, you need a structure type definition. It should use as many fields as there are “relevant” properties. An instance of this structure type corresponds to the whole, and the values in the fields to its attributes.
As always a data definition for a structure type must introduce a name for the collection of instances that you consider legitimate. Furthermore it must describe which kind of data goes with which field. Use only names of built-in data collections or of collections specified with other data definitions.
In the end, you (and others) must be able to use the data definition to create sample structure instances. Otherwise, something is wrong with your data definition. To ensure that you can create instances, your data definitions should come with data examples.
Nothing changes for this step. You still need a signature, a purpose statement, and a function header.
Use the examples from the first step to create functional examples. In case one of the fields is associated with intervals or enumerations, make sure to pick end points and intermediate points to create functional examples.
Example: Imagine you are designing a function that consumes a structure and assume that it combines TrafficLight with Number. Since the former collection contains just three elements, make sure to use all three in combination with numbers as input samples.
A function that consumes structures usually, though not always, must extract the values from the various fields in the structure. To remind you of this possibility, a template for such a function should contain one selector per field. Furthermore, you may wish to write down next to each selector expression what kind of data it extracts from the given structure; you can find this information in the data definition.
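For concreteness, here is what such a template might look like for the Entry structure from above, written in the usual HtDP template style; the ... are placeholders, so the template is scratch material rather than runnable code:

; Entry -> ???
; (define (fun-for-entry e)
;   (... (entry-name e) ...     ; a String
;        (entry-phone e) ...    ; a String
;        (entry-email e) ...))  ; a String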
Do not, however, create selector expressions if a field value is itself a structure. In general, you are better off making a wish for an auxiliary function that processes the extracted field values.
Use the selector expressions from the template when you finally define the function, but do keep in mind that you may not need (some of) them.
Test. Test as soon as the function header is written. Test until all expressions have been covered. And test again in case you make changes.
Exercise 62: Create templates for functions that consume instances of the following structure types:
Exercise 63: Design the function time->seconds, which consumes instances of the time structures developed in exercise 59 and produces the number of seconds that have passed since midnight. For example, if you are representing 12 hours, 30 minutes, and 2 seconds with one of these structures and if you then apply time->seconds to this instance, then you should get 45002.
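A solution sketch, assuming that the structure from exercise 59 (not included in this excerpt) has the fields hours, minutes, and seconds:

(define-struct time (hours minutes seconds))

; Time -> Number
; the number of seconds since midnight
(check-expect (time->seconds (make-time 12 30 2)) 45002)
(define (time->seconds t)
  (+ (* 3600 (time-hours t))
     (* 60 (time-minutes t))
     (time-seconds t)))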
Exercise 64: Design the function different. It consumes two (representations of) three-letter words and creates a word from the differences. For each position in the words, it uses the letter from the second word, if the two are the same; otherwise it uses the marker "*". Note: The problem statement mentions two different tasks: one concerning words and one concerning letters. This suggests that you design two functions.
When a world program must track two different and independent pieces of information, the collection of world state data should be a collection of structures. One field is used to keep track of one piece of information, and the other field is related to the second piece of information. Naturally, if the domain world contains more than two independent pieces of information, the structures must have as many fields as there are distinct pieces of information per world state.
(define-struct space-game (ufo tank))
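A plausible data definition to go with this structure type (an assumption; the original interpretation text is not included here):

; A SpaceGame is a structure:
;   (make-space-game Posn Number)
; interp. (make-space-game p x) means that the UFO is currently
;   at position p and that the tank's x-coordinate is x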
Every time we say piece of information, we don't necessarily mean a single number or a single word (or phrase). You must realize that a piece of information may itself combine several pieces of information. If so, creating a data representation for a world in which each state consists of two independent pieces of information obviously leads to nested structures.
Understanding when you should use what kind of data representation for a world program takes practice. To this end, the following two sections introduce several reasonably complex problem statements. We recommend you solve those before you move on to games that you might like to design.
One part of "programming" is to create a text document. You type on your keyboard and text appears in DrRacket. You press the left arrow on your keyboard, and the cursor moves to the left. Whenever you press the backspace (or delete) key, the single letter to the left of the cursor is deleted.
This process is called "editing," though its precise name should be "text editing of programs" because we will use "editing" for a more demanding task than typing on a keyboard. When you text-edit other kinds of documents, say, your English assignment, you are likely to use other software applications, though computer scientists dub all of them "editors" or even "graphical editors."
You are now in a position to design a world program that acts as a one-line editor for plain text. Editing here includes entering letters and somehow changing the already existing text. Changing includes the deletion and the insertion of letters, which in turn, implies some notion of “position” within the text. People call this position a cursor, and most graphical editors display the cursor in such a way that you can easily spot it.
The state of such an editor consists of two pieces of information:
the text entered so far, and
the current location of the cursor.
(define-struct editor (pre post))
; Editor = (make-editor String String)
; interp. (make-editor s t) means the text in the editor is
;   (string-append s t) with the cursor displayed between s and t
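For example, here are two instances and their meaning under this interpretation (the strings are made up):

(make-editor "hello " "world")  ; the text "hello world" with the cursor just before the "w"
(make-editor "" "hello world")  ; the same text with the cursor at the far left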
Exercise 65: Design the function render, which consumes an Editor and produces an image.
The canvas for your editor program should be rendered within an empty scene of 200 x 20 pixels. For the cursor, use a 1 x 20 red rectangle. Use black text of size 11 for rendering the text.
Exercise 66: Design the function edit. It consumes two inputs: an editor e and a KeyEvent k. The function's task is to add every single-character KeyEvent k to the end of the pre field of e, unless k denotes the backspace ("\b") key. In that case, it should delete the character immediately to the left of the cursor (if there is one). Also, the tab ("\t") and rubout ("\u007F") keys should be ignored. This covers all single-letter key events.
The function pays attention to only two KeyEvents that are longer than one letter: "left" and "right". The former moves the cursor one character to the left (if any), and the latter moves it one character to the right (if any). All other such KeyEvents are ignored.
Note: Develop a solid number of examples for edit, paying attention to special cases. When we developed this exercise, we developed 20 examples, and we turned all of them into tests.
Hint: Think of this function as a function that consumes an enumeration (KeyEvent) and uses auxiliary functions that then deal with the Editor structure. Keep a wish list handy; you will need to design several of these auxiliary functions yourself, such as string-first, string-rest, string-last, string-remove-last, and so on. If you haven't done so, solve the exercises in Functions.
Modify your function edit from exercise 66 so that it ignores a keystroke if adding it to the end of the pre field would mean the rendered text is too wide for your canvas.
Follow the design recipe.
Note: The exercise is a first study of making design choices. It shows that the very first design choice concerns the data representation. Making the right choice requires planning ahead and weighing the complexity of each alternative. Of course, getting good at this is a question of gathering experience.
And again, if you haven’t done so, solve the exercises in Functions.
In this section we continue our virtual zoo project from Virtual Pet Worlds. Specifically, the goal of the exercise is to combine the cat world program with the program for managing its happiness gauge. When the combined program runs, you see the cat walking across the canvas and, with each step, its happiness goes down. The only way to make the cat happy is to feed it (down arrow) or to pet it (up arrow). Finally, the goal of the last exercise is to create another virtual, happy pet.
Exercise 70: Define a structure type that keeps track of the cat’s x coordinate and its happiness. Then formulate a data definition for cats, dubbed VCat, including an interpretation with respect to a combined world.
Exercise 71: Design a “happy cat” world program that presents an animated cat and that manages and displays its happiness level. The program must (1) use the structure type from the preceding exercise and (2) reuse the functions from the world programs in Virtual Pet Worlds.
Exercise 73: Extend your structure type definition and data definition to include a direction field. Adjust your program so that the cat moves in the specified direction. The program should move the cat in the current direction, and it should turn the cat around when it reaches either end of the scene.
(define cham ...) ; the drawing of the chameleon goes here, inserted as an image
This drawing of the chameleon is a transparent image. To insert it into DrRacket, use the "Insert Image" menu item. Using this instruction preserves the transparency of the drawing's pixels.
When a partly transparent image is combined with a colored shape, say a rectangle, the image takes on the underlying color. In the chameleon drawing, it is actually the inside of the animal that is transparent; the area outside is solid white. Try out this expression in your DrRacket, using the "2htdp/image" teachpack:
(overlay cham (rectangle (image-width cham) (image-height cham) "solid" "red"))
Exercise 74: Design a world program that has the chameleon continuously walking across the screen, from left to right. When it reaches the right end of the screen, it disappears and immediately reappears on the left. Like the cat, the chameleon gets hungry from all the walking and, as time passes by, this hunger expresses itself as unhappiness.
For managing the chameleon's happiness gauge, you may reuse the happiness gauge from the virtual cat. To make the chameleon happy, you feed it (down arrow, two points only); petting isn't allowed. Of course, like all chameleons, ours can change color, too: "r" turns it red, "b" blue, and "g" green. Add the chameleon world program to the virtual cat game and reuse functions from the latter when possible.
In the preceding two chapters, you have encountered two new kinds of data definitions. Those that employ itemization (enumeration, intervals) are used to create small collections from large ones. Those for structures combine several collections. Since this book keeps reiterating that the development of data representations is the starting point for proper program design, it shouldn’t surprise you to find out that on many occasions programmers want to itemize data definitions that involve structures.
In the space invader game introduced below, for example, a world state is one of two kinds:
the state of the world is a structure with two fields, or
the state of the world is a structure with three fields.
This chapter introduces the basic idea of itemizing data definitions that involve structures. We start straight with the design of functions on this kind of data, because we have all the other ingredients we need. After that, we discuss some examples, including world programs that benefit from our new power. The last section is about errors in programming.
Let us start with a refined problem statement for our space invader game from Programming with Structures.
Sample Problem: Design a game program using the "universe" teachpack for playing a simple space invader game. The player is in control of a “tank” (small rectangle) that must defend our planet (the bottom of the canvas) from a UFO (“flying saucer”) that descends from the top of the canvas to the bottom. In order to stop the UFO from landing, the player may fire one missile (a triangle smaller than the “tank”). To fire the missile, the player hits the space bar and, in response, the missile emerges from the tank. If the UFO collides with the missile, the player wins; otherwise the UFO lands and the player loses.
Here are some details concerning the movement of the three game objects. First, the tank moves at a constant velocity along the bottom of the canvas. The player may use the left arrow key and the right arrow key to change directions. Second, the UFO descends at a constant speed but makes small random jumps to the left or right. Third, once fired the missile ascends along a straight vertical line at a constant speed at least twice as fast as the UFO descends. Finally, the UFO and the missile "collide" if their reference points are close enough, for whatever you think "close enough" should mean.
The following two subsections use this sample problem as a running example, so study it well and solve the following exercise before you continue. Doing so should help you understand the problem in enough depth.
Exercise 75: Draw some sketches of what the game scenery looks like at various stages. Use the sketches to determine the constant and the variable pieces of the game. For the former, develop “physical” constants that describe the dimensions of the world (canvas) and its objects plus graphical constants that are useful for rendering these objects. Also develop graphical constants for the tank, the UFO, the missile, and some background scenery.
Defining Itemizations: The first step in our design recipe calls for the development of data definitions. One purpose of a data definition is to describe the construction of data that represent the state of the world; another is to describe all possible pieces of data that the functions of the world program may consume. Since we haven’t seen itemizations that include structures, this first subsection introduces the basic idea via example. While this is straightforward and shouldn’t surprise you, you should still pay close attention.
; A UFO is Posn.
; interp. (make-posn x y) is the UFO's current location

(define-struct tank (loc vel))
; A Tank is (make-tank Number Number).
; interp. (make-tank x dx) means the tank is at (x, HEIGHT)
;   and that it moves dx pixels per clock tick

; A Missile is Posn.
; interp. (make-posn x y) is the missile's current location
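The data definition for the complete game state is not part of this excerpt, but the instances below and the later references to SIGS suggest an itemization along the following lines (a sketch; the structure names aim and fired are assumptions):

(define-struct aim (ufo tank))
(define-struct fired (ufo tank missile))

; A SIGS is one of:
; – (make-aim UFO Tank)
; – (make-fired UFO Tank Missile)
; interp. the complete state of the space invader game,
;   before and after the missile is fired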
- Here is an instance that describes the tank maneuvering into position to fire the missile:
(make-aim (make-posn 20 10) (make-tank 28 -3))
- This one is just like the previous one, but the missile has been fired. (Of course, the capitalized names refer to the physical constants that you defined.)
- Finally, here is one where the missile is close enough to the UFO for a collision. (This example assumes that the canvas is more than 100 pixels tall.)
The Design Recipe: With a new way of formulating data definitions comes an inspection of the design recipe. This chapter introduces a way to combine two means of describing data, and the revised design recipe reflects this, especially the first step:
When do you need this new way of defining data? We already know that the need for itemizations is due to the distinctions among different classes of information in the problem statement. Similarly, the need for structure-based data definitions is due to the demand to group several different pieces of information.
An itemization of different forms of data, including collections of structures, is required when your problem statement distinguishes different kinds of information and when at least some of these pieces of information consist of several different pieces.
One thing to keep in mind is that you can split data definitions. That is, if a particular clause in your data definition looks overly complex, you may wish to write down a separate data definition for this clause and then just refer to this auxiliary definition via the name that it introduces.
Last but not least, be sure to formulate data examples for your data definitions.
As always, a function signature should only mention the names of data collections that you have defined or that are built-in. This doesn't change, and neither does the requirement to express a concise purpose statement or to add a function header that always produces a default value.
Nothing changes for the third step. You still need to formulate functional examples that illustrate the purpose statement from the second step, and you still need one example per item in the itemization of the data definition.
The development of the template now exploits two different dimensions: the itemization itself and the use of structures in some of its clauses.
By the first, the body of the template should consist of a cond expression that has as many cond clauses as the itemization has items. Furthermore, you must add a condition to each cond clause that identifies the subclass of data in the corresponding item.
By the second, if an item deals with a structure, the template should contain the selector expressions in the cond clause that deals with the subclass of data described in the item.
When, however, you choose to describe the data with a separate data definition, then you do not add selector expressions. Instead, you develop a separate template for the separate data definition and indicate with a function call to this separate template that this subclass of data is processed separately.
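Using the SIGS sketch from above, such a template would look roughly like this, again in the non-runnable template notation:

; SIGS -> ???
; (define (fun-for-sigs s)
;   (cond
;     [(aim? s)
;      (... (aim-ufo s) ... (aim-tank s) ...)]
;     [(fired? s)
;      (... (fired-ufo s) ... (fired-tank s) ... (fired-missile s) ...)]))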
Before you go through the work of writing down a complex template of this kind, you should briefly reflect on the nature of the function. If the problem statement suggests that there are several tasks to be performed, it is likely that you need to write down a function composition in place of a template. Since at least one of these auxiliary functions is likely to deal with the given class of input data, you will need to develop its template later.
Fill the gaps in the template. It is easy to say, but the more complex we make our data definitions, the more complex this step becomes. The good news is that the design recipe is about helping you along, and there are many ways in which it can do so.
If you are stuck, fill the easy cases first and use default values for the others. While this makes some of the test cases fail, you are making progress and you can visualize this progress.
If you are stuck on some cases of the itemization, analyze the examples that correspond to those cases. See what the pieces of the template compute from the given inputs. Then consider how you can combine these pieces, possibly with some global constants, to compute the desired output. Consider using an auxiliary function.
Test. If tests fail, go back to the previous step.
Go back to From Functions to Programs, re-read the description of the simple design recipe, and compare it to the above. Also before you read on, try to solve the following exercise.
; Location is one of:
; – Posn
; – Number
; interp. Posns are positions on the Cartesian grid,
;   Numbers are positions on the number line

Design the function in-reach, which determines whether or not a given location's distance to the origin is strictly less than some constant R.
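A solution sketch; the value of the constant R is an assumption, and abs handles locations to the left of the origin on the number line:

(define R 5)

; Location -> Boolean
; is the location's distance to the origin strictly less than R?
(check-expect (in-reach (make-posn 3 4)) #false)
(check-expect (in-reach 4.9) #true)
(define (in-reach l)
  (cond
    [(posn? l)
     (< (sqrt (+ (sqr (posn-x l)) (sqr (posn-y l)))) R)]
    [(number? l) (< (abs l) R)]))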
Note: This function has no connection to any other material in this chapter.
Examples and Exercises: Let us illustrate the design recipe with the design of a rendering function for our space invader game. Recall that a big-bang expression needs such a rendering function to turn the state of the world into an image after every clock tick, mouse click, or key stroke.
We used a triangle that isn't available in BSL's graphical library. No big deal. Since the itemization in the data definition consists of two items, let us make three examples, using the data examples from above: see figure 22. (Unlike the function tables you find in mathematics books, this table is rendered vertically. The left column lists sample inputs for our desired function; the right column lists the corresponding desired results.) As you can see, we just used the data examples from the first step of the design recipe, and they cover both items of the itemization.
The template contains nearly everything we need to complete our task. In order to complete the definition, we need to figure out for each cond line how to combine the values we have into the expected result. Beyond the pieces of the input, we may also use globally defined constants, e.g., BACKGROUND, which is obviously of help here; primitive or built-in operations; and, if all else fails, wish-list functions, i.e., we describe functions that we wish we had.
the same as the result of the other. What do you call this property in your mathematics courses? Can you think of other possibilities?
Exercise 80: Design the function si-game-over? for use as the stop-when handler. The game should stop if the UFO has landed or if the missile has hit the UFO. For both conditions, we recommend that you check for proximity of one object to another.
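Such a proximity check boils down to the distance formula. Here is a sketch; the name close-enough? and the constant CLOSE are assumptions, not part of the original exercise:

(define CLOSE 10)  ; assumed threshold, in pixels

; Posn Posn -> Boolean
; are the two positions within CLOSE pixels of each other?
(check-expect (close-enough? (make-posn 0 0) (make-posn 3 4)) #true)
(check-expect (close-enough? (make-posn 0 0) (make-posn 30 40)) #false)
(define (close-enough? p q)
  (< (sqrt (+ (sqr (- (posn-x p) (posn-x q)))
              (sqr (- (posn-y p) (posn-y q)))))
     CLOSE))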
Exercise 81: Design the function si-move, which is called for every clock tick. Accordingly it consumes an element of SIGS and produces another one. Its purpose is to move all objects according to their velocity. For the random moves of the UFO, use the BSL function random. (Don't be afraid of "magic" here. It just means that you can't design such a "function" yet. And yes, random isn't really a mathematical function.)
To test functions that employ random, create a main function that calls an auxiliary function on just the random numbers. That way you can still test the auxiliary function where all the real computation happens.
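A minimal sketch of this testing pattern; the name si-move-proper and the jump range are assumptions:

; SIGS -> SIGS
; moves all game objects; the UFO's horizontal jump is random
(define (si-move w)
  (si-move-proper w (random 5)))

; SIGS Number -> SIGS
; like si-move, but the UFO's jump delta is an explicit argument,
; so this function can be tested with check-expect
(define (si-move-proper w delta)
  w)  ; stub; the real per-object movement goes here

With this split, check-expects target si-move-proper with specific delta values, while si-move merely supplies the random number.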
Exercise 82: Design the function si-control, which plays the role of the key event handler. As such it consumes a game state and a KeyEvent and produces a new game state. This specific function should react to three different key events:
pressing the left arrow ensures that the tank moves left;
pressing the right arrow ensures that the tank moves right; and
pressing the space bar fires the missile if it hasn't been launched yet.

Enjoy the game.
Data representations are rarely unique. For example, we could use a single
structure type to represent the states of a space invader game.

Source: http://www.ccs.neu.edu/home/matthias/HtDP2e/part_one.html
These are examples of using GSP to present lessons using "show/hide" buttons. Each is based on the exploration of a problem.
GSP Lesson 1.
Given two lines and a point "between" them. Construct all circles through the point and tangent to each of the two lines. The case of intersecting lines is shown here.
GSP Lesson 2.
Find the shortest path between two points on opposite sides of the river when crossing the river must be done on a path perpendicular to the banks.
GSP Lesson 3.
Given two circles of different radius that intersect. If E is one point of intersection, construct a line through E that cuts off chords of equal length in the two circles.
GSP Lesson 4a.
What is the locus of the midpoint of a line segment of varying length where one end is fixed and the other end moves around a circle?
GSP Lesson 4b.
What is the locus of the midpoint of a line segment of varying length where one end is fixed and the other end moves around a triangle? Generalize to movement around any closed path.
GSP Lesson 4c.
What is the locus of the midpoint of a line segment of varying length where each end of the segment moves around a circle?
GSP Lesson 5.
Given two points A and B on the same side of a line k. If C is a point on k, construct the location of C so that AC + CB is a minimum.
GSP Lesson 6.
If the base and area of a triangle are fixed, find the triangle with minimal perimeter.
GSP Lesson 7a.
Take any parallelogram and construct squares externally on each side. Prove that the centers of the four squares are the vertices of a square. Show that the area of this square is always greater than or equal to twice the area of the parallelogram. When is it twice the area?
GSP Lesson 7b.
Take any parallelogram and construct squares toward the inside of the parallelogram on each side. Prove that the centers of the four squares are the vertices of a square. Is there a relationship between the area of this square and the area of the parallelogram?
GSP Lesson 8.
Given three line segments whose lengths are the distances of a point E from the vertices A, B, and C of an equilateral triangle. Construct triangle ABC. What if E were a point outside the triangle?
GSP Lesson 9.
Construct a triangle of minimal perimeter inscribed in a given acute triangle.
GSP Lesson 10.
In an equilateral triangle ABC, let D be the mid-point of AB and E be the mid-point of AC. Extend DE to intersect the circumcircle at point P. Determine the ratio PC/PA. Determine the ratio DE/DP.
GSP Lesson 11.
Construct a circle with center O having perpendicular diameters AB and DC. Take the midpoint M of OC and construct an arc with center at M through A. The arc intersects OD at N. Investigate ON/DN.
Show that AN is the length of the side of a regular pentagon inscribed in the circle with radius OA (i.e., construct the inscribed pentagon, . . . and investigate).
GSP Lesson 12. (Fixed Angle) What is the locus (construct it) of the vertex of a fixed angle that is moved such that its sides always subtend a fixed segment AB? That is, given an angle of a specific measure, place the angle so that its two sides always touch points A and B of the segment.
GSP Lesson 13. Rectangle circumscribed about an ellipse. Open the first GSP file. Open the second GSP file.
Run the animations in these files and explore. They should suggest at least the following theorem:
Prove that the vertices of a rectangle circumscribed about an ellipse will lie on a circle. Determine its center and radius with respect to the ellipse.
GSP Lesson 14.
Problem: For a circle with diameter FB construct a tangent at G. Select points A and B on the circle and construct the tangents to the circle at A and B. Let P be the intersection of these latter two tangents. Construct rays FA, FP, and FB with respective intersections C, D, and E with the tangent at G. Show that CD = DE. What restrictions must be placed on the locations of A and B?
GSP Lesson 15. Given a triangle ABC with its circumcircle. Construct a circle tangent to AB, AC and the circumcircle. (There may be two of them.)

Source: http://jwilson.coe.uga.edu/gsp.lesson.folder/gsp.lesson.html
The History of Pi
History of Mathematics
Rutgers, Spring 2000
Throughout the history of mathematics, one of the most enduring challenges has been the calculation of the ratio between a circle's circumference and diameter, which has come to be known by the Greek letter pi. From ancient Babylonia to the Middle Ages in Europe to the present day of supercomputers, mathematicians have been striving to calculate the mysterious number. They have searched for exact fractions, formulas, and, more recently, patterns in the long string of numbers starting with 3.1415926535..., which is generally shortened to 3.14. William L. Schaaf once said, "Probably no symbol in mathematics has evoked as much mystery, romanticism, misconception and human interest as the number pi" (Blatner, 1).

We will probably never know who first discovered that the ratio between a circle's circumference and diameter is constant, nor will we ever know who first tried to calculate this ratio. The people who initiated the hunt for pi were the Babylonians and Egyptians, nearly 4000 years ago. It is not clear how they found their approximation for pi, but one source (Beckmann) makes the claim that they simply made a big circle, and then measured the circumference and diameter with a piece of rope. They used this method to find that pi was slightly greater than 3, and came up with the value 3 1/8 or 3.125 (Beckmann, 11). However, this theory is probably a fantasy based on a misinterpretation of the Greek word "Harpedonaptae," which Democritus once mentioned in a letter to a colleague. The word literally means "rope-stretchers" or "rope-fasteners." The misinterpretation is that these men were stretching ropes in order to calculate circles, while they were actually making measurements in order to mark the property limits and areas for temples (Heath, 121).
A famous Egyptian piece of papyrus gives us another ancient estimation for pi. Dated around 1650 BC, the Rhind Papyrus was written by a scribe named Ahmes. Ahmes wrote, "Cut off 1/9 of a diameter and construct a square upon the remainder; this has the same area as the circle" (Blatner, 8). In other words, he implied that pi = 4(8/9)^2 = 3.16049, which is also fairly accurate. Word of this did not spread to the East, however, as the Chinese used the inaccurate value pi = 3 hundreds of years later.
Chronologically, the next approximation of pi is found in the Old Testament. A fairly well known verse, 1 Kings 7:23, says: "Also he made a molten sea of ten cubits from brim to brim, round in compass, and five cubits the height thereof; and a line of thirty cubits did compass it round about" (Blatner, 13). This implies that pi = 3. Debates have raged on for centuries about this verse. According to some it was just a simple approximation, while others say that "... the diameter perhaps was measured from outside, while the circumference was measured from inside" (Tsaban, 76). However, most mathematicians and scientists neglect a far more accurate approximation for pi that lies deep within the mathematical "code" of the Hebrew language. In Hebrew, each letter equals a certain number, and a word's "value" is equal to the sum of its letters. Interestingly enough, in 1 Kings 7:23, the word "line" is written Kuf Vov Heh, but the Heh does not need to be there, and is not pronounced. With the extra letter , the word has a value of 111, but without it, the value is 106. (Kuf=100, Vov=6, Heh=5). The ratio of pi to 3 is very close to the ratio of 111 to 106. In other words, pi/3 = 111/106 approximately; solving for pi, we find pi = 3.1415094... (Tsaban, 78). This figure is far more accurate than any other value that had been calculated up to that point, and would hold the record for the greatest number of correct digits for several hundred years afterwards. Unfortunately, this little mathematical gem is practically a secret, as compared to the better known pi = 3 approximation.
When the Greeks took up the problem, they took two revolutionary steps to find pi. Antiphon and Bryson of Heraclea came up with the innovative idea of inscribing a polygon inside a circle, finding its area, and doubling the sides over and over. "Sooner or later (they figured), ...[there would be] so many sides that the polygon ...[would] be a circle" (Blatner, 16). Later, Bryson also calculated the area of polygons circumscribing the circle. This was most likely the first time that a mathematical result was determined through the use of upper and lower bounds. Unfortunately, the work boiled down to finding the areas of hundreds of tiny triangles, which was very complicated, so their work only resulted in a few digits. (Blatner, 16) At approximately the same time, Anaxagoras of Clazomenae started working on a problem that would not be conclusively solved for over 2000 years. After imprisonment for unlawful preaching, Anaxagoras passed his time attempting to square the circle. Cajori writes: "This is the first time, in the history of mathematics, that we find mention of the famous problem of the quadrature of the circle, the rock upon which so many reputations have been destroyed.... Anaxagoras did not offer any solution of it, and seems to have luckily escaped paralogisms" (Cajori 17). Since that time, dozens of mathematicians would rack their brains trying to find a way to draw a square with equal area to a given circle; some would maintain that they had found methods to solve the problem, while others would argue that it was impossible. The problem was finally laid to rest in the nineteenth century.
The first man to really make an impact in the calculation of pi was the Greek, Archimedes of Syracuse. Where Antiphon and Bryson left off with their inscribed and circumscribed polygons, Archimedes took up the challenge. However, he used a slightly different method than they used. Archimedes focused on the polygons' perimeters as opposed to their areas, so that he approximated the circle's circumference instead of the area. He started with an inscribed and a circumscribed hexagon, then doubled the sides four times to finish with two 96-sided polygons. (Archimedes, 92) His method was as follows...
Given a circle with radius r = 1, circumscribe a regular polygon A with K = 3·2^(n-1) sides and semiperimeter a_n and inscribe a regular polygon B with K = 3·2^(n-1) sides and semiperimeter b_n. This results in a decreasing sequence a_1, a_2, a_3, ... and an increasing sequence b_1, b_2, b_3, ... with each sequence approaching pi. We can use trigonometric notation (which Archimedes did not have) to find the two semiperimeters, which are: a_n = K tan(pi/K) and b_n = K sin(pi/K). Also: a_(n+1) = 2K tan(pi/2K) and b_(n+1) = 2K sin(pi/2K). Archimedes began with a_1 = 3 tan(pi/3) = 3√3 and b_1 = 3 sin(pi/3) = 3√3/2 and used 265/153 < √3 < 1351/780. He calculated up to a_6 and b_6 and finally reached the conclusion that 3 10/71 < b_6 < pi < a_6 < 3 1/7. Archimedes ended with a 96-sided polygon, and numerous delicate calculations. (Archimedes, 95) The fact that he was able to go that far and derive such a good estimation of pi is a "stupendous feat both of imagination and calculation" (O'Connor, 2).
For the next few hundred years, no significant breakthroughs were made in the search for pi. Gradually, "the lead... passed from Europe to the East" (O'Connor, 3) in the next several centuries. The earliest value of pi used in China was 3. In 263 AD, Liu Hui independently discovered the method used by Bryson and Antiphon, and calculated the perimeters of regular inscribed polygons from 12 up to 192 sides, and arrived at the value pi = 3.14159, which is absolutely correct as far as the first five digits go. Near the end of the 5th century, Tsu Ch'ung-chih and his son Tsu Keng-chih came up with astonishing results, when they calculated 3.1415926 < pi < 3.1415927. The father and son duo used inscribed polygons with as many as 24,576 sides. (Blatner, 25) Soon after, the Hindu mathematician Aryabhata gave the 'accurate' value 62,832/20,000 = 3.1416 (as opposed to Archimedes' 'inaccurate' 22/7 which was frequently used), but he apparently never used it, nor did anyone else for several centuries. (Beckmann, 24) Another Indian mathematician, Brahmagupta, took a novel approach. He calculated the perimeters of inscribed polygons with 12, 24, 48, and 96 sides as √9.65, √9.81, √9.86, and √9.87 respectively. "And then, armed with this information, he made the leap of faith that as the polygons approached the circle, the perimeter, and therefore pi, would approach the square root of 10 [=3.162...]. He was, of course, quite wrong" (Blatner, 26). Although this is not as accurate as other values that had already been calculated, it gained quite a bit of popularity as an approximation for pi for at least a few hundred years. "Maybe because the square root of 10 is so easy to convey and remember, this was the value that... spread from India to Europe and was used by mathematicians... throughout the Middle Ages" (Blatner, 26). By the 9th century, mathematics and science prospered in the Arab cultures. It is unclear whether the Arabian mathematician, Mohammed ibn Musa al'Khwarizmi, attempted to calculate pi, but it is clear which values he used. He used the approximations 3 1/7, the square root of 10, and 62,832/20,000. Strangely, though, the last and most accurate value was seemingly forgotten by the Arabs and replaced by less accurate values. (Cajori, 104)
After this, little progress was made until a pi explosion in the end of the 16th century. François Viète, a French lawyer and amateur (but great) mathematician, used Archimedes' method, starting with two hexagons and doubling the number of sides sixteen times, to finish with 393,216 sides. His final result was that 3.1415926535 < pi < 3.1415926537. More importantly, though, Viète became the first man in history to describe pi using an infinite product. His formula was: 2/pi = √(1/2) · √(1/2 + 1/2·√(1/2)) · √(1/2 + 1/2·√(1/2 + 1/2·√(1/2))) · .... Unfortunately, this equation is not too useful in calculating pi because it requires too many iterations before convergence, and the square roots become quite complicated. He did not even use his own formula in his calculation of pi. (Beckmann, 92) Still, it was an innovative discovery that would open many doors in the future. In 1593, Adrianus Romanus used a circumscribed polygon with 2^30 sides to compute pi to 17 digits after the decimal, of which 15 were correct. (O'Connor, 3) Just three years later, a German named Ludolph Van Ceulen presented 20 digits, using the Archimedean method with polygons with over 500 million sides. Van Ceulen spent a great part of his life hunting for pi, and by the time he died in 1610, he had accurately found 35 digits. His accomplishments were considered so extraordinary that the digits were cut into his tombstone in St. Peter's Churchyard in Leyden. Still today, Germans refer to pi as the Ludolphian Number to honor the man who had such great perseverance. (Cajori, 143) It should be noted that up to this point, there was no symbol to signify the ratio of a circle's circumference to its diameter. This changed in 1647 when William Oughtred published Clavis Mathematicae and used π/δ to denote the ratio. It was not immediately embraced, until 1737, when Leonhard Euler began using the symbol pi; then it was quickly accepted. (Cajori, 158) In 1650, John Wallis used a very complicated method to find another formula for pi. Basically, he approximated the area of a quarter circle using infinitely small rectangles, and arrived at the formula 4/pi = (3·3·5·5·7·7·9···)/(2·4·4·6·6·8·8···), which is usually simplified to pi/2 = (2·2·4·4·6·6·8·8···)/(1·3·3·5·5·7·7·9···). One source describes his method as "extremely difficult and complicated" (Berggren, 292) while another source says it is "remarkable" (Cajori, 186). Wallis showed his formula to Lord Brouncker, the president of the Royal Society, who turned it into a continued fraction: pi = 4/(1 + 1/(2 + 9/(2 + 25/(2 + 49/(2 + ...))))). (Cajori, 188)
In 1672, James Gregory wrote about a formula that can be used to calculate the angle given the tangent for angles up to 45°. The formula is: arctan(t) = t - t^3/3 + t^5/5 - t^7/7 + t^9/9 - .... Ten years later, Gottfried Leibniz pointed out that since tan(pi/4) = 1, the formula could be used to find pi. (Berggren, 92) Thus, one of the most famous formulas for calculating pi was realized: pi/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - .... This elegant formula is one of the simplest ever discovered to calculate pi, but it is also fairly useless; 300 terms of the series are required to get only 2 decimal places, and 10,000 terms are required for 4 decimal places. (O'Connor, 3) To compute 100 digits, "you would have to calculate more terms than there are particles in the universe" (Blatner, 42). However, this formula set the stage for a handful of other formulas that would be more effective. For example, using the knowledge that arctan(1/√3) = pi/6, you can derive the following equation: arctan(1/√3) = pi/6 = 1/√3 - 1/(3·3·√3) + 1/(9·√3·5) - .... After some algebra, it simplifies to: pi/6 = (1/√3)(1 - 1/(3·3) + 1/(5·3^2) - 1/(7·3^3) + 1/(9·3^4) - ...). (O'Connor, 4) Using only six terms of this formula, one can calculate pi = 3.141309, which isn't too far from the real value. Surely, the 17th-century mathematicians were onto something. It was just a matter of time until they discovered a formula that was even better.
The world didn't have to wait too long, after all, before another formula was discovered. In 1706, John Machin, a professor of astronomy in London, armed with the knowledge that arctan x + arctan y = arctan((x+y)/(1-xy)), discovered the wonderful formula: pi/4 = 4 arctan(1/5) - arctan(1/239) = 4(1/5 - 1/(3·5^3) + 1/(5·5^5) - ...) - (1/239 - 1/(3·239^3) + 1/(5·239^5) - ...). The reason that this formula is such an improvement over the previous one is that the number 239 is so large that we do not need very many terms of arctan(1/239) before it converges. The other term, arctan(1/5), involves easy computations when computing terms by hand, since it involves finding reciprocals of powers of 5. (Blatner, 43) In fact, Machin took the initiative to calculate pi with his new formula, and computed 100 places by hand. (Cajori, 206) Over the next 150 years, several men used the exact same formula to find more and more digits. In 1873, an Englishman named William Shanks used the formula to calculate 707 places of pi. Many years later, it was discovered that somewhere along the line, Shanks had omitted two terms, with the result that only the first 527 digits were correct. (Berggren, 627)
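For the curious, here is one common way Machin's formula is programmed today (a sketch added for illustration; the helper names and the choice of ten guard digits are mine, not anything from the essay). It works entirely in integer arithmetic scaled by a power of ten, essentially a mechanized version of the digit-by-digit hand work Machin and Shanks did.

```python
# Added illustration: Machin's formula with scaled integer arithmetic.
def arctan_inv(x: int, digits: int) -> int:
    # arctan(1/x), scaled by 10**(digits + 10); the extra 10 are guard digits.
    scale = 10 ** (digits + 10)
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x          # next odd power of 1/x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def machin_pi(digits: int) -> str:
    # pi = 16*arctan(1/5) - 4*arctan(1/239)
    scaled = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    return str(scaled // 10 ** 10)      # drop the guard digits

print(machin_pi(50))
# 314159265358979323846264338327950288419716939937510  (3 followed by 50 decimals)
```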
"By 1750, the number pi had been expressed by infinite series,... its value had been computed [to over 100 digits]... and it had been given its present symbol. All these efforts, however, had not contributed to the solution of the ancient problem of the quadrature of the circle" (Struik, 369). The first step was taken by the Swiss mathematician Johann Heinrich Lambert when he proved the irrationality of pi first in 1761 and then in more detail in 1767. (Struik, 369) His argument was, in its simplest for m, that if x is a rational number, then tan x cannot be rational; since tan pi/4 = 1, pi/4 cannot be rational, and therefore pi is irrational. (Cajori, 246) Some people felt that his proof was not rigorous enough, but in 1794, Adrien Marie Legendre gave ano ther proof that satisfied everyone. Furthermore, Legendre also gave the first proof that (2 is irrational. (Berggren, 297)
For the next hundred years, no major events occurred in the pursuit of pi. More and more digits were computed, but there were no earth-shattering breakthroughs. In 1882, Ferdinand von Lindemann proved the transcendence of pi. (Berggren, 407) Since this means that pi is not a solution of any polynomial equation with integer coefficients, it laid to rest the uncertainty about squaring the circle. Finally, after literally thousands and thousands of lifetimes of mental toil and strain, mathematicians finally had an absolute answer: the circle could not be squared. Nonetheless, there are still some amateur mathematicians today who do not understand the significance of this result, and futilely look for techniques to square the circle.
In the twentieth century, computers took over the reins of calculation, and this allowed mathematicians to exceed their previous records and reach previously incomprehensible results. In 1945, D. F. Ferguson discovered the error in William Shanks' calculation from the 528th digit onward. Two years later, Ferguson presented his results after an entire year of calculations, which resulted in 808 digits of pi. (Berggren, 406) One and a half years later, Levi Smith and John Wrench hit the 1000-digit mark. (Berggren, 685) Finally, in 1949, another breakthrough emerged, but it was not mathematical in nature; it was the speed with which the calculations could be done. The ENIAC (Electronic Numerical Integrator and Computer) was finally completed and functional, and a group of mathematicians fed in punch cards and let the gigantic machine calculate 2037 digits in just seventy hours. (Beckmann, 180) Whereas it took Shanks several years to come up with his 707 digits, and Ferguson needed about one year to get 808 digits, the ENIAC computed over 2000 digits in less than three days!
"With the advent of the electronic computer, there was no stopping the pi busters" (Blatner, 51). John Wrench and Daniel Shanks found 100,000 digits in 1961, and the one-million-mark was surpassed in 1973. In 1976, Eugene Salamin discovered an algorith m that doubles the number of accurate digits with each iteration, as opposed to previous formulas that only added a handful of digits per calculation. (Blatner, 52) Since the discovery of that algorithm, the digits of pi have been rolling in with no end in sight. Over the past twenty years, six men in particular, including two sets of brothers, have led the race: Yoshiaki Tamura, Dr. Yasumasa Kanada, Jonathan and Peter Borwein, and David and Gregory Chudnovsky. Kanada and Tamura worked together on many pi projects, and led the way throughout the 1980s, until the Chudnovskys broke the one-billion-barrier in August 1989. In 1997, Kanada and Takahashi calculated 51.5 billion (3(234) digits in just over 29 hours, at an average rate of nearly 500,000 digits per second! The current record, set in 1999 by Kanada and Takahashi, is 68,719,470,000 digits. (Blatner, 59) There is no knowing where or when the search for pi will end. Certainly, the continued calculations are unnecessary. Just thirty-nine decimal places would be enough to compute the circumference of a circle surrounding the known universe to within the ra dius of a hydrogen atom. (Berggren, 656) Surely, there is no conceivable need for billions of digits. At the present time, the only tangible application for all those digits is to test computers and computer chips for bugs. But digits aren't really wha t mathematicians are looking for anymore. As the Chudnovsky brothers once said: "We are looking for the appearance of some rules that will distinguish the digits of pi from other numbers. If you see a Russian sentence that extends for a whole page, with hardly a comma, it is definitely Tolstoy. If someone gave you a million digits from somewhere in pi, could you tell it was from pi? We don't really look for patterns; we look for rules" (Blatner, 68). Unfortunately, the Chudnovskys have also said that no other calculated number comes closer to a random sequence of digits. Who knows what the future will hold for the almost magical number pi?
| http://www.math.rutgers.edu/~cherlin/History/Papers2000/wilson.html | 13
85 | - Discuss with students the meaning of the terms radius, diameter, and circumference of a circle. The radius of a circle is a line segment from the center of a circle to a point on the circle. The distance around a circle is called the circumference. It is like the perimeter of a polygon. A diameter of a circle is a segment that passes through the center of the circle and connects two points of the circle.
- Determine the relationship between the radius and diameter of the same circle. In partners, the students will trace each circular object on a piece of paper. They will draw one diameter and one radius for each traced circle. To do this they will have to estimate where the center lies. They can do this by folding the paper so that the circle is reflected on itself. This will give them one diameter. If they repeat the process, the point of intersection of the two folds will be the center. They will then measure the radius and diameter of each circle and record those measurements on a chart that has a space for Length of Radius, Length of Diameter and Circumference. Partners will then make a conjecture about the relationship between the radius and the diameter. Students as partners will share their results and note that d = 2r where d is the length of the diameter and r is the length of the radius.
- In partners, the students measure the circumference of each of the circular objects by first determining the length of string it takes to go around the object and then measuring the string with a ruler and/or yardstick. The circumference measure is recorded on the same chart that has a space for Length of Radius, Length of Diameter, and Circumference for each object. They will use calculators to determine the ratio of the circumference to the length of the diameter and record that ratio on the same chart for each object. Partners will then make a conjecture about the relationship between the length of the diameter of a circle and its circumference. Students as partners will share their results and note that the circumference is a little more than 3 times the length of the diameter (a quick computational check of this ratio is sketched just after this list). Discuss with students that this relationship is the same no matter how large or small the circle and discuss why this is true. [As the circumference increases, the diameter increases for a given circle. The ratio remains constant.] Discuss the symbol assigned to the ratio of the circumference to the length of the diameter for any circle, π. An approximation for π as a decimal is 3.14. An approximation for π as a fraction is 22/7.
- Develop the relationship between the circumference and length of the radius of the circle using what you know about the diameter. Write the relationships algebraically.
- Determine the circumference of a circle given either the length of the diameter or the length of the radius. Use either 3.14 or 22/7 as an approximation for π. Select examples for students that require them to choose the value for π that makes computation most efficient.
- The activity can be extended to include unknown objects in which students are given one measurement and must determine the others.
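For teachers who want the computational companion mentioned above (the measurements in this sketch are made-up sample values, not student data), a few lines of Python show the circumference-to-diameter ratio coming out near 3.14 for circles of very different sizes:

```python
# Added illustration: the measured ratio circumference / diameter is close to pi
# for every circle, regardless of size. The numbers are hypothetical sample data.
measurements = [          # (diameter in cm, measured circumference in cm)
    (7.0, 22.0),          # e.g. a soup can
    (12.0, 37.7),         # e.g. a dinner plate
    (21.5, 67.5),         # e.g. a large pot lid
]
for diameter, circumference in measurements:
    print(round(circumference / diameter, 2))   # each ratio prints as about 3.14
```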
Great resource to teach parts of a circle: Sir Cumference and the First Round Table, Cindy Neuschwander, Wayne Geehan, Charlesbridge Publishing, 1997. | http://www.mdk12.org/instruction/lessons/mathematics/grade6/2A2d.html | 13 |
57 | The Curl Operator
What does the curl operator in the 3rd and 4th Maxwell's Equations mean? What exactly is the meaning of the del symbol with an x next to it (the ∇× operator) that appears in those equations?
The curl is a measure of the rotation of a vector field. To understand this, we will again use the analogy of flowing water to represent a vector function (or vector field). In Figure 1, we have a vector function (V) and we want to know if the field is rotating at the point D (that is, we want to know if the curl is zero).
Figure 1. Example of a Vector Field Surrounding a Point.
To determine if the field is rotating, imagine a water wheel at the point D. If the vector field representing water flow would rotate the water wheel, then the curl is not zero:
Figure 2. Example of a Vector Field Surrounding a Water Wheel Producing Rotation.
In Figure 2, we can see that the water wheel would be rotating in the clockwise direction. Hence, this vector field would have a curl at the point D.
We must now make things more complicated. Is the curl of Figure 2 positive or negative, and in what direction? Because we are observing the curl that rotates the water wheel in the x-y plane, the direction of the curl is taken to be along the z-axis (perpendicular to the plane of the water wheel). In addition, the curl follows the right-hand rule: if your thumb points in the +z-direction, then your right hand will curl around the axis in the direction of positive curl. For Figure 2, the curl would be positive if the water wheel spins in a counter-clockwise manner. The curl would be negative if the water wheel spins in the clockwise direction.
In Figure 2, the water wheel rotates in the clockwise direction. Hence, the z-component of the curl for the vector field in Figure 1 is negative.
As you can imagine, the curl has x- and y-components as well. Hence, the curl operates on a vector field and the result is a 3-dimensional vector. That is, if we know a vector field then we can evaluate the curl at any point - and the result will be a vector (representing the x-, y- and z-directions).
Let's do another example with a new twist. Imagine that the vector field F in Figure 3 has z-directed fields. Let the symbol ⊙ (a dot in a circle) represent a vector in the +z-direction, out of the page, and the symbol ⊗ (a cross in a circle) represent a vector in the -z-direction, into the page:
Figure 3. A Vector Field With Z-directed Energy - Does the Wheel Rotate?
Will the wheel rotate if the water is flowing up or down around it? The answer is no. Only x- and y- directed vectors can cause the wheel to rotate when the wheel is in the x-y plane. Hence, the z-directed vector fields can be ignored for determining the z-component of the curl.
Now, let's take more examples to make sure we understand the curl. What can we say about the curl of the vector field J at point G in Figure 4?
Figure 4. A Vector Field in the Y-Z Plane.
Is the curl positive, negative or zero in Figure 4? And in what direction is it? First, since the water wheel is in the y-z plane, the direction of the curl (if it is not zero) will be along the x-axis. Now, we want to know whether the curl is positive (counter-clockwise rotation) or if the curl is negative (clockwise rotation).
The red vector in Figure 4 is in the +y-direction. However, it will not rotate the water wheel, because it is directed directly at the center of the wheel and won't produce rotation. The green vector in Figure 4 will try to rotate the water wheel in the clockwise direction, but the black vector will try to rotate the water wheel in the counter-clockwise direction - therefore the green vector and the black vector cancel out and produce no rotation. However, the brown vector will rotate the water wheel in the counter clockwise direction. Hence, the net effect of all the vectors in Figure 4 is a counter-clockwise rotation. The result is that the curl in Figure 4 is positive and in the +x-direction.
In general, a vector field will have [x, y, z] components. The resulting curl is also a vector with [x, y, z] components. It is difficult to draw 3-D fields with water wheels in all 3-directions but if you understand the above examples you can generalize the 2-D ideas above to 3 dimensions. Now we'll present the full mathematical definition of the curl.
Mathematical Definition of the Curl
Let us say we have a vector field, A(x,y,z), and we would like to determine the curl. The vector field A is a 3-dimensional vector (with x-, y- and z-components). That is, we can write A as: A = Ax x̂ + Ay ŷ + Az ẑ.
The curl of A is defined to be: curl A = ∇ × A = (∂Az/∂y − ∂Ay/∂z) x̂ + (∂Ax/∂z − ∂Az/∂x) ŷ + (∂Ay/∂x − ∂Ax/∂y) ẑ
In this definition, x̂ is a unit vector in the +x-direction, ŷ is a unit vector in the +y-direction, and ẑ is a unit vector in the +z-direction (a unit vector is a vector with a magnitude equal to 1). Terms such as ∂Ay/∂x represent the rate of change of one component of the field as you move along one of the coordinate directions.
The rate of change operators are known as partial derivatives. For more information, see the partial derivative page.
As you can see, the curl is very complicated to write out. But the physical meaning can be understood intuitively from the above discussion. In words, the definition says that each component of the curl measures how much the field rotates about the corresponding axis at that point.
So the curl is a measure of the rotation of a field, and to fully define the 3-dimensional rotation we get a 3-dimensional result (the curl defined above). Let's look at a mathematical example of a vector field and calculate the curl. Suppose we have a vector field H(x,y,z) given by:
Now, to get the curl of H, we need to compute all of the partial derivatives of its components:
Substituting these partial derivatives into the curl definition gives the curl of H:
So we have the curl of H. Note that the curl of H is also a vector function. As such, we can say that a new vector (we'll call it V) is the curl of H. Hence, V can be evaluated at any point in space (x,y,z). For instance, the x-component of V will always have Vx=-1. Similarly, Vy=-1. But Vz varies from point to point. Hence, V(3,4,0) will have Vz=0, but V(3,4,0.5) will have Vz = 2*pi.
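As a worked illustration of the component recipe above (using an assumed example field F chosen for simplicity, not the particular field H discussed in the text), a computer algebra system can carry out the partial derivatives:

```python
# Added illustration: computing a curl symbolically with SymPy for an assumed field F.
import sympy as sp

x, y, z = sp.symbols('x y z')

# Assumed example field F = (Fx, Fy, Fz), picked so the curl has constant x- and
# y-components and a position-dependent z-component, in the spirit of the discussion.
Fx = z          # d(Fx)/dz = 1 feeds the y-component of the curl
Fy = x**2       # d(Fy)/dx = 2*x feeds the z-component of the curl
Fz = y          # d(Fz)/dy = 1 feeds the x-component of the curl

curl = (
    sp.diff(Fz, y) - sp.diff(Fy, z),   # x-component
    sp.diff(Fx, z) - sp.diff(Fz, x),   # y-component
    sp.diff(Fy, x) - sp.diff(Fx, y),   # z-component
)
print(curl)   # (1, 1, 2*x): constant x- and y-components, z-component depends on position
```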
This gives about all the information you need to know about the curl. The important points to remember are that the curl operates on a vector function, and returns a vector function. The resulting curl is a measure of the rotation of the field about the 3 principal axes (x-, y-, z-). | http://www.maxwells-equations.com/curl/curl.php | 13
101 | Roger's Connection™ Magnetic Construction Toy
WARNING FOR PARENTS: Parts of this lesson involve the use of scissors and sharp tools for making small holes in cardboard or posterboard. Please make sure that your child is either properly instructed not to use these tools, or is properly supervised as appropriate for their age, in order to maximize their safety. As a parent it is your responsibility to ensure that you control exposure of your child to instructions advising the use of potentially dangerous tools, and that you provide any age-appropriate supervision or instructions. It is our desire that children benefit from this material in a safe and productive way.
MESSAGE FOR CHILDREN: Some parts of these lessons involve the use of scissors and other sharp tools for making small holes in cardboard or posterboard. Do not use scissors, or any sharp tools to make the holes by yourself, unless you have permission from a parent! It is better and smarter to ask for help than to use something dangerous by yourself! Ask a parent to look at this web page with you to decide how you can safely use it. We want you to be safe while you learn!
For Parents: This is a guided participatory lesson for young children about simple shapes and how strength may be added to structures by building with triangles. The lesson starts with two-dimensional shapes, and then proceeds to three-dimensional shapes. Parents, if working with your child, you may wish to break this material into several sessions, or exclude the more advanced material you feel may be beyond your child's grade level. However, you are encouraged to let your child seek their own level, and you may be surprised to discover how much of this material comes naturally to them! This material will be refined and developed into a lesson plan at a future date.
If used as the basis for a science project, the student can use this material to learn about the subject, and as source material to include in the final presentation. However, it will be the job of the student to incorporate these lessons and concepts into the format of a science project presentation as specified by the teacher.
Two-dimensional objects are flat. That is, they can be simply drawn on a piece of paper, or lie flat on a table top. Our world is filled with many two-dimensional shapes including circles, triangles, squares, and many others.
Many shapes, like this one, are too complicated to have a simple name.
When people build things, they use many different shapes, because every shape has special characteristics that are best suited for a particular purpose. For example, a wheel on a car, and a Ferris wheel, both use circles, because a circle turns nicely.
The metal or wood beams that hold up most houses and buildings form rectangles and squares. Here are two buildings where the walls have not yet been finished. You can easily see all of the squares and rectangles inside. A few of them have been highlighted with yellow.
Squares and rectangles always have four sides.
Triangles may be found in many bridges, and help to make them strong, as we will shortly see. Here are a couple of examples of bridges that have many triangles.
The Disney Epcot Center dome is made entirely of triangles, which keep it very strong.
Shapes that have sides that are all straight lines are called Polygons. You can draw polygons on a piece of paper. You can also build them out of wooden sticks, pipe cleaners, or other straight objects.
Polygons may have three, four, five, six, or more sides. Many of the simple polygons have been given names. For example a polygon with three sides is called a triangle. A polygon with four sides is called a square. A polygon with five sides is called a pentagon, because pent means five. Hex means six, so a hexagon has six sides. A polygon can be made of lines and be hollow, like these shapes which are simply drawn on a piece of paper.
You can also make polygons using Roger's Connection. Try building these shapes now. You can use any colors that you like.
What is an Angle?
Let's connect two lines together at one end like one of the examples in the picture below. In these examples, the two blue lines in each example are connected at the black dots. Let's leave the other ends of the blue lines free to move. You can see that the lines can change their positions with respect to one another, even though they stay connected at the black dot. If one of the lines stays in the same place, the other line can spin in a circle, just like the hands of a clock. The word Angle describes how far apart the two lines are from each other, in terms of how much of a full circle one line is separated from the other. In these three examples we show a small angle, a bigger angle, and a very big angle. There is a way of assigning a number to each angle, but we don't need to understand that idea right now.
The polygons on the left below don't look as neat as the ones we made just before. When all of the sides of a polygon are the same length, and the angles between each side are the same, then a polygon is called a Regular Polygon. We just learned about the word angle. Imagine that the steel balls below are like the black dots in the picture above when we discussed angles. Now you can see that each of the shapes below in the left picture contains a mixture of different angles. When each polygon is adjusted so that all of the angles within it are the same, then it is called a regular polygon. The polygons in the picture on the left are polygons, but they are NOT regular polygons! The polygons in the picture on the right ARE regular polygons.
Note: Earlier we said that a polygon with four sides was called a square, but now we can tell you that only a regular polygon with four sides is called a square. The blue four sided polygon in the picture on the left above is not a square. It is called a parallelogram. It has four sides of equal length, but all the angles are not the same. Two of the angles are small, and two are large. Parallel means going in the same direction. In the parallelogram, each pair of opposite sides is going in the same direction.
Polygons can also be filled in like these shapes cut from paper. Can you name each shape?
A polygon can never have curves in it, so this shape is NOT a polygon.
When you built the polygons earlier with Roger's Connection, did you notice anything special about the triangle? Did it seem like it was stronger than the other shapes? While all of the other polygons can be bent into many different forms that are NOT regular polygons (with many different angles in each polygon), the triangle always keeps the same shape. It is the strongest polygon. Why is that? The reason is because in all of the other polygons, all of the angles can change. There is nothing to stop them. However, in the triangle, the angles can not change once the triangle is built. The angles are fixed. This is because a triangle has three sides and three angles, and each angle is fixed by the side opposite to it. If you look at the following picture, you can see that there is only one angle where the two free sides can connect to the third side, and that once connected, all of the angles are fixed. Try this with your Roger's Connection triangle now. Then make the different polygons with four, five, and six sides, and see how the triangle is the only one that can't be adjusted into a different shape once it is made.
Now that you see how strong triangles are, and why they are special among polygons, you can start to understand why people build with triangle when great strength is needed, just like in the examples of the bridges and the Epcot dome shown in the earlier pictures. If those bridges had been made with only squares, they would not have been very strong at all.
We have seen how the triangle is strong, and how the other polygons can not hold their shape as easily. Here is a short movie showing how the square is not able to resist being changed.
Can a square be made stronger by adding triangles? The answer is yes. If you start with a square, you can add a diagonal between opposite corners to make it very strong. The word diagonal means something that goes between two opposite corners. By adding a diagonal, you actually make two triangles inside of the square. Since each triangle is strong, the new reinforced square is stronger as well. The word reinforced means to make stronger.
If you take your Roger's Connection square and try to make it stronger by adding a diagonal, you will find that you can only do it if you bend the square as shown below.
Here is a movie showing the same thing:
Adding the diagonal (the yellow magnetic rod in this picture) makes the shape stronger, but it is no longer a square. Why is that? The reason is because currently, the Roger's Connection magnetic rods only come in one length. If you wanted to keep the square shape and strengthen it with a Roger's Connection diagonal, you would need a longer yellow piece which we do not currently have!
The shape you did make in the last picture above is called a parallelogram. Parallel means going in the same direction. In the parallelogram, each pair of opposite sides is going in the same direction. A parallelogram is a four-sided polygon where each of the four sides is the same length.
However, there is a way you can show how to make a stronger square, using cardboard and paper fasteners instead of Roger's Connection. First, get some cardboard or posterboard, and four paper fasteners. Cut four pieces five and a half inches long and one inch wide. Cut another piece ten inches long and one inch wide. Carefully make holes in the ends of each short piece, centered, and one half inch from the ends. Make just one hole in one end of the long piece. Make the holes just big enough to allow the paper fasteners to fit. Ask a parent to help you cut the cardboard or posterboard, and make the holes, so that this can be done safely. Do not use scissors or any sharp tools to make the holes by yourself unless you have permission from a parent! It is better and smarter to ask for help than to use something dangerous by yourself! Here is what these pieces should look like when you are finished. This picture also shows the paper fasteners you will need.
Next, use the paper fasteners and four of the five pieces to make a square as shown.
Hold two of the opposite (diagonal) corners and move them back and forth to see how flexible and unsupported this square is, as shown in the movie below.
Just in case you can't see the movie, here are some pictures of some of the different ways the square can be distorted. The word distorted means that something has been changed from how it was in it's original natural state.
Next, remove one of the fasteners and add the fifth long piece as shown below, reconnecting the corner. Note that the long piece doesn't have a second hole at the other end yet. Also note that the long piece is covering up one of the fasteners at the bottom right of the picture. At this point, you can still adjust the square to many different shapes (many different parallelograms). Adjust the pieces so that they look like the picture below.
Now we want to make the square strong by using triangles. Take a pencil or pen, and make a small dot on the bottom right of the long piece, just above the hidden fastener underneath. In the picture below, the dot is blue.
Next, again with the help of a parent if you don't have permission to use a sharp tool by yourself, make a hole like the others at the dot you just made in the long piece. Then, remove the fastener underneath, and put that corner back together, now also going through the hole in the long piece that you just made. When you are done it should look like this:
Finally, if you like, you can cut off the extra length of the long piece as shown below to make it neater:
Now try to adjust the square back into a parallelogram. Right away, you can see that the square is much stronger, and you can no longer move it as before (unless you bend the paper). This idea of making weak squares much stronger is used in the real world all of the time. Look again at the picture of the second bridge. You will see many shapes that are similar to squares and rectangles that have been made much stronger using exactly the same technique that we just used above. (A technique is a way of doing something.) In this technique, we added a diagonal that connects between two opposite corners of a square. This method works just as well for rectangles. This is one example of special knowledge that engineers and designers use to build things in the real world, like buildings, bridges, and airplanes, that are strong and safe to protect the people that use them. Remember all of the squares and rectangles in the earlier pictures of buildings being constructed? Here is a picture of a similar building, in which triangles have been added to make the building stronger, exactly as we have made squares stronger by adding a diagonal.
By building with Roger's Connection, you will naturally learn this principle, and discover that many of your designs can be made stronger by building with triangles. You will also discover many other important ideas about shapes and how they are made strong so that they can be used in practical ways in the real world.
We can make a square stronger in another way. Here we start with a four-sided polygon, a parallelogram, and show how it can be distorted. Like before, it doesn't start out very strong. Next, we adjust the parallelogram to make a square. And finally, we add the four yellow magnetic rods to make a little pyramid. This has made the square stronger by using triangles in another way. You will find that there are many ways to use triangles to make shapes stronger. In this example, we have also built our first three-dimensional shape. Two-dimensional shapes lie flat on a table or on a piece of paper, while three-dimensional shapes rise up above the table as in the pyramid below.
Can other polygons be made stronger using triangles? Yes. Using Roger's Connection, we can make a five-sided polygon stronger in a similar way to what we just did with a four-sided polygon. First we make a five-sided polygon and show how easily it can be changed, and is not very strong. Next we adjust it to be a regular polygon, a pentagon, as shown in the middle picture below. And finally we add five more magnetic rods shown in yellow below. By creating these five triangles, we have made the pentagon much stronger. The final three-dimensional shape we just made is another kind of pyramid. The fancy name for this shape is a pentagonal pyramid. Pent means five, and the pentagon has five sides.
Now let's try this with a six-sided polygon. First we show how the six-sided polygon is not very strong by itself. Then we adjust it to make it a regular polygon, a hexagon. And finally we add six magnetic rods which makes the hexagon much stronger by forming six triangles.
Did you notice something different about this shape? When we added the six magnetic rods the shape stayed flat on the table, and remained a two-dimensional shape instead of becoming a three-dimensional shape. The hexagon is a special shape that has this unique property. Hexagons can also contain circles very neatly, by placing one in each triangle.
Hexagons also fit together very nicely when put side-by-side, into an arrangement called a hexagonal grid.
And this hexagonal grid can also contain circles in another very compact way.
Because of all of these special characteristics of hexagons, nature uses hexagons in many different ways. In the photos below, you can see two examples. On the left, you can see the many hexagons in a honeycomb where bees store honey and take care of their young. In the picture on the right, you can see part of the eye of a fly as seen through a microscope. Each little bump is sensitive to light so the fly can see.
So far, we have been talking about shapes that are flat and can be drawn on a piece of paper, like the triangle, square, pentagon, and hexagon. But many things in the real world are not flat at all. How can we take the basic flat polygon shapes, and make three-dimensional shapes that aren't flat? This can be done very simply, or it can be very complicated. Let's take a look at some simple but interesting examples.
This is the simplest three-dimensional shape, called a tetrahedron. You have actually already built it in an earlier part of this lesson. Let's build it again, and discover several different ways to make it. Tetra means four, and the tetrahedron has four sides which are all triangles, so the tetrahedron is very strong. Click here to learn how to make the tetrahedron in several different ways. When you want to return to this web page to continue, click on the Back button at the top of your web browser. Have you made the tetrahedron? You will notice that when you pick up this shape and turn it in any direction, all of the triangles look the same, and you can't really tell which one you built first anymore (unless you used several colors). This shape is called a tetrahedron because tetra means four. The tetrahedron has four sides that are all triangles. Remember the word polygon? That referred to flat shapes. In the same way the word polyhedron, refers to a three-dimensional shapes - that is, a shape that is not flat, and has straight edges. With Roger's Connection, magnetic rods take the place of edges. Just as there are many kinds of polygons, there are also many kinds of polyhedra. The first and simplest polyhedron that you just made is called a tetrahedron. Because a tetrahedron is made entirely of triangles, it is very strong, and keeps its shape well. In the same way that triangles make strong flat shapes, they also make strong shapes in three-dimensions. The atoms inside a diamond are arranged as many connected tetrahedrons, and that is why a diamond is such a hard and strong material.
Can we build a polyhedron with square sides instead of triangles? Let's try, and build a cube. A cube is like a three-dimensional square, or a box shape.
Let's start with a square.
Next, add four more pieces like this:
Finally, add a top square like this.
If you have been successful, you have made what is called a cube. The cube is made of six squares. You probably discovered that this was not easy to do! The cube we made for this photograph wanted to fall apart very badly, and I had to steady it many times to make this picture! Remember how unsteady the square was? That unstable character shows itself again in the cube. Six unsteady squares combine to make a very unsteady cube! It is very easy to distort.
Now remember again how strong the tetrahedron was. Now you can see that building with triangles, even in three dimensions, results in very strong designs, and that building with squares results in much weaker designs. Often when building with squares, designers and engineers try to make them stronger by adding diagonals to form triangles.
Let's make another design using triangles that is also very strong, and a very attractive design too. Let's make a bigger tetrahedron. Here are step-by-step instructions. We will use different colors to make the steps easy to follow. First make this large triangle made of three smaller triangles.
Next, build up the red triangle into a tetrahedron.
Now, build up the blue triangle into a tetrahedron.
And now build up the green triangle into a tetrahedron.
Next, add three more pieces like these in yellow to build another triangle..
And finally, complete the yellow tetrahedron at the top.
Congratulations! This design is officially called a two-frequency tetrahedron. Two-frequency basically means that it is two levels high. If you had enough pieces, you could make a three-frequency or larger tetrahedron, which would also be very strong.
Click here to learn how to make the octahedron in several different ways. When you want to return to this web page to continue, click on the Back button at the top of your web browser. Have you made the octahedron? As you rotate the octahedron in your hands, notice that it contains three squares and eight triangles. Can you find them? This shape is called an octahedron because octa means eight. The octahedron has eight sides that are all triangles.
Let's create another design that uses triangles to make squares stronger. Let's make something called a truss. Here are the steps:
First make three connected squares as follows. You will find that they can be changed into a parallelogram as a group, and are not very strong.
Next add four blue magnetic rods as shown and connect them at the top.
Next add four yellow magnetic rods as shown and connect them at the top.
Next add four red magnetic rods as shown and connect them at the top.
Finally add two more purple magnetic rods as shown on the top. and you have finished building the truss!
Truss structures similar to the one above are used when designers need to build something that is long and strong and lightweight, like the trusses used in the space station design below.
Click here to learn how to make the icosahedron. When you want to return to this web page to continue, click on the Back button at the top of your web browser. Have you made the icosahedron? As you rotate the icosahedron in your hands, notice that it contains twenty triangles. The beautiful icosahedron is more complicated and difficult to make than the others and will take some patience to build.
Using Roger's Connection, you can build many, many different shapes. If you remember to try to use triangles in your designs, then your designs will be strong. Without triangles, a design will be much weaker, or sometimes even impossible. One day you may be a designer or engineer and create things for other people to use that make use of some of the shapes we have been discussing. The lessons you learn today about these shapes and the use of triangles, may help you to create stronger and more successful designs in the future. Many other people have had to learn these lessons only from books. If you have followed these examples then you have a big advantage in having learned these lessons first-hand. This will help you remember these lessons much more easily!
Have fun, and may you create many wonderful and beautiful designs! | http://www.rogersconnection.com/triangles/ | 13 |
60 | We begin with the image you saw in the preceding lesson, showing the long form of the table with the "block" structure emphasized. You will recall that the two f blocks are written at the bottom merely to keep the table from becoming inconveniently wide; these two blocks actually go in between La-Hf and Ac-Db, respectively, in the d block.
To understand how the periodic table is organized, imagine that we write down a long horizontal list of the elements in order of their increasing atomic number. It would begin this way:
H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca...
Now if we look at the various physical and chemical properties of these elements, we would find that their values tend to increase or decrease with Z in a manner that reveals a repeating pattern— that is, a periodicity. For the elements listed above, these breaks can be indicated by the vertical bars shown here in color:
H He | Li Be B C N O F Ne | Na Mg Al Si P S Cl Ar | K Ca ...
Periods. To construct the table, we place each sequence in a separate row, which we call a period. The rows are aligned in such a way that the elements in each vertical column possess certain similarities. Thus the first short-period elements H and He are chemically similar to the elements Li and Ne at the beginning and end of the second period. The first period is split in order to place H above Li and He above Ne.
The "block" nomenclature shown above refers to the sub-orbital type (quantum number l, or s-p-d-f classification) of the highest-energy orbitals that are occupied in a given element. For n=1 there is no p block, and the s block is split so that helium is placed in the same group as the other inert gases, which it resembles chemically. For the second period (n=2) there is a p block but no d block; in the usual "long form" of the periodic table it is customary to leave a gap between these two blocks in order to accommodate the d blocks that occur at n=3 and above. At n=6 we introduce an f block, but in order to hold the table to reasonable dimensions the f blocks are placed below the main body of the table.
Groups. Each column of the periodic table is known as a group. The elements belonging to a given group bear a strong similarity in their chemical behaviors.
In the past, two different systems of Roman numerals and letters were used to denote the various groups. North Americans added the letter B to denote the d-block groups and A for the others; this is the system shown in the table above. The rest of the world used A for the d-block elements and B for the others. In 1985, a new international system was adopted in which the columns were simply labeled 1-18. Although this system has met sufficient resistance in North America to slow its incorporation into textbooks, it seems likely that the "one to eighteen" system will gradually take over as older professors (the main hold-outs!) retire.
Families. Chemists have long found it convenient to refer to the elements of different groups, and in some cases of spans of groups by the names indicated in the table shown below. The two of these that are most important for you to know are the noble gases and the transition metals.
The properties of an atom depend ultimately on the number of electrons in the various orbitals, and on the nuclear charge which determines the compactness of the orbitals. In order to relate the properties of the elements to their locations in the periodic table, it is often convenient to make use of a simplified view of the atom in which the nucleus is surrounded by one or more concentric spherical "shells", each of which consists of the highest-principal quantum number orbitals (always s- and p-orbitals) that contain at least one electron. The shell model (as with any scientific model) is less a description of the world than a simplified way of looking at it that helps us to understand and correlate diverse phenomena. The principal simplification here is that it deals only with the main group elements of the s- and p-blocks, omitting the d- and f-block elements whose properties tend to be less closely tied to their group numbers.
The electrons (denoted by the red dots) in the outer-most shell of an atom are the ones that interact most readily with other atoms, and thus play a major role in governing the chemistry of an element. Notice the use of noble-gas symbols to simplify the electron-configuration notation.
In particular, the number of outer-shell electrons (which is given by the rightmost digit in the group number) is a major determinant of an element's "combining power", or valence. The general trend is for an atom to gain or lose electrons, either directly (leading to formation of ions) or by sharing electrons with other atoms so as to achieve an outer-shell configuration of s2p6. This configuration, known as an octet, corresponds to that of one of the noble-gas elements of Group 18.
The above diagram shows the first three rows of what are known as the representative elements— that is, the s- and p-block elements only. As we move farther down (into the fourth row and below), the presence of d-electrons exerts a complicating influence which allows elements to exhibit multiple valences. This effect is especially noticeable in the transition-metal elements, and is the reason for not including the d-block with the representative elements at all.
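As a small illustration of the group-number rule for the representative elements (a sketch added here; it uses the modern 1-18 group labels and treats helium as the obvious exception), the number of outer-shell electrons can be read directly from the group number:

```python
# Added illustration: valence (outer-shell) electron count for representative elements,
# from the 1-18 group number. Helium (group 18 but only two electrons) is special-cased.
def valence_electrons(group: int, element: str = "") -> int:
    if element == "He":
        return 2
    if 1 <= group <= 2:        # s-block
        return group
    if 13 <= group <= 18:      # p-block
        return group - 10
    raise ValueError("d- and f-block elements are not covered by this simple rule")

print(valence_electrons(1))          # 1  (e.g. Na)
print(valence_electrons(16))         # 6  (e.g. S)
print(valence_electrons(18, "He"))   # 2
```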
Those electrons in the outmost or valence shell are especially important because they are the ones that can engage in the sharing and exchange that is responsible for chemical reactions; how tightly they are bound to the atom determines much of the chemistry of the element. The degree of binding is the result of two opposing forces: the attraction between the electron and the nucleus, and the repulsions between the electron in question and all the other electrons in the atom. All that matters is the net force, the difference between the nuclear attraction and the totality of the electron-electron repulsions.
We can simplify the shell model even further by imagining that the valence shell electrons are the only electrons in the atom, and that the nuclear charge has whatever value would be required to bind these electrons as tightly as is observed experimentally. Because the number of electrons in this model is less than the atomic number Z, the required nuclear charge will also be smaller. and is known as the effective nuclear charge. Effective nuclear charge is essentially the positive charge that a valence electron "sees".
Part of the difference between Z and Zeffective is due to other electrons in the valence shell, but this is usually only a minor contributor because these electrons tend to act as if they are spread out in a diffuse spherical shell of larger radius. The main actors here are the electrons in the much more compact inner shells which surround the nucleus and exert what is often called a shielding or "screening" effect on the valence electrons.
The formula for calculating effective nuclear charge is not very complicated, but we will skip a discussion of it here. An even simpler although rather crude procedure is to just subtract the number of inner-shell electrons from the nuclear charge; the result is a form of effective nuclear charge which is called the core charge of the atom.
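A minimal sketch of that crude core-charge estimate (added for illustration; it simply treats every electron inside the nearest noble-gas core as an inner-shell electron):

```python
# Added illustration: core charge = nuclear charge Z minus the inner-shell electrons,
# where the inner shells are approximated by the nearest smaller noble-gas core.
def core_charge(Z: int) -> int:
    noble_gas_cores = [2, 10, 18, 36, 54, 86]
    inner = max((n for n in noble_gas_cores if n < Z), default=0)
    return Z - inner

print(core_charge(11))  # Na: 11 - 10 = 1
print(core_charge(17))  # Cl: 17 - 10 = 7
```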
The concept of "size" is somewhat ambiguous when applied to the scale of atoms and molecules. The reason for this is apparent when you recall that an atom has no definite boundary; there is a finite (but very small) probability of finding the electron of a hydrogen atom, for example, 1 cm, or even 1 km from the nucleus. It is not possible to specify a definite value for the radius of an isolated atom; the best we can do is to define a spherical shell within whose radius some arbitrary percentage of the electron density can be found.
When an atom is combined with other atoms in a solid element or compound, an effective radius can be determined by observing the distances between adjacent rows of atoms in these solids. This is most commonly carried out by X-ray scattering experiments. Because of the different ways in which atoms can aggregate together, several different kinds of atomic radii can be defined.
Distances on the atomic scale have traditionally been expressed in Ångstrom units (1 Å = 10⁻⁸ cm = 10⁻¹⁰ m), but nowadays the picometer is preferred; 1 pm = 10⁻¹² m, so 1 Å = 100 pm.
A rough idea of the size of a metallic atom can be obtained simply by measuring the density of a sample of the metal. This tells us the number of atoms per unit volume of the solid. The atoms are assumed to be spheres of radius r in contact with each other, each of which sits in a cubic box of edge length 2r. The volume of each box is just the total volume of the solid divided by the number of atoms in that mass of the solid; the atomic radius r is then half the cube root of that per-atom volume.
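Here is a quick numerical sketch of that estimate (added for illustration; copper is used as an assumed example, taking its density as about 8.96 g/cm³ and its molar mass as about 63.55 g/mol):

```python
# Added illustration: estimating an atomic radius from a metal's density,
# using the cubic-box model described above.
AVOGADRO = 6.022e23          # atoms per mole

def atomic_radius_pm(density_g_cm3: float, molar_mass_g_mol: float) -> float:
    atoms_per_cm3 = density_g_cm3 / molar_mass_g_mol * AVOGADRO
    box_volume_cm3 = 1.0 / atoms_per_cm3        # one cubic "box" per atom
    edge_cm = box_volume_cm3 ** (1.0 / 3.0)     # box edge = 2r
    return (edge_cm / 2.0) * 1e10               # convert cm to pm

print(round(atomic_radius_pm(8.96, 63.55)))     # roughly 114 pm for copper
```

The result, roughly 114 pm, is in the right ballpark but noticeably smaller than the tabulated metallic radius of copper (about 128 pm), a reminder that the cubic-box picture is only a rough model.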
Although the radius of an atom or ion cannot be measured directly, in most cases it can be inferred from measurements of the distance between adjacent nuclei in a crystalline solid. This is most commonly carried out by X-ray scattering experiments. Because such solids fall into several different classes, several kinds of atomic radius are defined. Many atoms have several different radii; for example, sodium forms a metallic solid and thus has a metallic radius, it forms a gaseous molecule Na2 in the vapor phase (covalent radius), and of course it forms ionic solids such as NaCl.
Metallic radius is half the distance between nuclei in a metallic crystal.
Covalent radius is half the distance between like atoms that are bonded together in a molecule.
van der Waals radius is the effective radius of adjacent atoms which are not chemically bonded in a solid, but are presumably in "contact". An example would be the distance between the iodine atoms of adjacent I2 molecules in crystalline iodine.
Ionic radius is the effective radius of ions in solids such as NaCl. It is easy enough to measure the distance between adjacent rows of Na+ and Cl– ions in such a crystal, but there is no unambiguous way to decide what portions of this distance are attributable to each ion. The best one can do is make estimates based on studies of several different ionic solids (LiI, KI, NaI, for example) that contain one ion in common. Many such estimates have been made, and they turn out to be remarkably consistent.
The lithium ion is sufficiently small that in LiI, the iodide ions are in contact, so I-I distances are twice the ionic radius of I–. This is not true for KI, but in this solid, adjacent potassium and iodide ions are in contact, allowing estimation of the K+ ionic radius.
We would expect the size of an atom to depend mainly on the principal quantum number of the highest occupied orbital; in other words, on the "number of occupied electron shells". Since each row in the periodic table corresponds to an increment in n, atomic radius increases as we move down a column. The other important factor is the nuclear charge; the higher the atomic number, the more strongly will the electrons be drawn toward the nucleus, and the smaller the atom. This effect is responsible for the contraction we observe as we move across the periodic table from left to right.
The figure shows a periodic table in which the sizes of the atoms are represented graphically. The apparent discontinuities in this diagram reflect the difficulty of comparing the radii of atoms of metallic and nonmetallic bonding types. Radii of the noble gas elements are estimates from those of nearby elements.
A positive ion is always smaller than the neutral atom, owing to the diminished electron-electron repulsion. If a second electron is lost, the ion gets even smaller; for example, the ionic radius of Fe2+ is 76 pm, while that of Fe3+ is 65 pm. If formation of the ion involves complete emptying of the outer shell, then the decrease in radius is especially great.
The hydrogen ion H+ is in a class by itself; having no electron cloud at all, its radius is that of the bare proton, only about 0.001 pm— a contraction of 99.999%! Because the unit positive charge is concentrated into such a small volume of space, the charge density of the hydrogen ion is extremely high; it interacts very strongly with other matter, including water molecules, and in aqueous solution it exists only as the hydronium ion H3O+.
Negative ions are always larger than the parent ion; the addition of one or more electrons to an existing shell increases electron-electron repulsion which results in a general expansion of the atom.
An isoelectronic series is a sequence of species all having the same number of electrons (and thus the same amount of electron-electron repulsion) but differing in nuclear charge. Of course, only one member of such a sequence can be a neutral atom (neon, for example, in the series O2–, F–, Ne, Na+, Mg2+, Al3+). The effect of increasing nuclear charge on the radius is clearly seen: the radius shrinks steadily as Z increases across the series.
Chemical reactions are based largely on the interactions between the most loosely bound electrons in atoms, so it is not surprising that the tendency of an atom to gain, lose or share electrons is one of its fundamental chemical properties.
This term always refers to the formation of positive ions. In order to remove an electron from an atom, work must be done to overcome the electrostatic attraction between the electron and the nucleus; this work is called the ionization energy of the atom and corresponds to the endothermic process
M(g) → M+(g) + e–
in which M(g) stands for any isolated (gaseous) atom.
An atom has as many ionization energies as it has electrons. Electrons are always removed from the highest-energy occupied orbital. An examination of the successive ionization energies of the first ten elements (below) provides experimental confirmation that the binding of the two innermost electrons (1s orbital) is significantly different from that of the n=2 electrons. Successive ionization energies of an atom increase rapidly as reduced electron-electron repulsion causes the electron shells to contract, thus binding the electrons even more tightly to the nucleus.
Successive ionizations of the first ten elements
Note the very large jumps in the energies required to remove electrons from the 1s orbitals of atoms of the second-row elements Li-Ne.
Ionization energies increase with the nuclear charge Z as we move across the periodic table. They decrease as we move down the table because in each period the electron is being removed from a shell one step farther from the nucleus than in the atom immediately above it. This results in the familiar zig-zag lines when the first ionization energies are plotted as a function of Z.
This more detailed plot of the ionization energies of the atoms of the first ten elements reveals some interesting irregularities that can be related to the slightly lower energies (greater stabilities) of electrons in half-filled (spin-unpaired) relative to completely-filled subshells.
Finally, a more comprehensive survey of the ionization energies of the main group elements is shown below.
Some points to note:
Formation of a negative ion occurs when an electron from some external source enters the atom and become incorporated into the lowest energy orbital that possesses a vacancy. Because the entering electron is attracted to the positive nucleus, the formation of negative ions is usually exothermic. The energy given off is the electron affinity of the atom. For some atoms, the electron affinity appears to be slightly negative, suggesting that electron-electron repulsion is the dominant factor in these instances.
In general, electron affinities tend to be much smaller than ionization energies, suggesting that they are controlled by opposing factors having similar magnitudes. These two factors are, as before, the nuclear charge and electron-electron repulsion. But the latter, only a minor actor in positive ion formation, is now much more significant. One reason for this is that the electrons contained in the inner shells of the atom exert a collective negative charge that partially cancels the charge of the nucleus, thus exerting a so-called shielding effect which diminishes the tendency for negative ions to form.
Because of these opposing effects, the periodic trends in electron affinities are not as clear as are those of ionization energies. This is particularly evident in the first few rows of the periodic table, in which small effects tend to be magnified anyway because an added electron produces a large percentage increase in the number of electrons in the atom.
In general, we can say that electron affinities become more exothermic as we move from left to right across a period (owing to increased nuclear charge and smaller atom size). There are some interesting irregularities, however:
When two elements are joined in a chemical bond, the element that attracts the shared electrons more strongly is more electronegative. Elements with low electronegativities (the metallic elements) are said to be electropositive.
Moreover, the same atom can exhibit different electronegativities in different chemical environments, so the "electronegativity of an element" is only a general guide to its chemical behavior rather than an exact specification of its behavior in a particular compound. Nevertheless, electronegativity is eminently useful in summarizing the chemical behavior of an element. You will make considerable use of electronegativity when you study chemical bonding and the chemistry of the individual elements.
Because there is no single definition of electronegativity, any numerical scale for measuring it must of necessity be somewhat arbitrary. Most such scales are themselves based on atomic properties that are directly measurable and which relate in one way or the other to electron-attracting propensity. The most widely used of these scales was devised by Linus Pauling and is related to ionization energy and electron affinity. The Pauling scale runs from 0 to 4; the highest electronegativity, 4.0, is assigned to fluorine, while cesium has the lowest value of 0.7. Values less than about 2.2 are usually associated with electropositive, or metallic character. In the representation of the scale shown in the figure, the elements are arranged in rows corresponding to their locations in the periodic table. The correlation is obvious; electronegativity is associated with the higher rows and the rightmost columns.
The location of hydrogen on this scale reflects some of the significant chemical properties of this element. Although it acts like a metallic element in many respects (forming a positive ion, for example), it can also form hydride-ion (H–) solids with the more electropositive elements, and of course its ability to share electrons with carbon and other p-block elements gives rise to a very rich chemistry, including of course the millions of organic compounds.
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic.
There are hundreds of periodic table sites on the Web. Here are some that are especially worth knowing about.
WebElements (Mark Winters, Sheffield U., UK) The elements in this online periodic table are linked to an extensive variety of chemical and physical data as well as background, crystallographic, nuclear, electronic, biological and geological information. You can even hear how the Brits pronounce the names of the elements!
ChemiCool Periodic Table (MIT) is a nice-looking table in which you can click on individual elements to bring up more information.
It's Elemental - this is not so much a periodic table as a set of links to a set of excellent articles that focus on the history and uses of the different elements, each written by someone having a special knowledge or interest in a particular element. These articles appeared in an issue of Chemical & Engineering News that celebrated the 80th anniversary of that publication.
The Pictorial Periodic Table. This Phoenix College website is an interactive periodic table with a comprehensive database of element properties and isotopes, which can be searched and collated in novel and useful ways. Pictures of elements, periodic table art, music and educational games are available.
ChemistryCoach's Periodic Table links - a huge but well-organized list of every possible kind of specialized periodic table you can think of, as well as games, software, etc.
Main Group Elements - a somewhat more advanced comparison of these especially important elements.
Mark Leach has assembled a very well-done and interesting page of historic and alternative periodic tables.
Comic Book periodic table (John Selegue and Jim Holler, U. Kentucky). - if both comics and chemistry are important in your life, you'll love this!
A periodic table for your Palm or Win-CE handheld computer; download from this site.
Tom Lehrer's Element Song lyrics
Periodic tables to wear - Chemistry-related apparel including periodic-table T-shirts, neckties, etc. | http://www.chem1.com/acad/webtext/atoms/atpt-6.html | 13 |
186 | Spacecraft propulsion is any method used to accelerate spacecraft and artificial satellites. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. However, most spacecraft today are propelled by forcing a gas from the back/rear of the vehicle at very high speed through a supersonic de Laval nozzle. This sort of engine is called a rocket engine.
All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use them for north-south stationkeeping and orbit raising. Interplanetary vehicles mostly use chemical rockets as well, although a few have used ion thrusters and Hall effect thrusters (two different types of electric propulsion) to great success.
Artificial satellites must be launched into orbit and once there they must be placed in their nominal orbit. Once in the desired orbit, they often need some form of attitude control so that they are correctly pointed with respect to the Earth, the Sun, and possibly some astronomical object of interest. They are also subject to drag from the thin atmosphere, so that to stay in orbit for a long period of time some form of propulsion is occasionally necessary to make small corrections (orbital stationkeeping). Many satellites need to be moved from one orbit to another from time to time, and this also requires propulsion. A satellite's useful life is over once it has exhausted its ability to adjust its orbit.
Spacecraft designed to travel further also need propulsion methods. They need to be launched out of the Earth's atmosphere just as satellites do. Once there, they need to leave orbit and move around.
For interplanetary travel, a spacecraft must use its engines to leave Earth orbit. Once it has done so, it must somehow make its way to its destination. Current interplanetary spacecraft do this with a series of short-term trajectory adjustments. In between these adjustments, the spacecraft simply falls freely along its trajectory. The most fuel-efficient means to move from one circular orbit to another is with a Hohmann transfer orbit: the spacecraft begins in a roughly circular orbit around the Sun. A short period of thrust in the direction of motion accelerates or decelerates the spacecraft into an elliptical orbit around the Sun which is tangential to its previous orbit and also to the orbit of its destination. The spacecraft falls freely along this elliptical orbit until it reaches its destination, where another short period of thrust accelerates or decelerates it to match the orbit of its destination. Special methods such as aerobraking or aerocapture are sometimes used for this final orbital adjustment.
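Where a rough number is helpful, the two burns of a Hohmann transfer can be estimated from the vis-viva equation. The following is a minimal Python sketch, not part of the original text; the solar gravitational parameter and the 1 AU / 1.524 AU orbit radii are illustrative assumptions.

```python
import math

MU_SUN = 1.327e20  # gravitational parameter of the Sun, m^3/s^2

def hohmann_burns(r1, r2, mu=MU_SUN):
    """Delta-v of the two impulsive burns of a Hohmann transfer between
    circular orbits of radius r1 and r2 (metres), from the vis-viva equation."""
    a_transfer = 0.5 * (r1 + r2)                              # transfer-ellipse semi-major axis
    v_circ1 = math.sqrt(mu / r1)
    v_circ2 = math.sqrt(mu / r2)
    v_depart = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))  # transfer-orbit speed at r1
    v_arrive = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))  # transfer-orbit speed at r2
    return abs(v_depart - v_circ1), abs(v_circ2 - v_arrive)

AU = 1.496e11
dv1, dv2 = hohmann_burns(1.0 * AU, 1.524 * AU)  # Earth's orbit to Mars' orbit
print(f"{dv1:.0f} m/s + {dv2:.0f} m/s = {dv1 + dv2:.0f} m/s")  # roughly 2,900 + 2,600 m/s
```

For the Earth-to-Mars heliocentric case this gives a total on the order of 5 to 6 km/s, consistent with the short-thrust-then-coast picture described above.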
Some spacecraft propulsion methods such as solar sails provide very low but inexhaustible thrust; an interplanetary vehicle using one of these methods would follow a rather different trajectory, either constantly thrusting against its direction of motion in order to decrease its distance from the Sun or constantly thrusting along its direction of motion to increase its distance from the Sun. The concept has been successfully tested by the Japanese IKAROS solar sail spacecraft.
Spacecraft for interstellar travel also need propulsion methods. No such spacecraft has yet been built, but many designs have been discussed. Since interstellar distances are very great, a tremendous velocity is needed to get a spacecraft to its destination in a reasonable amount of time. Acquiring such a velocity on launch and getting rid of it on arrival will be a formidable challenge for spacecraft designers.
When in space, the purpose of a propulsion system is to change the velocity, or v, of a spacecraft. Since this is more difficult for more massive spacecraft, designers generally discuss momentum, mv. The amount of change in momentum is called impulse. So the goal of a propulsion method in space is to create an impulse.
When launching a spacecraft from the Earth, a propulsion method must overcome a higher gravitational pull to provide a positive net acceleration. In orbit, any additional impulse, even very tiny, will result in a change in the orbit path.
The rate of change of velocity is called acceleration, and the rate of change of momentum is called force. To reach a given velocity, one can apply a small acceleration over a long period of time, or one can apply a large acceleration over a short time. Similarly, one can achieve a given impulse with a large force over a short time or a small force over a long time. This means that for maneuvering in space, a propulsion method that produces tiny accelerations but runs for a long time can produce the same impulse as a propulsion method that produces large accelerations for a short time. When launching from a planet, tiny accelerations cannot overcome the planet's gravitational pull and so cannot be used.
The Earth's surface is situated fairly deep in a gravity well. The escape velocity required to get out of it is 11.2 kilometers/second. As human beings evolved in a gravitational field of 1g (9.8 m/s²), an ideal propulsion system would be one that provides a continuous acceleration of 1g (though human bodies can tolerate much larger accelerations over short periods). The occupants of a rocket or spaceship having such a propulsion system would be free from all the ill effects of free fall, such as nausea, muscular weakness, reduced sense of taste, or leaching of calcium from their bones.
The law of conservation of momentum means that in order for a propulsion method to change the momentum of a space craft it must change the momentum of something else as well. A few designs take advantage of things like magnetic fields or light pressure in order to change the spacecraft's momentum, but in free space the rocket must bring along some mass to accelerate away in order to push itself forward. Such mass is called reaction mass.
In order for a rocket to work, it needs two things: reaction mass and energy. The impulse provided by launching a particle of reaction mass having mass m at velocity v is mv. But this particle has kinetic energy mv²/2, which must come from somewhere. In a conventional solid, liquid, or hybrid rocket, the fuel is burned, providing the energy, and the reaction products are allowed to flow out the back, providing the reaction mass. In an ion thruster, electricity is used to accelerate ions out the back. Here some other source must provide the electrical energy (perhaps a solar panel or a nuclear reactor), while the ions provide the reaction mass.
When discussing the efficiency of a propulsion system, designers often focus on effectively using the reaction mass. Reaction mass must be carried along with the rocket and is irretrievably consumed when used. One way of measuring the amount of impulse that can be obtained from a fixed amount of reaction mass is the specific impulse, the impulse per unit weight-on-Earth (typically designated by I_sp). The unit for this value is seconds. Since the weight on Earth of the reaction mass is often unimportant when discussing vehicles in space, specific impulse can also be discussed in terms of impulse per unit mass. This alternate form of specific impulse uses the same units as velocity (e.g. m/s), and in fact it is equal to the effective exhaust velocity of the engine (typically designated v_e). Confusingly, both values are sometimes called specific impulse. The two values differ by a factor of g_n, the standard acceleration due to gravity, 9.80665 m/s² (v_e = g_n · I_sp).
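As a small illustration of the two conventions, converting between specific impulse in seconds and effective exhaust velocity is just a multiplication by g_n. This is a minimal sketch; the 450 s figure is an illustrative value, not one quoted in the text.

```python
G_N = 9.80665  # standard acceleration due to gravity, m/s^2

def isp_to_exhaust_velocity(isp_seconds):
    """Convert specific impulse in seconds to effective exhaust velocity in m/s."""
    return isp_seconds * G_N

def exhaust_velocity_to_isp(ve_m_per_s):
    """Convert effective exhaust velocity in m/s back to specific impulse in seconds."""
    return ve_m_per_s / G_N

print(isp_to_exhaust_velocity(450))   # ~4,413 m/s for an Isp of 450 s
print(exhaust_velocity_to_isp(4413))  # ~450 s
```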
A rocket with a high exhaust velocity can achieve the same impulse with less reaction mass. However, the energy required for that impulse is proportional to the exhaust velocity, so that more mass-efficient engines require much more energy, and are typically less energy efficient. This is a problem if the engine is to provide a large amount of thrust. To generate a large amount of impulse per second, it must use a large amount of energy per second. So high-mass-efficient engines require enormous amounts of energy per second to produce high thrusts. As a result, most high-mass-efficient engine designs also provide lower thrust due to the unavailability of high amounts of energy.
Propulsion methods can be classified based on their means of accelerating the reaction mass. There are also some special methods for launches, planetary arrivals, and landings.
Reaction engines
A reaction engine is an engine which provides propulsion by expelling reaction mass, in accordance with Newton's third law of motion. This law of motion is most commonly paraphrased as: "For every action force there is an equal, but opposite, reaction force".
Examples include both duct engines and rocket engines, and more uncommon variations such as Hall effect thrusters, ion drives and mass drivers. Duct engines are obviously not used for space propulsion due to the lack of air; however some proposed spacecraft have these kinds of engines to assist takeoff and landing.
Delta-v and propellant
Exhausting the entire usable propellant of a spacecraft through the engines in a straight line in free space would produce a net velocity change to the vehicle; this number is termed 'delta-v' (Δv).
If the exhaust velocity is constant then the total Δv of a vehicle can be calculated using the rocket equation, where M is the mass of propellant, P is the mass of the payload (including the rocket structure), and v_e is the velocity of the rocket exhaust. This is known as the Tsiolkovsky rocket equation:

Δv = v_e ln((M + P) / P)

For historical reasons, as discussed above, v_e is sometimes written as

v_e = I_sp · g_n

where I_sp is the specific impulse of the rocket measured in seconds and g_n is the standard acceleration due to gravity at sea level.

For a high delta-v mission, the majority of the spacecraft's mass needs to be reaction mass. Since a rocket must carry all of its reaction mass, most of the initially-expended reaction mass goes towards accelerating reaction mass rather than payload. If the rocket has a payload of mass P, the spacecraft needs to change its velocity by Δv, and the rocket engine has exhaust velocity v_e, then the reaction mass M which is needed can be calculated using the rocket equation and the formula for I_sp:

M = P (e^(Δv/v_e) − 1)

For Δv much smaller than v_e, this equation is roughly linear, and little reaction mass is needed. If Δv is comparable to v_e, then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass.
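A short sketch of this calculation, using the same symbols (P for payload, M for reaction mass, v_e for exhaust velocity); the 1,000 kg payload, 4,500 m/s exhaust and 9,500 m/s delta-v are illustrative numbers only.

```python
import math

def delta_v(ve, propellant_mass, payload_mass):
    """Tsiolkovsky rocket equation: delta-v from exhaust velocity ve (m/s),
    propellant mass M and payload mass P (payload includes structure)."""
    return ve * math.log((propellant_mass + payload_mass) / payload_mass)

def propellant_needed(ve, dv, payload_mass):
    """Invert the rocket equation: reaction mass M needed for a given delta-v."""
    return payload_mass * (math.exp(dv / ve) - 1.0)

# Illustrative numbers: 1,000 kg payload, 4,500 m/s exhaust, 9,500 m/s mission delta-v
m_prop = propellant_needed(4500.0, 9500.0, 1000.0)
print(round(m_prop))                    # ~7,260 kg of reaction mass
print(delta_v(4500.0, m_prop, 1000.0))  # recovers ~9,500 m/s
```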
For a mission, for example, when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome by using fuel. It is typical to combine these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3–10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer.
Some effects such as Oberth effect can only be significantly utilised by high thrust engines such as rockets, i.e. engines that can produce a high g-force (thrust per unit mass, equal to delta-v per unit time).
Power use and propulsive efficiency
For all reaction engines (such as rockets and ion drives) some energy must go into accelerating the reaction mass. Every engine will waste some energy, but even assuming 100% efficiency, to accelerate an exhaust mass M to velocity v_e the engine will need energy amounting to

E = ½ M v_e²
This energy is not necessarily lost; some of it usually ends up as kinetic energy of the vehicle, and the rest is wasted in residual motion of the exhaust.
Comparing the rocket equation (which shows how much energy ends up in the final vehicle) and the above equation (which shows the total energy required) shows that even with 100% engine efficiency, certainly not all energy supplied ends up in the vehicle - some of it, indeed usually most of it, ends up as kinetic energy of the exhaust.
The exact amount depends on the design of the vehicle, and the mission. However there are some useful fixed points:
- if the exhaust velocity is fixed by the engine design, then for a given mission delta-v there is a particular exhaust velocity that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about ⅔ of the mission delta-v (see the energy computed from the rocket equation). Drives with a specific impulse that is both high and fixed, such as ion thrusters, have exhaust velocities that can be enormously higher than this ideal for many missions.
- if the exhaust velocity can be made to vary so that at each instant it is equal and opposite to the vehicle velocity, then the absolute minimum energy usage is achieved. When this is achieved, the exhaust stops in space and has no kinetic energy, and the propulsive efficiency is 100% - all the energy ends up in the vehicle (in principle such a drive would be 100% efficient; in practice there would be thermal losses from within the drive system and residual heat in the exhaust). However, in most cases this uses an impractical quantity of propellant, but it is a useful theoretical consideration. In any case, the vehicle has to be moving before the method can be applied.
Some drives (such as VASIMR or Electrodeless plasma thruster) actually can significantly vary their exhaust velocity. This can help reduce propellant usage or improve acceleration at different stages of the flight. However the best energetic performance and acceleration is still obtained when the exhaust velocity is close to the vehicle speed. Proposed ion and plasma drives usually have exhaust velocities enormously higher than that ideal (in the case of VASIMR the lowest quoted speed is around 15000 m/s compared to a mission delta-v from high Earth orbit to Mars of about 4000m/s).
It might be thought that adding power generation capacity is helpful, and while initially this can improve performance, this inevitably increases the weight of the power source, and eventually the mass of the power source and the associated engines and propellant dominates the weight of the vehicle, and then adding more power gives no significant improvement.
This is because, although solar power and nuclear power are virtually unlimited sources of energy, the maximum power they can supply is substantially proportional to the mass of the powerplant (i.e. the specific power takes a largely constant value which is dependent on the particular powerplant technology). For any given specific power, with a large v_e (which is desirable to save propellant mass), it turns out that the maximum acceleration is inversely proportional to v_e. Hence the time to reach a required delta-v is proportional to v_e. Thus the latter should not be too large.
In the ideal case P is useful payload and M is reaction mass (this corresponds to empty tanks having no mass, etc.). The energy required can simply be computed as

E = ½ M v_e²

This corresponds to the kinetic energy the expelled reaction mass would have at a speed equal to the exhaust speed. If the reaction mass had to be accelerated from zero speed to the exhaust speed, all energy produced would go into the reaction mass and nothing would be left for kinetic energy gain by the rocket and payload. However, if the rocket already moves and accelerates (the reaction mass is expelled in the direction opposite to the direction in which the rocket moves) less kinetic energy is added to the reaction mass. To see this, if, for example, v_e = 10 km/s and the speed of the rocket is 3 km/s, then the velocity of a small amount of expended reaction mass changes from 3 km/s forwards to 7 km/s rearwards. Thus, while the energy required is 50 MJ per kg of reaction mass, only 20 MJ is used for the increase in speed of the reaction mass. The remaining 30 MJ is the increase of the kinetic energy of the rocket and payload.
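The 50/20/30 MJ bookkeeping above can be reproduced in a few lines. This is a minimal sketch; the helper name and the frame conventions (speeds measured in the initial rest frame, exhaust expelled rearwards) are assumptions of the sketch rather than anything taken from the text.

```python
def energy_split_per_kg(ve, rocket_speed):
    """Energy bookkeeping per kg of reaction mass expelled while the rocket
    moves at rocket_speed; all speeds in m/s, measured in the initial rest frame."""
    produced = 0.5 * ve ** 2                        # energy released per kg of reaction mass
    exhaust_speed = rocket_speed - ve               # inertial-frame velocity of the exhaust
    to_exhaust = 0.5 * exhaust_speed ** 2 - 0.5 * rocket_speed ** 2  # its kinetic energy change
    to_vehicle = produced - to_exhaust              # what remains for rocket and payload
    return produced, to_exhaust, to_vehicle

print(energy_split_per_kg(10_000.0, 3_000.0))
# (50 MJ produced, 20 MJ to the reaction mass, 30 MJ to the rocket and payload)
```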
Thus the specific energy gain of the rocket in any small time interval is the energy gain of the rocket including the remaining fuel, divided by its mass, where the energy gain is equal to the energy produced by the fuel minus the energy gain of the reaction mass. The larger the speed of the rocket, the smaller the energy gain of the reaction mass; if the rocket speed is more than half of the exhaust speed the reaction mass even loses energy on being expelled, to the benefit of the energy gain of the rocket; the larger the speed of the rocket, the larger the energy loss of the reaction mass.
Δε = v Δv

where ε is the specific energy of the rocket (potential plus kinetic energy), v is the current speed of the rocket, and Δv is a separate variable, not just the change in v. In the case of using the rocket for deceleration, i.e. expelling reaction mass in the direction of the velocity, Δv should be taken negative.
The formula is for the ideal case again, with no energy lost on heat, etc. The latter causes a reduction of thrust, so it is a disadvantage even when the objective is to lose energy (deceleration).
If the energy is produced by the mass itself, as in a chemical rocket, the fuel value has to be v_e²/2, where for the fuel value the mass of the oxidizer also has to be taken into account. A typical value is v_e = 4.5 km/s, corresponding to a fuel value of 10.1 MJ/kg. The actual fuel value is higher, but much of the energy is lost as waste heat in the exhaust that the nozzle was unable to extract.
The required energy is

E = ½ P v_e² (e^(Δv/v_e) − 1)

- for Δv ≪ v_e we have E ≈ ½ P Δv v_e
- for a given Δv, the minimum energy is needed if v_e = 0.6275 Δv, requiring an energy of E = 0.772 P (Δv)² (a numerical check follows below)
- In the case of acceleration in a fixed direction, and starting from zero speed, and in the absence of other forces, this is 54.4% more than just the final kinetic energy of the payload. In this optimal case the initial mass is 4.92 times the final mass.
These results apply for a fixed exhaust speed.
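A quick numerical check of these fixed-exhaust-speed results: scanning exhaust velocities for a given delta-v recovers the minimum-energy point at roughly v_e ≈ 0.63 Δv and the 4.9 mass ratio. The scan range and step below are arbitrary choices of this sketch.

```python
import math

def total_energy(ve, dv, payload_mass):
    """E = 1/2 * M * ve^2, with M taken from the rocket equation (fixed exhaust speed)."""
    return 0.5 * payload_mass * (math.exp(dv / ve) - 1.0) * ve ** 2

dv = 10_000.0  # arbitrary mission delta-v, m/s
best_ve = min(range(1_000, 30_000, 10), key=lambda ve: total_energy(ve, dv, 1.0))
print(best_ve / dv)            # ~0.63, i.e. an exhaust velocity of roughly 0.6275 times the delta-v
print(math.exp(dv / best_ve))  # ~4.9, the corresponding initial-to-final mass ratio
```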
Due to the Oberth effect and starting from a nonzero speed, the required potential energy needed from the propellant may be less than the increase in energy in the vehicle and payload. This can be the case when the reaction mass has a lower speed after being expelled than before – rockets are able to liberate some or all of the initial kinetic energy of the propellant.
Also, for a given objective such as moving from one orbit to another, the required Δv may depend greatly on the rate at which the engine can produce Δv, and maneuvers may even be impossible if that rate is too low. For example, a launch to LEO normally requires a Δv of ca. 9.5 km/s (mostly for the speed to be acquired), but if the engine could produce Δv at a rate of only slightly more than g, it would be a slow launch requiring altogether a very large Δv (think of hovering without making any progress in speed or altitude: it would cost a Δv of 9.8 m/s each second). If the possible rate is only g or less, the maneuver can not be carried out at all with this engine.
The power is given by

P = ½ m a v_e = ½ F v_e

where F is the thrust and a the acceleration due to it. Thus the theoretically possible thrust per unit power is 2 divided by the specific impulse in m/s. The thrust efficiency is the actual thrust as a percentage of this.
If, for example, solar power is used, this restricts the available power P; in the case of a large v_e the possible acceleration is inversely proportional to it, hence the time to reach a required delta-v is proportional to v_e; with 100% efficiency:
- for Δv ≪ v_e we have t ≈ m v_e Δv / (2P)
- power 1000 W, mass 100 kg, Δv = 5 km/s, v_e = 16 km/s, takes 1.5 months.
- power 1000 W, mass 100 kg, Δv = 5 km/s, v_e = 50 km/s, takes 5 months.
Thus v_e should not be too large; the short sketch below reproduces these figures.
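A minimal sketch of that approximation, t ≈ m v_e Δv / (2P); the 30-day month used for the conversion is an arbitrary convention of this example.

```python
def time_to_delta_v(power_w, mass_kg, dv, ve):
    """Approximate burn time for dv << ve with fixed power and 100% efficiency:
    t ~ m * ve * dv / (2 * P)."""
    return mass_kg * ve * dv / (2.0 * power_w)

SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month, an arbitrary convention
for ve in (16_000.0, 50_000.0):
    t = time_to_delta_v(1000.0, 100.0, 5_000.0, ve)
    print(f"ve = {ve / 1000:.0f} km/s -> {t / SECONDS_PER_MONTH:.1f} months")
# ve = 16 km/s -> ~1.5 months; ve = 50 km/s -> ~4.8 months
```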
Power to thrust ratio
The power to thrust ratio is simply:

P / F = ½ v_e

Thus for any vehicle power P, the thrust that may be provided is:

F = 2 P / v_e
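In code, the same relation looks like this; the 1 kW and 16 km/s numbers are illustrative, and 100% conversion of supplied power into jet power is assumed.

```python
def max_thrust(jet_power_w, ve):
    """Upper bound on thrust for a given jet power and exhaust velocity: F = 2P / ve."""
    return 2.0 * jet_power_w / ve

print(max_thrust(1000.0, 16_000.0))  # 0.125 N from 1 kW at 16 km/s
```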
Suppose we want to send a 10,000 kg space probe to Mars. The required Δv from LEO is approximately 3000 m/s, using a Hohmann transfer orbit. For the sake of argument, let us say that the following thrusters may be used:
|Engine||Effective exhaust velocity (km/s)||Specific impulse (s)||Mass of propellant (kg)||Energy required (GJ)||Energy per kg of propellant||Minimum power/thrust||Power generator mass/thrust*|
|Solid rocket||1||100||190,000||95||500 kJ||0.5 kW/N||N/A|
|Bipropellant rocket||5||500||8,200||103||12.6 MJ||2.5 kW/N||N/A|
|Ion thruster||50||5,000||620||775||1.25 GJ||25 kW/N||25 kg/N|
* - assumes a specific power of 1 kW/kg
Observe that the more fuel-efficient engines can use far less fuel; their propellant mass is almost negligible (relative to the mass of the payload and the engine itself) for some of the engines. However, note also that these require a large total amount of energy. For Earth launch, engines require a thrust to weight ratio of more than one. To do this with the ion or more theoretical electrical drives, the engine would have to be supplied with one to several gigawatts of power, equivalent to a major metropolitan generating station. From the table it can be seen that this is clearly impractical with current power sources.
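The propellant-mass and energy columns of the table can be reproduced from the rocket equation and E = ½ M v_e². This is a sketch; the engine labels simply mirror the rows above and are not additional data.

```python
import math

def mission_budget(ve, dv, payload_kg):
    """Propellant mass (rocket equation), total energy E = 1/2 M ve^2,
    and energy per kg of propellant, for a fixed exhaust velocity."""
    propellant = payload_kg * (math.exp(dv / ve) - 1.0)
    energy = 0.5 * propellant * ve ** 2
    return propellant, energy, 0.5 * ve ** 2

# The 10,000 kg Mars-probe example with a 3,000 m/s delta-v
for name, ve in (("solid rocket", 1_000.0), ("bipropellant", 5_000.0), ("ion thruster", 50_000.0)):
    m, e, e_per_kg = mission_budget(ve, 3_000.0, 10_000.0)
    print(f"{name:12s}: {m:9.0f} kg propellant, {e / 1e9:5.0f} GJ, {e_per_kg / 1e6:8.2f} MJ/kg")
# roughly reproduces the 190,000 kg / 95 GJ, 8,200 kg / 103 GJ and 620 kg / 775 GJ rows above
```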
Alternative approaches include some forms of laser propulsion, where the reaction mass does not provide the energy required to accelerate it, with the energy instead being provided from an external laser or other beamed power system. Small models of some of these concepts have flown, although the engineering problems are complex and the ground based power systems are not a solved problem.
Instead, a much smaller, less powerful generator may be included which will take much longer to generate the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, and would be insufficient for launching from the Earth. However, over long periods in orbit where there is no friction, the desired velocity will eventually be achieved. For example, it took SMART-1 more than a year to reach the Moon, whereas with a chemical rocket it takes a few days. Because the ion drive needs much less fuel, the total launched mass is usually lower, which typically results in a lower overall cost, but the journey takes longer.
Mission planning therefore frequently involves adjusting and choosing the propulsion system so as to minimise the total cost of the project, and can involve trading off launch costs and mission duration against payload fraction.
Rocket engines
Most rocket engines are internal combustion heat engines (although non combusting forms exist). Rocket engines generally produce a high temperature reaction mass, as a hot gas. This is achieved by combusting a solid, liquid or gaseous fuel with an oxidiser within a combustion chamber. The extremely hot gas is then allowed to escape through a high-expansion ratio nozzle. This bell-shaped nozzle is what gives a rocket engine its characteristic shape. The effect of the nozzle is to dramatically accelerate the mass, converting most of the thermal energy into kinetic energy. Exhaust speeds as high as 10 times the speed of sound at sea level are common.
Rocket engines provide essentially the highest specific powers and high specific thrusts of any engine used for spacecraft propulsion.
Ion propulsion rockets can heat a plasma or charged gas inside a magnetic bottle and release it via a magnetic nozzle, so that no solid matter need come in contact with the plasma. Of course, the machinery to do this is complex, but research into nuclear fusion has developed methods, some of which have been proposed to be used in propulsion systems, and some have been tested in a lab.
See rocket engine for a listing of various kinds of rocket engines using different heating methods, including chemical, electrical, solar, and nuclear.
Electromagnetic propulsion
Rather than relying on high temperature and fluid dynamics to accelerate the reaction mass to high speeds, there are a variety of methods that use electrostatic or electromagnetic forces to accelerate the reaction mass directly. Usually the reaction mass is a stream of ions. Such an engine typically uses electric power, first to ionize atoms, and then to create a voltage gradient to accelerate the ions to high exhaust velocities.
For these drives, at the highest exhaust speeds, energetic efficiency and thrust are all inversely proportional to exhaust velocity. Their very high exhaust velocity means they require huge amounts of energy and thus with practical power sources provide low thrust, but use hardly any fuel.
For some missions, particularly reasonably close to the Sun, solar energy may be sufficient, and has very often been used, but for others further out or at higher power, nuclear energy is necessary; engines drawing their power from a nuclear source are called nuclear electric rockets.
With any current source of electrical power, chemical, nuclear or solar, the maximum amount of power that can be generated limits the amount of thrust that can be produced to a small value. Power generation adds significant mass to the spacecraft, and ultimately the weight of the power source limits the performance of the vehicle.
Current nuclear power generators are approximately half the weight of solar panels per watt of energy supplied, at terrestrial distances from the Sun. Chemical power generators are not used due to the far lower total available energy. Beamed power to the spacecraft shows some potential.
Some electromagnetic methods:
- Ion thrusters (accelerate ions first and later neutralize the ion beam with an electron stream emitted from a cathode called a neutralizer)
- Electrothermal thrusters (electromagnetic fields are used to generate a plasma to increase the heat of the bulk propellant, the thermal energy imparted to the propellant gas is then converted into kinetic energy by a nozzle of either physical material construction or by magnetic means)
- Electromagnetic thrusters (ions are accelerated either by the Lorentz Force or by the effect of electromagnetic fields where the electric field is not in the direction of the acceleration)
- Mass drivers (for propulsion)
In electrothermal and electromagnetic thrusters, both ions and electrons are accelerated simultaneously, so no neutralizer is required.
Without internal reaction mass
The law of conservation of momentum is usually taken to imply that any engine which uses no reaction mass cannot accelerate the center of mass of a spaceship (changing orientation, on the other hand, is possible). But space is not empty, especially space inside the Solar System; there are gravitation fields, magnetic fields, electromagnetic waves, solar wind and solar radiation. Electromagnetic waves in particular are known to contain momentum, despite being massless; specifically the momentum flux density P of an EM wave is quantitatively 1/c times the Poynting vector S, i.e. P = S/c, where c is the velocity of light. Field propulsion methods which do not rely on reaction mass thus must try to take advantage of this fact by coupling to a momentum-bearing field such as an EM wave that exists in the vicinity of the craft. However, since many of these phenomena are diffuse in nature, corresponding propulsion structures need to be proportionately large.
There are several different space drives that need little or no reaction mass to function. A tether propulsion system employs a long cable with a high tensile strength to change a spacecraft's orbit, such as by interaction with a planet's magnetic field or through momentum exchange with another object. Solar sails rely on radiation pressure from electromagnetic energy, but they require a large collection surface to function effectively. The magnetic sail deflects charged particles from the solar wind with a magnetic field, thereby imparting momentum to the spacecraft. A variant is the mini-magnetospheric plasma propulsion system, which uses a small cloud of plasma held in a magnetic field to deflect the Sun's charged particles. An E-sail would use very thin and lightweight wires holding an electric charge to deflect these particles, and may have more controllable directionality.
As a proof of concept, NanoSail-D became the first solar-sail nanosatellite to orbit the Earth. There are plans to add solar sails to future Earth orbit satellites, enabling them to de-orbit and burn up once they are no longer needed. CubeSail aims to tackle space junk.
A satellite or other space vehicle is subject to the law of conservation of angular momentum, which constrains a body from a net change in angular velocity. Thus, for a vehicle to change its relative orientation without expending reaction mass, another part of the vehicle may rotate in the opposite direction. Non-conservative external forces, primarily gravitational and atmospheric, can contribute up to several degrees per day to angular momentum, so secondary systems are designed to "bleed off" undesired rotational energies built up over time. Accordingly, many spacecraft utilize reaction wheels or control moment gyroscopes to control orientation in space.
A gravitational slingshot can carry a space probe onward to other destinations without the expense of reaction mass. By harnessing the gravitational energy of other celestial objects, the spacecraft can pick up kinetic energy. However, even more energy can be obtained from the gravity assist if rockets are used.
Planetary and atmospheric propulsion
Launch mechanisms
High thrust is of vital importance for Earth launch. Thrust has to be greater than weight (see also gravity drag). Many of the propulsion methods above give a thrust/weight ratio of much less than 1, and so cannot be used for launch.
All current spacecraft use chemical rocket engines (bipropellant or solid-fuel) for launch. Other power sources such as nuclear have been proposed and tested, but safety, environmental and political considerations have so far curtailed their use.
One advantage that spacecraft have in launch is the availability of infrastructure on the ground to assist them. Proposed non-rocket spacelaunch ground-assisted launch mechanisms include:
- Space elevator (a geostationary tether to orbit)
- Launch loop (a very fast enclosed rotating loop about 80 km tall)
- Space fountain (a very tall building held up by a stream of masses fired from base)
- Orbital ring (a ring around the Earth with spokes hanging down off bearings)
- Hypersonic skyhook (a fast spinning orbital tether)
- Electromagnetic catapult (railgun, coilgun) (an electric gun)
- Rocket sled launch
- Space gun (Project HARP, ram accelerator) (a chemically powered gun)
- Beam-powered propulsion rockets and jets powered from ground via a beam
- High-altitude platforms to assist initial stage
- Orbital airship
Airbreathing engines
Studies generally show that conventional air-breathing engines, such as ramjets or turbojets are basically too heavy (have too low a thrust/weight ratio) to give any significant performance improvement when installed on a launch vehicle itself. However, launch vehicles can be air launched from separate lift vehicles (e.g. B-29, Pegasus Rocket and White Knight) which do use such propulsion systems. Jet engines mounted on a launch rail could also be so used.
On the other hand, very lightweight or very high speed engines have been proposed that take advantage of the air during ascent:
- SABRE - a lightweight hydrogen fuelled turbojet with precooler
- ATREX - a lightweight hydrogen fuelled turbojet with precooler
- Liquid air cycle engine - a hydrogen fuelled jet engine that liquifies the air before burning it in a rocket engine
- Scramjet - jet engines that use supersonic combustion
Normal rocket launch vehicles fly almost vertically before rolling over at an altitude of some tens of kilometers before burning sideways for orbit; this initial vertical climb wastes propellant but is optimal as it greatly reduces air drag. Airbreathing engines burn propellant much more efficiently and this would permit a far flatter launch trajectory; the vehicles would typically fly approximately tangentially to the Earth's surface until leaving the atmosphere, then perform a rocket burn to bridge the final delta-v to orbital velocity.
Planetary arrival and landing
When a vehicle is to enter orbit around its destination planet, or when it is to land, it must adjust its velocity. This can be done using all the methods listed above (provided they can generate a high enough thrust), but there are a few methods that can take advantage of planetary atmospheres and/or surfaces.
- Aerobraking allows a spacecraft to reduce the high point of an elliptical orbit by repeated brushes with the atmosphere at the low point of the orbit. This can save a considerable amount of fuel since it takes much less delta-V to enter an elliptical orbit compared to a low circular orbit. Since the braking is done over the course of many orbits, heating is comparatively minor, and a heat shield is not required. This has been done on several Mars missions such as Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter, and at least one Venus mission, Magellan.
- Aerocapture is a much more aggressive maneuver, converting an incoming hyperbolic orbit to an elliptical orbit in one pass. This requires a heat shield and much trickier navigation, since it must be completed in one pass through the atmosphere, and unlike aerobraking no preview of the atmosphere is possible. If the intent is to remain in orbit, then at least one more propulsive maneuver is required after aerocapture—otherwise the low point of the resulting orbit will remain in the atmosphere, resulting in eventual re-entry. Aerocapture has not yet been tried on a planetary mission, but the re-entry skip by Zond 6 and Zond 7 upon lunar return were aerocapture maneuvers, since they turned a hyperbolic orbit into an elliptical orbit. On these missions, since there was no attempt to raise the perigee after the aerocapture, the resulting orbit still intersected the atmosphere, and re-entry occurred at the next perigee.
- a Ballute is an inflatable drag device
- Parachutes can land a probe on a planet or moon with an atmosphere, usually after the atmosphere has scrubbed off most of the velocity, using a heat shield.
- Airbags can soften the final landing.
- Lithobraking, or stopping by simply smashing into the target, is usually done by accident. However, it may be done deliberately with the probe expected to survive (see, for example, Deep Space 2), in which case very sturdy probes and low approach velocities are required.
Hypothetical methods
A variety of hypothetical propulsion techniques have been considered that would require entirely new principles of physics to be realized or that may not exist. To date, such methods are highly speculative and include:
- Diametric drive
- Pitch drive
- Bias drive
- Disjunction drive
- Alcubierre drive (a form of Warp drive)
- Differential sail
- Wormholes – theoretically possible, but unachievable in practice with current technology
- Woodward effect
- Reactionless drives – breaks the law of conservation of momentum; theoretically impossible
- EmDrive – tries to circumvent the law of conservation of momentum; may be theoretically impossible
- Photon rocket
- A "hyperspace" drive based upon Heim theory
Table of methods
Below is a summary of some of the more popular, proven technologies, followed by increasingly speculative methods.
Four numbers are shown. The first is the effective exhaust velocity: the equivalent speed that the propellant leaves the vehicle. This is not necessarily the most important characteristic of the propulsion method; thrust and power consumption and other factors can be. However:
- if the delta-v is much more than the exhaust velocity, then exorbitant amounts of fuel are necessary (see the section on calculations, above)
- if it is much more than the delta-v, then, proportionally more energy is needed; if the power is limited, as with solar energy, this means that the journey takes a proportionally longer time
The second and third are the typical amounts of thrust and the typical burn times of the method. Outside a gravitational potential small amounts of thrust applied over a long period will give the same effect as large amounts of thrust over a short period. (This result does not apply when the object is significantly influenced by gravity.)
The fourth is the maximum delta-v this technique can give (without staging). For rocket-like propulsion systems this is a function of mass fraction and exhaust velocity. Mass fraction for rocket-like systems is usually limited by propulsion system weight and tankage weight. For a system to achieve this limit, typically the payload may need to be a negligible percentage of the vehicle, and so the practical limit on some systems can be much lower.
|Method||Effective exhaust velocity (km/s)||Thrust (N)||Firing duration||Maximum delta-v (km/s)||Technology readiness level|
|Solid-fuel rocket||?||?||minutes||~ 7||9:Flight proven|
|Hybrid rocket||?||?||minutes||> 3||9:Flight proven|
|Monopropellant rocket||1 - 3 [citation needed]||0.1 - 100 [citation needed]||milliseconds-minutes||~ 3||9:Flight proven|
|Liquid-fuel rocket||?||?||minutes||~ 9||9:Flight proven|
|Electrostatic ion thruster||15 - 210 [citation needed]||?||months/years||> 100||9:Flight proven|
|Hall effect thruster (HET)||8 - 50 [citation needed]||?||months/years||> 100||9:Flight proven|
|Resistojet rocket||2 - 6||10^-2 - 10||minutes||?||8:Flight qualified|
|Arcjet rocket||4 - 16||10^-2 - 10||minutes||?||8:Flight qualified [citation needed]|
|Field Emission Electric Propulsion (FEEP)||100 - 130||10^-6 - 10^-3||months/years||?||8:Flight qualified|
|Pulsed plasma thruster (PPT)||~ 20||~ 0.1||~2,000-10,000 hours||?||7:Prototype demoed in space|
|Dual mode propulsion rocket||1 - 4.7||0.1 - 10^7||milliseconds-minutes||~ 3 - 9||7:Prototype demoed in space|
|Solar sail||299,790:Light||9 N/km^2 @ 1 AU||indefinite||?||9:Light pressure attitude-control flight proven; 6:Deploy-only demoed in space; 5:Light-sail validated in lit vacuum|
|Tripropellant rocket||2.5 - 5.3 [citation needed]||0.1 - 10^7 [citation needed]||minutes||~ 9||6:Prototype demoed on ground|
|Magnetoplasmadynamic thruster (MPD)||20 - 100||100||weeks||?||6:Model, 1 kW demoed in space|
|Nuclear thermal rocket||9||10^7||minutes||> ~ 20||6:Prototype demoed on ground|
|Mass drivers (for propulsion)||0 - ~30||10^4 - 10^8||months||?||6:Model, 32 MJ demoed on ground|
|Tether propulsion||N/A||1 - 10^12||minutes||~ 7||6:Model, 31.7 km demoed in space|
|Air-augmented rocket||5 - 6||0.1 - 10^7||seconds-minutes||> 7?||6:Prototype demoed on ground|
|Liquid air cycle engine||4.5||10^3 - 10^7||seconds-minutes||?||6:Prototype demoed on ground|
|Pulsed inductive thruster (PIT)||10 - 80||20||months||?||5:Component validated in vacuum|
|Variable Specific Impulse Magnetoplasma Rocket (VASIMR)||10 - 300 [citation needed]||40 - 1,200 [citation needed]||days - months||> 100||5:Component, 200 kW validated in vacuum|
|Magnetic field oscillating amplified thruster||10 - 130||0.1 - 1||days - months||> 100||5:Component validated in vacuum|
|Solar thermal rocket||7 - 12||1 - 100||weeks||> ~ 20||4:Component validated in lab|
|Radioisotope rocket||7 - 8 [citation needed]||1.3 - 1.5||months||?||4:Component validated in lab|
|Nuclear electric rocket (as electric propulsion method used)||Variable||Variable||Variable||?||4:Component, 400 kW validated in lab|
|Orion Project (near-term nuclear pulse propulsion)||20 - 100||10^9 - 10^12||several days||~ 30 - 60||3:Validated, 900 kg proof-of-concept|
|Space elevator||N/A||N/A||indefinite||> 12||3:Validated proof-of-concept|
|Reaction Engines SABRE||30/4.5||0.1 - 10^7||minutes||9.4||3:Validated proof-of-concept|
|Magnetic sails||145 - 750:Wind||Mg70/40||indefinite||?||3:Validated proof-of-concept|
|Mini-magnetospheric plasma propulsion||200||~1 N/kW||months||?||3:Validated proof-of-concept|
|Beam-powered/laser (as propulsion method powered by beam)||Variable||Variable||Variable||?||3:Validated, 71 m proof-of-concept|
|Launch loop/Orbital ring||N/A||~10^4||minutes||>> 11 - 30||2:Technology concept formulated|
|Nuclear pulse propulsion (Project Daedalus' drive)||20 - 1,000||10^9 - 10^12||years||~ 15,000||2:Technology concept formulated|
|Gas core reactor rocket||10 - 20||10^3 - 10^6||?||?||2:Technology concept formulated|
|Nuclear salt-water rocket||100||10^3 - 10^7||half hour||?||2:Technology concept formulated|
|Fission sail||?||?||?||?||2:Technology concept formulated|
|Fission-fragment rocket||15,000||?||?||?||2:Technology concept formulated|
|Nuclear photonic rocket||299,790||10^-5 - 1||years-decades||?||2:Technology concept formulated|
|Fusion rocket||100 - 1,000 [citation needed]||?||?||?||2:Technology concept formulated|
|Antimatter catalyzed nuclear pulse propulsion||200 - 4,000||?||days-weeks||?||2:Technology concept formulated|
|Antimatter rocket||10,000 - 100,000 [citation needed]||?||?||?||2:Technology concept formulated|
|Bussard ramjet||2.2 - 20,000||?||indefinite||~ 30,000||2:Technology concept formulated|
|Gravitoelectromagnetic toroidal launchers||299,790:GEM [clarification needed]||?||?||< 299,790 [citation needed]||1:Basic principles observed & reported|
Spacecraft propulsion systems are often first statically tested on the Earth's surface, within the atmosphere, but many systems require a vacuum chamber to test fully. Rockets are usually tested at a rocket engine test facility well away from habitation and other buildings for safety reasons. Ion drives are far less dangerous and require much less stringent safety measures; usually only a largish vacuum chamber is needed.
Famous static test locations can be found at Rocket Ground Test Facilities
Some systems cannot be adequately tested on the ground and test launches may be employed at a Rocket Launch Site.
See also
- Interplanetary travel
- Interstellar travel
- List of aerospace engineering topics
- List of rockets
- Magnetic sail
- Orbital maneuver
- Orbital mechanics
- Plasma propulsion engine
- Pulse detonation engine
- Rocket engine nozzles
- Solar sail
- Specific impulse
- Tsiolkovsky rocket equation
- Stochastic electrodynamics
- ^ With things moving around in orbits and nothing staying still, the question may be quite reasonably asked, stationary relative to what? The answer is for the energy to be zero (and in the absence of gravity which complicates the issue somewhat), the exhaust must stop relative to the initial motion of the rocket before the engines were switched on. It is possible to do calculations from other reference frames, but consideration for the kinetic energy of the exhaust and propellant needs to be given. In Newtonian mechanics the initial position of the rocket is the centre of mass frame for the rocket/propellant/exhaust, and has the minimum energy of any frame.
- Hess, M.; Martin, K. K.; Rachul, L. J. (February 7, 2002). "Thrusters Precisely Guide EO-1 Satellite in Space First". NASA. Archived from the original on 2007-12-06. Retrieved 2007-07-30.
- Phillips, Tony (May 30, 2000). "Solar S'Mores". NASA. Retrieved 2007-07-30.
- Olsen, Carrie (September 21, 1995). "Hohmann Transfer & Plane Changes". NASA. Archived from the original on 2007-07-15. Retrieved 2007-07-30.
- Staff (April 24, 2007). "Interplanetary Cruise". 2001 Mars Odyssey. NASA. Retrieved 2007-07-30.[dead link]
- Doody, Dave (February 7, 2002). "Chapter 4. Interplanetary Trajectories". Basics of Space Flight (NASA JPL). Retrieved 2007-07-30.
- Hoffman, S. (August 20–22, 1984). "A comparison of aerobraking and aerocapture vehicles for interplanetary missions". AIAA and AAS, Astrodynamics Conference. Seattle, Washington: American Institute of Aeronautics and Astronautics. pp. 25 p. Retrieved 2007-07-31.
- Anonymous (2007). "Basic Facts on Cosmos 1 and Solar Sailing". The Planetary Society. Retrieved 2007-07-26.
- Rahls, Chuck (December 7, 2005). "Interstellar Spaceflight: Is It Possible?". Physorg.com. Retrieved 2007-07-31.
- Zobel, Edward A. (2006). "Summary of Introductory Momentum Equations". Zona Land. Retrieved 2007-08-02.
- Benson, Tom. "Guided Tours: Beginner's Guide to Rockets". NASA. Retrieved 2007-08-02.
- equation 19-1 Rocket propulsion elements 7th edition- Sutton
- Choueiri, Edgar Y. (2004). "A Critical History of Electric Propulsion: The First 50 Years (1906–1956)". Journal of Propulsion and Power 20 (2): 193–203. doi:10.2514/1.9245.
- Drachlis, Dave (October 24, 2002). "NASA calls on industry, academia for in-space propulsion innovations". NASA. Retrieved 2007-07-26.
- NASA's Nanosail-D Becomes the First Solar Sail Spacecraft to Orbit the Earth | Inhabitat - Green Design Will Save the World
- Amos, Jonathan (2010-03-26). "Tiny cube will tackle space junk". BBC News.
- King-Hele, Desmond (1987). Satellite orbits in an atmosphere: Theory and application. Springer. ISBN 978-0-216-92252-5.
- Tsiotras, P.; Shen, H.; Hall, C. D. (2001). "Satellite attitude control and power tracking with energy/momentum wheels". Journal of Guidance, Control, and Dynamics 43 (1): 23–34. doi:10.2514/2.4705. ISSN 0731-5090.
- Dykla, J. J.; Cacioppo, R.; Gangopadhyaya, A. (2004). "Gravitational slingshot". American Journal of Physics 72 (5): 619–000. Bibcode:2004AmJPh..72..619D. doi:10.1119/1.1621032.
- Anonymous (2006). "The Sabre Engine". Reaction Engines Ltd. Retrieved 2007-07-26.
- Harada, K.; Tanatsugu, N.; Sato, T. (1997). "Development Study on ATREX Engine". Acta Astronautica 41 (12): 851–862. doi:10.1016/S0094-5765(97)00176-8.
- ESA Portal – ESA and ANU make space propulsion breakthrough
- Hall effect thrusters have been used on Soviet/Russian satellites for decades.
- A Xenon Resistojet Propulsion System for Microsatellites (Surrey Space Centre, University of Surrey, Guildford, Surrey)
- Alta - Space Propulsion, Systems and Services - Field Emission Electric Propulsion
- Young Engineers' Satellite 2
- NASA GTX
- The PIT MkV pulsed inductive thruster
- Pratt & Whitney Rocketdyne Wins $2.2 Million Contract Option for Solar Thermal Propulsion Rocket Engine (Press release, June 25, 2008, Pratt & Whitney Rocketdyne)
- "Operation Plumbbob". July 2003. Retrieved 2006-07-31.
- Brownlee, Robert R. (June 2002). "Learning to Contain Underground Nuclear Explosions". Retrieved 2006-07-31.
- PSFC/JA-05-26:Physics and Technology of the Feasibility of Plasma Sails, Journal of Geophysical Research, September 2005
- NASA Beginner's Guide to Propulsion
- NASA Breakthrough Propulsion Physics project
- Rocket Propulsion
- Journal of Advanced Theoretical Propulsion
- Different Rockets
- Earth-to-Orbit Transportation Bibliography
- Spaceflight Propulsion - a detailed survey by Greg Goebel, in the public domain
- Rocket motors on howstuffworks.com
- Johns Hopkins University, Chemical Propulsion Information Analysis Center
- Tool for Liquid Rocket Engine Thermodynamic Analysis
- NASA Jet Propulsion Laboratory
- Smithsonian National Air and Space Museum's How Things Fly website | http://en.wikipedia.org/wiki/Spacecraft_propulsion | 13 |
82 | Soil Compaction Handbook
Copyright © Multiquip Inc
Soil compaction is defined as the method of mechanically increasing the density of soil. In construction, this is a significant part of the building process. If performed improperly, settlement of the soil could occur and result in unnecessary maintenance costs or structure failure.
Almost all types of building sites and construction projects utilize
mechanical compaction techniques.
Soil is formed in place or deposited by various forces of nature - such as glaciers, wind, lakes and rivers - residually or organically.
Following are important elements in soil compaction:
- Soil type
- Soil moisture content
- Compaction effort required
There are five principal reasons to compact soil:
- Increases load-bearing capacity
- Prevents soil settlement and frost damage
- Provides stability
- Reduces water seepage, swelling and contraction
- Reduces settling of soil
Types of Compaction
There are four types of compaction effort on soil or asphalt:
- Vibration
- Impact
- Kneading
- Pressure
These different types of effort are found in the two principal types of compaction force: static and vibratory.
Static force is simply the deadweight of the machine, applying
downward force on the soil surface, compressing the soil particles.
The only way to change the effective compaction force is by adding or
subtracting the weight of the machine. Static compaction is confined
to upper soil layers and does not reach any appreciable depth.
Kneading and pressure are two examples of static compaction.
Vibratory force uses a mechanism, usually engine-driven, to
create a downward force in addition to the machine's static weight.
The vibrating mechanism is usually a rotating eccentric weight or
piston/spring combination (in rammers). The compactors deliver a
rapid sequence of blows (impacts) to the surface, thereby affecting the
top layers as well as deeper layers. Vibration moves through the
material, setting particles in motion and moving them closer together for
the highest density possible. Based on the materials being
compacted, a certain amount of force must be used to overcome the cohesive
nature of particular particles.
Results of Poor Compaction
Both illustrations above show the
result of improper compaction and how proper compaction can ensure a
longer structural life.
Soil Types and Conditions
Every soil type behaves differently with respect to maximum density and optimum
moisture. Therefore, each soil type has its own unique requirements
and controls both in the field and for testing purposes. Soil types
are commonly classified by grain size, determined by passing the
soil through a series of sieves to screen or separate the different grain
sizes. Soil classification is categorized into 15
groups, a system set up by AASHTO (American Association of State Highway
and Transportation Officials). Soils found in nature are almost
always a combination of soil types. A well-graded soil
consists of a wide range of particle sizes with the smaller particles
filling voids between larger particles. The result is a dense
structure that lends itself well to compaction. A soil's makeup
determines the best compaction method to use.
There are three basic soil groups:
- Cohesive
- Granular
- Organic (this soil is not suitable for compaction and will not be discussed here)
Cohesive soils have the smallest particles. Clay has a particle size range of
.00004" to .002". Silt ranges from .0002" to
.003". Clay is used in embankment fills and retaining pond
Cohesive soils are dense and tightly bound
together by molecular attraction. They are plastic when wet and can
be molded, but become very hard when dry. Proper water content,
evenly distributed, is critical for proper compaction. Cohesive
soils usually require a force such as impact or pressure. Silt has a
noticeably lower cohesion than clay. However, silt is still heavily
reliant on water content.
Granular soils range in particle size from .003" to .08" (sand) and
.08" to 1.0" (fine to medium gravel). Granular soils are
known for their water-draining properties.
Sand and gravel obtain maximum density in either
a fully dry or saturated state. Testing curves are relatively flat
so density can be obtained regardless of water content.
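As a toy illustration of the grain-size boundaries quoted above (which overlap slightly for clay and silt), the sketch below classifies a particle by its upper-bound diameter in inches. This is only a rough guide and an assumption of the sketch; real soil classification also depends on gradation and plasticity testing.

```python
def classify_particle(diameter_inches):
    """Rough grain-size classification using the upper bounds quoted above (inches).
    Real classification also depends on gradation and plasticity, so treat this
    only as a quick illustration."""
    if diameter_inches <= 0.002:
        return "clay"
    if diameter_inches <= 0.003:
        return "silt"
    if diameter_inches <= 0.08:
        return "sand"
    if diameter_inches <= 1.0:
        return "gravel"
    return "cobble or larger"

print(classify_particle(0.0005), classify_particle(0.05), classify_particle(0.5))
# clay sand gravel
```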
The tables that follow give a basic indication of
soils used in particular construction applications.
Desirability of Soils As Compacted Fill
Guide to Soil Types

|Soil type||What to look for||Plasticity||Strength|
|Granular soils, fine sands and silts||Individual grains can be seen. Feels gritty when rubbed between fingers. When water and soil are shaken in palm of hand, they mix; when shaking is stopped, they separate.||Little or no plasticity||Little or no cohesive strength when dry. Soil sample will crumble easily.|
|Cohesive soils, mixes and clays||Grains cannot be seen by naked eye. Feels smooth and greasy when rubbed between fingers. When water and soil are shaken in palm of hand, they will not mix.||Plastic and sticky. Can be rolled.||Has high strength when dry. Crumbles with difficulty. Slow saturation in water.|
Effect of Moisture
The response of soil to moisture is very important, as the soil must carry the
load year-round. Rain, for example, may transform soil into a
plastic state or even into a liquid. In this state, soil has very
little or no load-bearing ability.
Moisture vs. Soil Density
Moisture content of the soil is vital to proper
compaction. Moisture acts as a lubricant within soil, sliding the
particles together. Too little moisture means inadequate compaction
- the particles cannot move past each other to achieve density. Too
much moisture leaves water-filled voids and subsequently weakens the
load-bearing ability. The highest density for most soils is at a
certain water content for a given compaction effort. The drier the
soil, the more resistant it is to compaction. In a water-saturated
state the voids between particles are partially filled with water,
creating an apparent cohesion that binds them together. This
cohesion increases as the particle size decreases (as in clay-type soils).
Soil Density Tests
To determine if proper soil compaction is
achieved for any specific construction application, several methods were
developed. The most prominent by far is soil density.
Soil testing accomplishes the following:
- Measures density of soil for comparing the
degree of compaction vs. specs
- Measures the effect of moisture on soil density
- Provides a moisture density curve identifying optimum moisture
Tests to determine optimum moisture content are
done in the laboratory. The most common is the Proctor Test, or
Modified Proctor Test. A particular soil needs to have an ideal (or
optimum) amount of moisture to achieve maximum density. This is
important not only for durability, but will save money because less
compaction effort is needed to achieve the desired results.
The Hand Test
A quick method of determining
moisture is known as the "Hand Test". Pick
up a handful of soil. Squeeze it in your hand. Open
your hand. If the soil
is powdery and will not retain the shape made by
your hand, it is too dry. If it shatters when dropped, it is too
dry. If the soil is
moldable and breaks into only a couple of
pieces when dropped, it has the right amount of moisture for proper compaction. If
the soil is plastic in your hand, leaves traces of moisture on your fingers and
stays in one piece when dropped, it has too much moisture for compaction.
Proctor Test (ASTM D1557-91)
The Proctor, or Modified Proctor Test, determines
the maximum density of a soil needed for a specific job site. The
test first determines the maximum density achievable for the materials and
uses this figure as a reference. Secondly, it tests the effects of
moisture on soil density. The soil reference value is expressed as a
percentage of density. These values are determined before any
compaction takes place to develop the compaction specifications.
Modified Proctor values are higher because they take into account the higher
densities needed for certain types of construction projects. Test
methods are similar for both tests.
A small soil sample is taken from the jobsite. A standard weight is dropped several times on the soil. The material is weighed
and then oven dried for 12 hours in order to evaluate its water content.
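As a sketch of the arithmetic behind such a test (these are the standard definitions of moisture content and dry density; the numbers are purely illustrative, not from any real sample):

```python
def moisture_content(wet_weight, oven_dry_weight):
    """Moisture content as a fraction of the oven-dry weight."""
    return (wet_weight - oven_dry_weight) / oven_dry_weight

def dry_density(wet_density, moisture):
    """Dry density from wet (bulk) density and moisture content."""
    return wet_density / (1.0 + moisture)

w = moisture_content(wet_weight=4.30, oven_dry_weight=3.90)   # ~0.103, i.e. 10.3%
rho_dry = dry_density(wet_density=129.0, moisture=w)          # ~117 lb per cubic foot
print(f"moisture {w:.1%}, dry density {rho_dry:.1f} lb/cu ft")
```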
Modified Proctor Test
This is similar to the Proctor Test except a heavier hammer is used to
compact the material for greater impact. The test is normally preferred
in testing materials for higher shearing strength.
It is important to know and control the soil
density during compaction. Following are common field tests to
determine on the spot if compaction densities are being achieved.
Field Density Testing
The most common methods - the sand cone, balloon densometer and nuclear gauge - each have trade-offs. Advantages cited for the various methods include a large, deep sample, a direct reading, suitability for open-graded material, the ability to test under pipe haunches, easy re-testing, and running more tests for statistical reliability. Disadvantages cited include the many steps and large area required, halted equipment, the temptation to accept flukes, balloon breakage, small samples, unsuitability for gravel, samples not always retained (or no sample at all), suspect moisture readings, encouraging amateurs, voids under the plate, sand bulking or being compacted, soil pumping, surfaces that are not level, rocks in the path, plastic soils, and required surface preparation.
Sand Cone Test (ASTM D1556-90)
A small hole (6" x 6" deep) is dug in
the compacted material to be tested. The soil is removed and
weighed, then dried and weighed again to determine its moisture
content. A soil's moisture is figured as a percentage. The
specific volume of the hole is determined by filling it with calibrated
dry sand from a jar and cone device. The dry weight of the soil
removed is divided by the volume of sand needed to fill the hole.
This gives us the density of the compacted soil in lbs per cubic
foot. This density is compared to the maximum Proctor density
obtained earlier, which gives us the relative density (percent compaction) of the soil that was just tested.
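A sketch of that calculation with purely illustrative numbers; the calibrated sand density and the Proctor maximum below are assumed values, not from any particular job.

```python
def sand_cone_dry_density(wet_soil_lb, oven_dry_soil_lb, sand_used_lb, sand_density_pcf):
    """In-place dry density (lb per cubic foot) and moisture from sand cone test data."""
    hole_volume_cu_ft = sand_used_lb / sand_density_pcf
    moisture = (wet_soil_lb - oven_dry_soil_lb) / oven_dry_soil_lb
    return oven_dry_soil_lb / hole_volume_cu_ft, moisture

dry_pcf, w = sand_cone_dry_density(4.6, 4.2, 3.5, 100.0)
proctor_max_pcf = 122.0                          # assumed laboratory maximum
relative_compaction = 100.0 * dry_pcf / proctor_max_pcf
print(f"dry density {dry_pcf:.0f} pcf, moisture {w:.1%}, "
      f"{relative_compaction:.0f}% of Proctor maximum")
```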
Nuclear Density (ASTM D2922-91)
Nuclear Density meters are a quick and fairly
accurate way of determining density and moisture content. The meter
uses a radioactive isotope source (Cesium 137) at the soil surface
(backscatter) or from a probe placed into the soil (direct
transmission). The isotope source gives off photons (usually Gamma
rays) which radiate back to the meter's detectors on the bottom of the
unit. Dense soil absorbs more radiation than loose soil and the
readings reflect overall density. Water content (ASTM D3017) can
also be read, all within a few minutes. A relative Proctor density
can then be determined by comparing the laboratory maximum with the compaction results from the test.
Modulus (soil stiffness)
This field-test method is a very recent
development that replaces soil density testing. Soil stiffness is
the ratio of force-to-displacement. Testing is done by a machine
that sends vibrations into the soil and then measures the deflection of
the soil from the vibrations. This is a very fast, safe method of
testing soil stiffness. Soil stiffness is the desired engineering
property, not just dry density and water content. This method is
currently being researched and tested by the Federal Highway Administration.
The desired level of compaction is best achieved
by matching the soil type with its proper compaction method. Other
factors must be considered as well, such as compaction specs and job site conditions.
- Cohesive soils - since clay is cohesive, its
particles stick together.*
Therefore, a machine with a high
impact force is required to ram the soil and force the air out,
arranging the particles. A rammer is the best choice, or
vibratory roller if higher production is needed.
*The particles must be sheared to compact.
- Granular soils - since granular soils are not cohesive and the particles
require a shaking or vibratory action to move them, vibratory plates
(forward travel) are the best choice.
Reversible plates and smooth
drum vibratory rollers are appropriate for production work.
Granular soil particles respond to different frequencies (vibrations)
depending on particle size. The smaller the particle, the higher the
frequency and the greater the compaction force required.
Normally, soils are mixtures
of clay and granular materials, making the selection of compaction
equipment more difficult. It is a good idea to choose the machine
appropriate for the larger percentage of the mixture. Equipment
testing may be required to match the best machine to the job.
Asphalt is considered granular due to its base of mixed aggregate sizes
(crushed stone, gravel, sand and fines) mixed with bitumen binder (asphalt
cement). Consequently, asphalt must be compacted with pressure
(static) or vibration.
Compaction Machine Characteristics
Two factors are important in determining the type of force a compaction
machine produces: frequency and amplitude.
Frequency is the speed at which an
eccentric shaft rotates or the machine jumps. Each compaction
machine is designed to operate at an optimum frequency to supply
the maximum force. Frequency is usually given in terms of vibration
per minute (vpm).
Amplitude (or normal amplitude) is the
maximum movement of a vibrating body from its axis in one direction.
Double amplitude is the maximum distance a
vibrating body moves in both directions from its axis. The apparent
amplitude varies for each machine under different job site
conditions. The apparent amplitude increases as the material becomes
more dense and compacted.
Lift Height and Machine Performance
Soil can also be
over-compacted if the compactor makes too many passes (a pass is the
machine going across a lift in one direction). Over-compaction is
like constantly hitting concrete with a sledgehammer. Cracks will
eventually appear, reducing density. This is a waste of man-hours
and adds unnecessary wear to the machine.
Lift height (the depth of the soil layer) is an important factor that affects machine performance and compaction cost. Vibratory and rammer-type
equipment compact soil in two directions: from top to bottom and
bottom to top. As the machine hits the soil, the impact travels to
the hard surface below and then returns upward. This sets all
particles in motion and compaction takes place.
As the soil becomes compacted, the impact has a
shorter distance to travel. More force returns to the machine,
making it lift off the ground higher in its stroke cycle. If the
lift is too deep, the machine will take longer to compact the soil and a
layer within the lift will not be compacted.
A word about meeting job site
specifications. Generally, compaction performance parameters are
given on a construction project in one of two ways:
- Method Specification - detailed instructions
specify machine type, lift depths, number of passes, machine speed and
moisture content. A "recipe" is given as part of the job
spec to accomplish the compaction needed. This method is outdated,
as machine technology has far outpaced common method specifications.
- End-result Specification - engineers indicate
final compaction requirements, thus giving the contractor much more
flexibility in determining the best, most economical method of meeting the
required specs. Fortunately, this is the trend, allowing the
contractor to take advantage of the latest technology available.
Rammers deliver a high impact force (high
amplitude) making them an excellent choice for cohesive and semi-cohesive
soils. Frequency range is 500 to 750 blows per minute. Rammers
get compaction force from a small gasoline or diesel engine powering a
large piston set with two sets of springs. The rammer is inclined at
a forward angle to allow forward travel as the machine jumps.
Rammers cover three types of compaction: impact, vibration and kneading.
Vibratory plates are low amplitude and high
frequency, designed to compact granular soils and asphalt. Gasoline
or diesel engines drive one or two eccentric weights at a high speed to
develop compaction force. The resulting vibrations cause forward
motion. The engine and handle are vibration-isolated from the
vibrating plate. The heavier the plate, the more compaction force it
generates. Frequency range is usually 2500 vpm to 6000 vpm.
Plates used for asphalt have a water tank and sprinkler system to prevent
asphalt from sticking to the bottom of the base plate. Vibration is
the principal compaction effect.
Reversible Vibratory Plates
In addition to some of the standard vibratory
plate features, reversible plates have two eccentric weights that allow
smooth transition for forward or reverse travel, plus increased compaction
force as the result of dual weights. Due to their weight and force,
reversible plates are ideal for semi-cohesive soils.
A reversible plate is possibly the best compaction buy
dollar for dollar. Unlike standard plates, the reversible's forward
travel may be stopped and the machine will maintain its force for spot compaction.
Rollers are available in several categories:
walk-behind and ride-on, which are available as smooth drum, padded drum,
and rubber-tired models; and are further divided into static and vibratory types.
A popular design for many years, smooth-drum
machines are ideal for both soil and asphalt. Dual steel drums are
mounted on a rigid frame and powered by gasoline or diesel engines.
Steering is done manually by moving the machine handle. Frequency is
around 4000 vpm and amplitudes range from .018 to .020 inch. Vibration is
provided by eccentric shafts placed in the drums or mounted on the frame.
Padded rollers are also known as trench rollers
due to their effective use in trenches and excavations. These
machines feature hydraulic or hydrostatic steering and operation.
Powered by diesel engines, trench rollers are built to withstand the
rigors of confined compaction. Trench rollers are either skid-steer
or equipped with articulated steering. Operation can be by manual or
remote control. Large eccentric units provide high impact force and
high amplitude (for rollers) that are appropriate for cohesive
soils. The drum pads provide a kneading action on soil. Use
these machines for high productivity.
Configured as static-wheel rollers, ride-ons are
used primarily for asphalt surface sealing and finishing work in the
larger (8 to 15 ton) range. Small ride-on units are used for patch
jobs with thin lifts. The trend is toward vibratory rollers.
Tandem vibratory rollers are usually found with drum widths of 30" up
to 110", with the most common being 48". Suitable for
soil, sub-base and asphalt compaction, tandem rollers use the dynamic
force of eccentric vibrator assemblies for high production work.
Single-drum machines feature a single vibrating drum with pneumatic drive
wheels. The drum is available as smooth for sub-base or rock fill,
or padded for soil compaction. Additionally, a ride-on version of
the pad foot trench roller is available for very high productivity in
confined areas, with either manual or remote control operation.
These rollers are equipped with 7 to 11 pneumatic
tires with the front and rear tires overlapping. A static roller by
nature, compaction force is altered by the addition or removal of weight
added as ballast in the form of water or sand. Weight ranges vary
from 10 to 35 tons. The compaction effort is pressure and kneading,
primarily with asphalt finish rolling. Tire pressures on some
machines can be decreased while rolling to adjust ground contact pressure
for different job conditions.
Safety and General Operation
As with all construction equipment, there are many safety practices that should be
followed while using compaction equipment. While this instructional
guide is not designed to cover all aspects of job site safety, we wish to
mention some of the more obvious items in regard to compaction
equipment. Ideally, equipment operators should familiarize
themselves with all of their company's safety regulations, as well as any
OSHA, state agency or local agency regulations pertaining to job
safety. Basic personal protection, consisting of durable work
gloves, eye protection, ear protection, approved hard hat and work
clothes, should be standard issue on any job and available for immediate use.
In the case of walk-behind compaction equipment,
additional toe protection devices should be available, depending on
applicable regulations. All personnel operating powered compaction
equipment should read all operating and safety instructions for each piece
of equipment. Additionally, training should be provided so that the
operator is aware of all aspects of operation.
No minors should be allowed to operate
construction equipment. No operator should run construction
equipment when under the influence of medication, illegal drugs or
alcohol. Serious injury or death could occur as a result of improper
use or neglect of safety practices and attitudes. This applies to
both the new worker as well as the seasoned professional.
Trench work brings a new set of
safety practices and regulations for the compaction equipment
operator. This section does not intend to cover the regulations
pertaining to trench safety (OSHA Part 1926, Subpart P). The
operator should have knowledge of what is required before
compacting in a trench or confined area. Be certain a
"competent person" (as defined by OSHA Part 1926.650 revised
July 1, 1998) has inspected the trench and follows OSHA guidelines for
inspection during the duration of the job. Besides the obvious
danger of a trench cave-in, the worker must also be protected from falling
objects. Unshored (or shored) trenches can be compacted with the use
of remote control compaction equipment. This allows the operator to
stay outside the trench while operating the equipment.
THE PRINCIPLE OF CIRCLON SYNCHRONICITY
The fundamental assumption of Circlon Synchronicity is that protons and electrons exist in the universe and they are exactly what we measure them to be.
Circlon Synchronicity is a conceptual model of mass, space, time and gravity that is based on complementary principles of measurement that describe the interactions of two fundamental mechanical particles of matter moving with three dimensions of momentum within three dimensions of time. Circlon Synchronicity is organized and explained within nine basic principles of physical measurement.
Fundamental Particles of Matter
Measurements show electrons and protons are the only absolutely stable and eternal primary particles of matter and that they both have a circlon shape. Photons and neutrinos are the only stable secondary fundamental particles of matter. The structures of these particles were once part of the structures of electrons and protons.
A photon is a matter/antimatter pair made from equal pieces of an electron and proton. A photon can exist as a linear mass particle moving at the speed of light or as a circlon shaped mass particle forming a mechanical link between a proton and electron within an atom.
A neutrino/antineutrino pair is broken off when an electron is captured by a proton and a hydrogen atom is transformed into a neutron. The neutrino is emitted out into the void but the antineutrino stays within the neutron as a “bolt” to hold the electron within the proton. A neutrino is a piece of a proton and an antineutrino is a piece of an electron. In a neutron decay, the electron and proton split apart when the antineutrino holding them together is emitted into the void.
All four of these basic stable particles of matter are complex coil structures composed of cosmic string that has mass and is wound into circlon shapes. The whole particle zoo, both stable and unstable, from the neutron to the uranium atom, are all composites of two or more of these four fundamental circlon shaped particles and can, in principle, be broken down into their individual components.
Three Dimensions of Momentum
Momentum is the absolute and conserved one dimensional quantity of motion for a single body of mass.
Angular momentum is the relative and conserved two dimensional rotational motion within the structure of matter and photons and between any two external bodies of mass.
Gravitational momentum is the absolute and conserved three dimensional upward motion of the earth’s surface and all other bodies of mass.
The Three Dimensions of Time
Clocks measure time with the three distinct and separate time flows of linear photon inertial time, rotational inertial time and gravitational time.
Nine Basic Principles of Physical Measurement
The evolution of the Living Universe is defined by interactions between four particles of matter with three dimensions of motion and three dimensions of time. This living transformation evolved in terms of the following nine principles of non-field experimental measurement.
1. The Principle of Absolute Photon Rest
All photons are measured to move at exactly C relative to the same absolute inertial reference frame of photon rest.
All photons move at C relative to photon rest and at C ± V relative to moving bodies. These relative motions between photons and moving bodies always produce Doppler shifts in the photons.
All other particles and bodies of matter have an exact velocity relative to the photon rest frame that is less than C. All bodies have only rest mass when located at photon rest.
The kinetic energy (E = MV²/2) of a body’s motion relative to photon rest gives the body a quantity of kinetic mass (M = E/C²). All electrons or protons are measured to have identical masses because they were all created at photon rest and the mass increases caused by their subsequent motions are equal for all particles in a given inertial reference frame. All moving protons or electrons have a different kinetic mass that changes with each change in motion but they all have identical masses when brought together in the same frame.
When a photon is emitted from a moving atom, it is emitted at photon rest. The photon is red or blue shifted by the motion of the atom but maintains its exact velocity of C relative to photon rest. The values of mass and wavelength for the photon remain unchanged as it travels through space but it will always be measured to have different values in any moving frame. A photon’s measured energy is increased or decreased by the relative kinetic energy of a moving observer.
Measurements of Doppler shifts within the vast numbers of the 2.7° Cosmic Blackbody Radiation photons show that our solar system is moving at about 375 km/sec relative to the common position of rest that is shared by all matter and all photons.
2. The Principle of Gravitational Expansion
Gravity is an effect caused by the gradual and constant expansion of matter.
Measurements show that the increasing dimensions of large bodies of matter produce a three dimensional outward acceleration at their surfaces. This measurement is actually a deceleration that slows the surface of the body to a constant three dimensional upward surface velocity. The constant for gravitational force is measured to be the outward velocity of 9.2116 × 10⁻¹⁴ m/s at the Bohr Radius of the hydrogen atom. At the surface of the Earth this constant upward velocity is 11,189 m/s (about 11.2 km/sec). All of gravity’s phenomena can be explained in terms of these measurements.
Just as the direction of linear force is one dimensional and the direction of centripetal force is two dimensional, the true direction of gravitational force is three dimensional. Although the idea of gravitational force has been around for hundreds of years, until now, no one has ever allowed themselves to consider the true direction of this force. Gravity is a three directional force of matter rather than a one dimensional force between matter.
When we measure the force of gravity with an accelerometer, we find that the direction of gravitational force is not down, as is commonly assumed, but rather up because the direction of the measurable acceleration is up. Even though a falling body may appear to the casual observer to accelerate downward, there is no way that this conclusion can be verified with an accelerometer. All that we can actually measure is the surface of the Earth accelerating upward toward a stationary “falling” body. The true mechanism of gravity isn’t very difficult to figure out. When we just take simple measurements of gravity we find that the cause of gravity is not an “attractive force” or a “curvature of space”. The true precisely measured cause of the Earth’s gravity is the upward acceleration of the Earth’s surface. The only possible cause for this upward motion of the Earth’s surface is a constant expansion of the matter within the Earth’s interior. This constant upward acceleration can be translated into a constant upward surface velocity that is equal to escape velocity.
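As a numerical check on the figure quoted above, the upward surface velocity that the text identifies with escape velocity can be computed from the standard Newtonian expression v = √(2GM/R). The Earth mass and radius below are ordinary textbook values and are assumptions of this sketch, not taken from the text.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m

def escape_velocity(mass_kg, radius_m):
    """Newtonian escape velocity, which the text identifies with the
    constant upward surface velocity of an expanding body."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"{v:.0f} m/s  (~11.2 km/s)")   # ~11,186 m/s, close to the quoted 11,189 m/s
```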
The actual mechanics of gravitational expansion are basically the mirror image of the mechanics of Einstein’s General Relativity. In both General Relativity and the principle of gravitational expansion, gravity is the result of changes in the geometry of mass, space and time. In General Relativity gravity is caused by the curvature of space and time and in the Living Universe gravity is caused by the curvature of mass and time.
To put a body in orbit from the surface of Earth, it is necessary to use two forces at right angles to one another. These two forces are quite distinct from one another in that one causes the body to accelerate and the other force decelerates the body at right angles to the other force. The upward force actually decelerates the body to a lower surface velocity on a path away from Earth’s center and the sideways force accelerates the body on a path parallel to Earth’s surface. When the velocities produced by these forces are balanced, the body will travel on a circular path around Earth. A circular orbit is a perfect balance between three velocities: the escape velocity at a point of orbit, the orbital velocity and the upward surface velocity at Earth’s surface. The interaction of these three velocities keeps the body in a circular orbit around the Earth without the need of a “force” to maintain it in the proper position. There are no forces exchanged between the earth and an orbiting body. The combination of the orbiting body’s two velocities balances the upward velocity of Earth’s surface to keep the body in a circular orbit. The orbiting body is simply moving away from the Earth at the same rate that Earth is expanding toward it.
3. The Principle of the Circlon Shape
The mechanical structure of atoms is based on the circlon shape of the proton and electron.
The circlon is a precise and very complex triple torus shape that can for most purposes be pictured in the mind as a hollow donut. Measurements of protons, electrons and hydrogen atoms show that their parameters of mass, energy, wavelength, Bohr radius, fine structure, and radiation spectra require mechanical particles with a circlon shape. Atoms are formed when electrons and protons are held together by the dynamic motions of their circlon shapes. Measurements of the 282 stable atomic nuclear isotopes demonstrate that all of these nuclear structures can be physically assembled from the circlon shapes of the protons, neutrons and mesons.
4. The Principle of Photon Mass
All experimental measurements of photons can be used to determine that they have both a dimensional shape and a mass.
A photon’s kinetic energy is E = MC²/2 + Iω²/2 = MC². Its momentum is p = MC. Its wavelength is λ = 2πIω/MC. Its angular momentum, which has the same value for all photons, is Iω = MλC/2π.
A photon with mass means that there is never a transformation between mass and energy and that both the mass and the energy in the universe are eternal and remain constant and absolutely conserved. Mass and energy are complimentary and inseparable components of matter and photons that coexist together like the two sides of a coin.
Kinetic energy E = MV²/2 is merely a measure of the relative motion between two bodies of mass and does not ever exist separate from bodies of mass.
Photon energy E = MC² is the measure of the absolute motions of the photon relative to two dimensions of absolute space.
The photon is not an “energy particle”. It is a particle of matter with mass and energy. Its energy is composed of two different kinds of kinetic energy: the relative energy of the linear motion of its mass at C and the absolute rotational kinetic energy of the spin of its mass at C.
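A minimal numeric sketch of the photon relations as quoted above (these are the text’s own formulas, not standard textbook ones; the photon mass used is purely illustrative):

```python
C = 2.998e8  # speed of light, m/s

def photon_quantities(photon_mass_kg):
    """Evaluate the relations quoted above: E = MC^2/2 + Iw^2/2 = MC^2 and p = MC."""
    linear_ke = 0.5 * photon_mass_kg * C**2        # MC^2/2, linear motion of the mass at C
    rotational_ke = 0.5 * photon_mass_kg * C**2    # Iw^2/2, equal to the linear term per the text
    total_energy = linear_ke + rotational_ke       # = MC^2
    momentum = photon_mass_kg * C                  # p = MC
    return total_energy, momentum

energy, momentum = photon_quantities(4.0e-36)      # illustrative mass only
print(f"E = {energy:.2e} J, p = {momentum:.2e} kg·m/s")
```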
5. The Principle of Matter and Photon Continuity
The proton and electron are the eternal particles of matter that can unite to produce eternal photons.
The primary intrinsic difference between the Living Universe and the Standard Model of Physics is in the concept of continuity between matter and photons. In the standard model, matter and photons come and go into and out of existence in a continual dance where they transform back and forth into one another. Energy becomes matter and matter becomes energy.
In the Living Universe, matter and energy are eternal and unchanging. Every proton and electron is an equal part of the initial matter-antimatter pair that began the Living Universe. Also, the mass of every photon has its origin with this beginning particle. Both matter and photons are eternal. When a photon travels unchanged across the universe for billions of years, it does not lose its ultimate identity when it is finally absorbed by an atom and becomes a circlon. A circlon is just a stationary photon. It remains intact as the physical link between a proton and electron until it is re-emitted. It may have lost or picked up a little mass from the atom, but it is basically the same photon ready to travel through space for another billion years. In the Living Universe, mass and energy are constant and there is never any transformation between them. An atom may gain or lose mass by the absorption or emission of a photon but total mass of the system remains constant both before and after these interactions.
6. The Principle of 3 Dimensional Time Measurement
Measurements with atomic clocks show that there are three distinct dimensions to the flow of time.
Photon time is one dimensional time based on the speed of light. Light years and pulsar bursts are examples of units of photon time.
Rotational inertial time is time based on the two dimensional rotation of mass and on the conservation of angular momentum. The rotation of the earth, the vibration of the Cesium-133 atom and the ticking of a Harrison chronometer are units of inertial time.
Gravitational time is the very slow absolute time based on the 3 dimensional upward flow of gravitational motion. The ticking of a grandfather clock and the yearly circling of the sun are units of gravitational time.
High speed measurements demonstrating the Lorentz Transformation shows that a body’s mass is increased when it is accelerated and then decreased when it is decelerated relative to the position of photon rest. Careful measurements with atomic clocks can monitor these changes in the absolute values for mass due to either a body’s inertial motion or its gravitational motion. Clock rates must change to conserve angular momentum as a clock’s mass is changed by changes in its absolute momentum.
At a position of rest, all three of the different clocks measuring the three dimensions of time run at the same rate. Their rates diverge when accelerated to high velocity with gravity clocks running faster and inertial clocks running slower. The interchange between these complimentary time flows can be demonstrated by the variations in very accurate clocks put into different orbits. Inertial time clocks are slowed by orbital velocity and are also slowed by gravitational surface velocity. In the low space station orbit, inertial clocks run slower than they do on earth and in the much higher GPS orbit they run faster than sea level clocks.
Gravity pendulum clocks run slower on Mount Everest than they do at the Dead Sea but inertial clocks run faster on Mount Everest and slower at the Dead Sea.
7. The Principle of Mechanical Interactions
There is only one physical interaction in nature and it is the common sense event of one body mechanically touching another. Force equals Mass times Acceleration.
In the Living Universe, there is no “action at a distance”. There are only three measurable dimensions. There is no aether and there are none of the specialized forms of aether called fields. There is no space-time either curved or flat. There are no impinging gravitons or any other kinds of virtual particles. There are no non-material wave interactions. All quantum waves are purely harmonic motions within the physical structure and shape of matter. All interactions between atoms and photons are purely mechanical.
The long accepted five “field interactions” of physics are all bogus, because they are all really just mechanical in nature. There is no “unified field” solution to physics. There is only a mechanical non-field solution.
The strong interaction is a kind of “nuts and bolts” phenomenon that mechanically holds protons and neutrons together.
The weak force is very similar in nature except on a higher level of scale. The weak force is the way that electrons and neutrinos are mechanically held inside of protons to form neutrons.
The electromagnetic force results from the physical touching and the pushing and pulling between the expanding external circlon shaped charge coils of protons and electrons.
Gravity is simply an effect caused by slowly expanding matter. The surface of Earth expands upward and hits stationary “falling” bodies. In the Living Universe, the earth falls up!
The so called Dark Energy interaction is not an outward acceleration. It was simply proposed by cosmologists who didn’t understand that the decreasing mass of the electron caused distant supernova explosions to be less energetic than the supernova explosions of today.
8. The Principle of Electron/Proton Mass Ratio Transformation
The evolution of matter is driven by the gradual decrease in the mass of the electron over cosmological time.
Throughout the history of the Living Universe, the mass of the electron, relative to the proton, has been gradually decreasing, while its size (Compton wavelength and classical electron radius) has been increasing at an inversely proportionate rate. This changing relationship between electron and proton gradually changes the properties and dynamics of the hydrogen atom in particular and of all the other elements in general. These changes in the electron/proton mass and size ratios also cause changes in the “constants” of nature dependent on these ratios such as the Fine Structure constant (α) and the Bohr radius (a₀). As the electron’s mass decreases, the ionization energy of hydrogen increases and the photon spectra of the other elements are shifted to shorter wavelengths.
The Bohr radius decreases and the fine structure increases as the electron’s mass decreases. The Bohr radius is basically the distance between the proton and electron in the hydrogen atom and the fine structure “constant” is the internal ratio between the circlon shaped coil structures of the hydrogen atom. It is these two parameters that determine the physical size and shape of atoms as well as the wavelengths of the photons that they emit.
Gravitational time is itself a duality between the slightly different gravitational motion rates between electrons and protons. Measurements deep into the cosmos show that photons emitted by atoms in the distant past have much longer wavelengths than the photons emitted by those same atoms today. The cause of this phenomenon is that the electron has been gradually growing in size and losing mass over cosmological time. In the past, these more massive electrons caused atoms to emit their characteristic spectrum of photons with longer wavelengths. The complete evolution of the Living Universe can be demonstrated by extrapolating the changing mass of the electron back to that point in the past to where the masses of the proton and electron were equal. Using the circlon shape as a template, it is possible to calculate the exact point in the Living Universe when the 2.7° CBR was formed as a fundamental constant of matter.
9. The Principle of Matter/Antimatter Charge Conservation
The universe contains equal numbers of particles of matter and antimatter with equal numbers of positive and negative charges.
Experiments show that particles of positively charged matter and negatively charged particles of antimatter are always either produced or destroyed together. This means that the number of positive charges in the universe is exactly equal to the number of negative charges. It then follows that the protons were once the antiparticle to the electrons. When particles and antiparticles annihilate with one another they transform into a pair of photons that are each composed of one-half particle and one-half antiparticle. The photon’s two mass components can then divide into a matter-antimatter pair with opposite charges.
The Living Universe is made up exclusively of particle/antiparticle pairs. The universe always contained at least one particle/antiparticle pair. The universe of today is the result of first the reproduction and then the continuing evolution of that original pair. For every particle with a positive charge there is an antiparticle with a negative charge. In high energy events, particles can be created or destroyed but only in conjunction with their antiparticles. This process causes the number of protons and electrons in the universe to slightly fluctuate back and forth from their exact creation number of 2²⁵⁷. In the same way, the numbers of positive and negative charges in the universe stay very close to 2²⁵⁷ and are always in equal numbers.
Today, in the Living Universe, the particles are the protons and their antiparticles are the electrons. Neutrons and hydrogen atoms are both examples of particle-antiparticle pairs. Both are made up of different configurations of a proton and an electron. These two bodies are the basic building blocks of all the elements’ stable and unstable isotopes.
The Measurable Parameters of Space, Time, Mass and Energy
The measured quantities for mass, space, time, and energy, within the universe remain separate and constant over the passage of time.
Space is the negative reality. If space really did have a positive existence, what would we expect to find in its absence?
Space is infinite and does not bend. It is not a “zero point quantum vacuum” with its virtual particles winking in and out. To give it a dimension you could say that it is nothing cubed. The only tangible property of space is its infinity. The very most that you can say about space is that it is an idea that is impossible to imagine because of its infinity. The concept of three dimensional space can be very useful in measuring and calculating, but in reality, space can only be perceived as an infinite number of one dimensional momentum vectors. Local two dimensional space can be perceived as angular momentum. The only three dimensional perception of space is the gravitational expansion of matter.
Times are three different ideas for measuring momenta.
While space is a tangible void, time has no reality outside of a consciousness that is actively perceiving inertial motion. Time is simply the idea used to quantify the constant relationship between Mass and Space called motion.
A body in motion is carried along by its own momentum. There is no substance or field called “time” that pushes it along. It is momentum, and not “time”, that takes us from the past to the future. Clocks do not measure “time”. They monitor the conserved relationship (T = MS/p) between mass and space called momentum (p = MS/T). We always use momentum to measure time and there can be no time without momentum. To record intervals of “time”, a clock uses one of three types of momentum. A clock measures either the momentum of photons, angular momentum or gravitational momentum. To measure time accurately, a clock must only monitor one type of momentum. Time does not exist as a single physical entity. The idea of time is used to quantify momentum and with three types of momentum there are three different and separate types of time flows.
Mass is the positive reality. We need no theory for mass. It is just what we measure.
Mass is the primary component of reality. It can only be quantified in terms of the negative reality of space and the virtual reality of time. The fundamental measure of mass in the universe is the photon.
Whereas space has no properties, mass is the only property. Mass is defined as resistance to a change in motion and it is the measurement of that resistance through force that defines space and time. Mass is a property of matter. Matter is mass with a shape. Too little can not be said about mass. Mass just is. It is the only metaphysical assumption that need be made to explain physics. We need not assume a physical existence for space or time but we must assume the physical existence of mass. The total mass of the universe is the same today as it has always been.
Momentum is the motion of mass and is conserved in all interactions.
Momentum (p) is mass times space divided by time (p = MS/T). Any time a force (F = MS/T²) changes a body’s momentum an equal and opposite quantity of momentum is also changed. Momentum is conserved. Whenever a force is applied to change a body’s momentum, an equal quantity of momentum is applied to the force. A bullet’s momentum is equal to the momentum of the rifle’s recoil. The individual equal and opposite momenta of two lumps of clay moving toward one another cancel each other out when they collide and remain stuck together at rest with zero linear momentum. That momentum has been converted to the angular momentum of the heat generated by the collision.
All bodies in the universe have an exact and absolute quantity of momentum relative to photon rest. When we measure a body’s momentum, it is relative to our surroundings but each body has a hidden absolute momentum that is measured relative to photon rest. The energy inherent in this motion has mass but there is no way to measure the mass or energy of a moving body except by stopping it. We can’t stop it, because when we change its motion we can’t tell whether we are slowing it down or speeding it up.
Momentum is not to be confused with energy. Energy is an absolute component of momentum but momentum and energy are not proportional. A bullet and a recoiling rifle each have the same momentum but the bullet has far more energy than the rifle. The equal and opposite momenta of the colliding lumps of clay disappear to zero but the quantity of energy inherent in those momenta still exists as heat within the new lump of clay.
Energy is the substance of momentum. The energy inherent in a body’s momentum can be measured as mass.
Energy is the quantitative relationship between Mass, Space and Time. It is a property of Mass that can be defined as motion and that can always be broken down into E = MV²/2. Whereas a body’s momentum is linear and absolute in space and time, a body’s energy is absolute and non linear. A body’s energy is contained in both its linear and rotational motions. The energy of the universe is conserved as it is transformed from one form to another. The kinetic energy of the two lumps of clay colliding together remains constant in the heat and sound energy generated by the collision.
There is no transformation between mass and energy as is implied by the formula E = MC². In the universe, the total individual quantities for mass and energy are constant. Energy has mass according to M = E/C² but energy cannot be converted to mass because energy is mass. A pot of water heated on an electric stove gains mass that it takes away from the electricity. When a positron annihilates with an electron, the two photons produced each have masses equal to the electron or positron. In each of these photons, their mass and energy are equal according to E = MC²/2. The photon has two distinct types of energy: the kinetic energy of its motion at C along its vector and the rotational kinetic energy of its two mass particles spinning at C. Photons have a mass of M = E/C² and an energy of E = MC²/2 + Iω²/2 = MC².
Matter is produced by the dividing of photons. A photon is a perfect union between a positive matter body and an exactly equal and opposite negative antimatter body. For example, when a positron and an electron annihilate, they combine together and then split into two x-ray photons. Each photon contains one-half of the electron’s negative matter and one-half of the positron’s positive matter, as well as half of each particle’s mass. These two oppositely spinning, rope like, particles join together within the photon and move forward at C with an undulating wavelike motion that gives the photon its characteristic angular momentum. This is the source of the wave particle duality. The photon is a mass particle that moves through space with a wave-like motion.
When a gamma ray photon splits into a positron and an electron, an opposite process takes place. The negative matter particle and positive matter particle within the photon separate from one another to become equal and opposite positive and negative particles (positron and electron). In these transformations between matter and photons and back again, there is no change in mass. The photons from an electron-positron annihilation have exactly the same mass as the original particles.
When a body is accelerated relative to photon rest, its mass is increased with its increasing kinetic energy. This kinetic mass (kM) increases exponentially as the velocity approaches the speed of light according to kM = M/√(1 − V²/C²).
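A small numeric sketch of this formula, which has the same algebraic form as the standard Lorentz factor; the rest mass and speeds below are illustrative choices, not values from the text.

```python
import math

C = 2.998e8  # speed of light, m/s

def kinetic_mass(rest_mass_kg, speed_m_s):
    """kM = M / sqrt(1 - V^2/C^2), as quoted in the text."""
    return rest_mass_kg / math.sqrt(1.0 - (speed_m_s / C) ** 2)

m0 = 9.109e-31  # electron rest mass, kg (illustrative choice)
for v in (0.1 * C, 0.5 * C, 0.9 * C, 0.99 * C):
    print(f"V = {v / C:.2f} C  ->  kM = {kinetic_mass(m0, v):.3e} kg")
```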
There is no transformation between mass, energy and photons. Photons have mass and it takes mass to make photons. Energy has mass and the mass of a photon’s energy is the photon’s mass.
The Nature of Force
Absolute Force equals mass times acceleration or deceleration or any combination of the two.
Force is the basic parameter of physical interaction. Newton first defined force as mass times acceleration (F = MA). Force always causes acceleration and acceleration is always the result of force. Any time the word “force” is used in a situation where no acceleration can be measured it is not a real force but rather a “force-like” event.
For Example, it is not possible to have two equal and opposite “forces” because no acceleration can be produced. Two equal and opposite forces would cancel each other. What you have is not a force but a tension. Tension is not force because it does not produce acceleration. A tension does not require energy and its measurement is not quantifiable as force. For example, the tension on a compressed spring is not a force because there is no acceleration to measure and because the static tension on the spring tells us nothing about the quantity of force that was required to compress it, or the force that is measured when a particular spring is released. Consider two coil springs that are identical except that one is twice as long as the other. Now, if we compress both of these springs to the same tension we cannot equate these two equal tensions as equal forces because, when released, the long spring will produce twice as much force (and energy) as the short spring. Therefore, a stationary compressed spring does not exert a force because it doesn’t produce acceleration or require energy.
No Centrifugal Force
Centrifugal force is a force in name only and cannot be regarded as a real force because it is, by definition, the reciprocal of centripetal force. What is called centrifugal “force” is actually the inward acceleration produced by centripetal force. Centripetal force is a real force because it produces an inward acceleration of mass (mass x acceleration) that is the definitive measurement of a force. Centrifugal force is not a force. It is a mass times acceleration. It is a mass being accelerated by a force. There is no measurement that can be made of centrifugal force because there is no outward acceleration to measure. The only thing that can be measured is the inward acceleration produced by centripetal force. Force can only be measured with an accelerometer and an accelerometer only registers force. In the case of the accelerometer readings produced by the constant change in direction of a rotating body, half of the reading is acceleration and the other half is deceleration. A rotating body is constantly speeding up and slowing down at the same rate and thus maintains a constant rotational velocity.
The Absolute Direction of Force
We can always measure the true direction and magnitude of a force. What we can’t do is to separate the mixture of absolute acceleration and deceleration produced by every force.
The distinction between force and acceleration with centripetal force and centrifugal acceleration is absolute. The direction of centripetal force is always towards the center of gravity of the rotating system. In contrast to linear motion, rotational motion is always absolute because it is two dimensional.
However when we measure rectilinear force it is best to consider the forces as relative and not consider the distinction between the true direction of acceleration or deceleration relative to absolute rest.
In fact, the proper definition of force is: Force equals mass times acceleration or deceleration or any combination of the two. In practice, with linear change in motion, it is not possible to make an absolute distinction between acceleration and deceleration even though relative to photon rest a true value must exist. Virtually every measurement of acceleration has components of both acceleration and deceleration that are impossible to separate without looking to the universe at large.
For example, it is easiest to think that a railroad locomotive exerts a force that accelerates a train the same amount in either a westerly or easterly direction. However, in the real world of absolute motion there is an absolute distinction between acceleration and deceleration. In the case of the locomotive accelerating the train toward the east, the direction of the force is to the east and the train’s absolute motion is increased. However, when the train goes west, the direction of the force is west but in this interaction the force of the locomotive is actually decelerating the train to a lower velocity relative to the earth’s rotation.
In this case with the train, we are certainly justified to consider the change in motion to be relative to a stationary Earth. However, any absolute changes in motion can only be measured relative to the photons of the 2.7° CBR.
In any interaction between a force and an accelerating or decelerating mass there is always an absolute and true direction of force that is the direction that energy needs to accelerate or decelerate the mass. However, in the real world of measurement the observer is usually justified in using an accelerometer reading as the direction of force and not making any distinction between acceleration and deceleration. There is absolutely no way to locally measure the difference between acceleration and deceleration. Observers can choose the reference frame that best serves their purposes.
The Living Universe Book
A New Theory for the Creation of Matter in the Universe
In the Living Universe, the properties of matter slowly evolve with a transformation in the mass and size of the electron. Matter was created not out of the chaos of an explosion of space and time but rather from the perfect and orderly reproductive processes of ordinary matter in the form of electrons and protons. This book is available for sale.
Monomials, binomials and trinomials are special cases of polynomials with one, two and three terms respectively.
The polynomial can be written in sigma notation as f(x) = Σᵢ₌₀ⁿ aᵢxⁱ, where a₀, ..., aₙ are its coefficients. Polynomials of:
- degree 0 are called constant functions,
- degree 1 are called linear functions,
- degree 2 are called quadratic functions,
- degree 3 are called cubic functions,
- degree 4 are called quartic functions and
- degree 5 are called quintic functions.
Note that the polynomials of degree ≤ n are precisely those functions whose (n+1)st derivative is identically zero.
One important aspect of calculus is the project of analyzing complicated functions by means of approximating them with polynomials. The culmination of these efforts is Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial, and the Weierstrass approximation theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial.
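As a small illustration of approximating a function by a polynomial (not tied to any particular statement of Taylor's theorem), the sketch below compares sin(x) with its degree-5 Taylor polynomial about 0.

```python
import math

def taylor_sin_deg5(x):
    """Degree-5 Taylor polynomial of sin about 0: x - x^3/6 + x^5/120."""
    return x - x**3 / 6 + x**5 / 120

for x in (0.1, 0.5, 1.0):
    print(f"x={x}:  sin={math.sin(x):.6f}  poly={taylor_sin_deg5(x):.6f}")
```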
Quotients of polynomials are called rational functions. Piecewise rationals are the only functions that can be evaluated directly on a computer, since typically only the operations of addition, multiplication, division and comparison are implemented in hardware. All the other functions that computers need to evaluate, such as trigonometric functions, logarithms and exponential functions, must then be approximated in software by suitable piecewise rational functions.
In order to determine function values of polynomials for given values of the variable x, one does not apply the polynomial as a formula directly, but uses the much more efficient Horner scheme instead. If the evaluation of a polynomial at many equidistant points is required, Newton's difference method reduces the amount of work dramatically. The Difference Engine of Charles Babbage was designed to create large tables of values of logarithms and trigonometric functions automatically by evaluating approximating polynomials at many points using Newton's difference method.
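A minimal sketch of the Horner scheme, with coefficients listed from the highest power down; the example polynomial is arbitrary.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x using the Horner scheme.

    coeffs are given from the highest-degree term down, so
    [2, -6, 2, -1] represents 2x^3 - 6x^2 + 2x - 1.
    """
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([2, -6, 2, -1], 3))  # 2*27 - 6*9 + 2*3 - 1 = 5
```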
A root or zero of the polynomial f(x) is a number r such that f(r) = 0. Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. Some polynomials, such as f(x) = x² + 1, do not have any roots among the real numbers. If however the set of allowed candidates is expanded to the complex numbers, every (non-constant) polynomial has a root (see Fundamental Theorem of Algebra).
Approximations for the real roots of a given polynomial can be found using Newton's method, or more efficiently using Laguerre's method which employs complex arithmetic and can locate all complex roots. These algorithms are studied in numerical analysis.
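A rough sketch of Newton's method applied to a polynomial; the example polynomial, starting point and tolerance are arbitrary choices made for illustration.

```python
def polyval(coeffs, x):
    """Horner evaluation; coefficients listed from the highest degree down."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def newton_poly_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Approximate a real root of the polynomial by Newton's method."""
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]   # derivative coefficients
    x = x0
    for _ in range(max_iter):
        slope = polyval(deriv, x)
        if slope == 0:
            break
        x_next = x - polyval(coeffs, x) / slope
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(newton_poly_root([1, 0, -2], x0=1.0))   # root of x^2 - 2, approximately 1.41421356
```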
There is a difference between approximating roots and finding concrete closed formulas for them. Formulas for the roots of polynomials of degree up to 4 have been known since the sixteenth century (see quadratic formula, Cardano, Tartaglia). But formulas for degree 5 eluded researchers for a long time. In 1824, Abel proved the striking result that there can be no general formula (involving only the arithmetical operations and radicals) for the roots of a polynomial of degree ≥ 5 in terms of its coefficients (see Abel-Ruffini theorem). This result marked the start of Galois theory which engages in a detailed study of relations among roots of polynomials.
In multivariate calculus, polynomials in several variables play an important role. These are the simplest multivariate functions and can be defined using addition and multiplication alone. An example of a polynomial in the variables x, y, and z is x³y²z + 2xy - 5z + 7.
In abstract algebra, one has to carefully distinguish between polynomials and polynomial functions. A polynomial f is defined to be a formal expression of the form f = aₙXⁿ + ... + a₁X + a₀, where the coefficients a₀, ..., aₙ come from some ring R and X is a formal symbol. The set of all such polynomials forms the polynomial ring R[X]; addition is defined coefficient-wise and multiplication is determined by the rules
- X a = a X for all elements a of the ring R
- Xᵏ Xˡ = Xᵏ⁺ˡ for all natural numbers k and l.
One can think of the ring R[X] as arising from R by adding one new element X to R and only requiring that X commute with all elements of R. In order for R[X] to form a ring, all sums of powers of X have to be included as well. Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones. For instance, the clean construction of finite fields involves the use of those operations, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic).
To every polynomial f in R[X], one can associate a polynomial function with domain and range equal to R. One obtains the value of this function for a given argument r by everywhere replacing the symbol X in f's expression by r. The reason that algebraists have to distinguish between polynomials and polynomial functions is that over some rings R (for instance over finite fields), two different polynomials may give rise to the same polynomial function. This is not the case over the real or complex numbers and therefore analysts don't separate the two concepts.
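A tiny concrete illustration of that last point (a standard example, not specific to this article): over the field with two elements, X² + X is a nonzero polynomial whose polynomial function is identically zero.

```python
# Work modulo 2, i.e., in the field GF(2) = {0, 1}.
def f(x):
    return (x * x + x) % 2   # the polynomial function of X^2 + X over GF(2)

print([f(x) for x in (0, 1)])   # [0, 0] - the same values as the zero polynomial,
                                # although X^2 + X and 0 differ as polynomials
```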
In commutative algebra, one major focus of study is divisibility among polynomials. If R is an integral domain and f and g are polynomials in R[X], we say that f divides g if there exists a polynomial q in R[X] such that f q = g. One can then show that "every zero gives rise to a linear factor", or more formally: if f is a polynomial in R[X] and r is an element of R such that f(r) = 0, then the polynomial (X - r) divides f. The converse is also true. The quotient can be computed using the Horner scheme.
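A sketch of that computation: dividing f by (X - r) with the Horner scheme yields the quotient coefficients, and the final accumulated value is the remainder, which equals f(r).

```python
def divide_by_linear(coeffs, r):
    """Divide the polynomial (coefficients from highest degree down) by (X - r).

    Returns (quotient_coeffs, remainder); the remainder equals f(r)."""
    values = []
    acc = 0
    for c in coeffs:
        acc = acc * r + c
        values.append(acc)
    remainder = values.pop()   # last accumulated value is f(r)
    return values, remainder

# f(X) = X^3 - 6X^2 + 11X - 6 has the root r = 1, so (X - 1) divides it.
print(divide_by_linear([1, -6, 11, -6], 1))   # ([1, -5, 6], 0), i.e. X^2 - 5X + 6, remainder 0
```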
If F is a field and f and g are polynomials in F[X] with g ≠ 0, then there exist polynomials q and r in F[X] with
- f = q g + r
and with r = 0 or deg(r) < deg(g); the polynomials q and r are uniquely determined by f and g. This is division with remainder.
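For a quick numerical check of such a division, NumPy's polydiv can be used (shown purely as an illustration; coefficients are listed from the highest degree down).

```python
import numpy as np

# Divide f = X^3 + 2X^2 - 5X + 1 by g = X^2 + 1.
q, r = np.polydiv([1, 2, -5, 1], [1, 0, 1])
print(q)   # [1. 2.]    -> quotient X + 2
print(r)   # [-6. -1.]  -> remainder -6X - 1
```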
One also speaks of polynomials in several variables, obtained by taking the ring of polynomials of a ring of polynomials: R[X,Y] = (R[X])[Y] = (R[Y])[X]. These are of fundamental importance in algebraic geometry which studies the simultaneous zero sets of several such multivariate polynomials.
Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element.
Other related objects studied in abstract algebra are formal power series, which are like polynomials but may have infinite degree, and the rational functions, which are ratios of polynomials.
- Polynomial sequence
- Chebyshev polynomials
- Ehrhart polynomial (It is appropriate that this title is singular although some of the other special polynomials named after persons that are listed here are plural, because those are special polynomial sequences.)
- Hermite polynomials
- Hurwitz polynomial (It is appropriate that this title is singular although some of the other special polynomials named after persons that are listed here are plural, because those are special polynomial sequences.)
- Legendre polynomials
- Polynomial interpolation
- Binomial type
- Sheffer sequence
- List of polynomial topics
The surface gravity, g, of an astronomical or other object is the gravitational acceleration experienced at its surface. The surface gravity may be thought of as the acceleration due to gravity experienced by a hypothetical test particle which is very close to the object's surface and which, in order not to disturb the system, has negligible mass.
Surface gravity is measured in units of acceleration, which, in the SI system, are meters per second squared. It may also be expressed as a multiple of the Earth's standard surface gravity, g = 9.80665 m/s². In astrophysics, the surface gravity may be expressed as log g, which is obtained by first expressing the gravity in cgs units, where the unit of acceleration is centimeters per second squared, and then taking the base 10 logarithm. Therefore, as gravity affects all things equally, regardless of their mass, and because 1 m/s² = 100 cm/s², the surface gravity of Earth could be expressed in cgs units as 980.665 cm/s², and its base 10 logarithm (log g) as 2.992.
The surface gravity of a white dwarf is very high, and that of a neutron star even higher. The neutron star's compactness gives it a surface gravity of up to 7×10¹² m/s² with typical values of a few ×10¹² m/s² (that is more than 10¹¹ times that of Earth). One measure of such immense gravity is the fact that neutron stars have an escape velocity of around 100,000 km/s, about a third of the speed of light.
Mass, radius and surface gravity
In the Newtonian theory of gravity, the gravitational force exerted by an object is proportional to its mass: an object with twice the mass produces twice as much force. Newtonian gravity also follows an inverse square law, so that moving an object twice as far away divides its gravitational force by four, and moving it ten times as far away divides it by 100. This is similar to the intensity of light, which also follows an inverse square law: doubling the distance from a light source reduces its apparent brightness to one quarter.
A large object, such as a planet or star, will usually be approximately round, approaching hydrostatic equilibrium (where all points on the surface have the same amount of gravitational potential energy). On a small scale, higher parts of the terrain are eroded, with eroded material deposited in lower parts of the terrain. On a large scale, the planet or star itself deforms until equilibrium is reached. For most celestial objects, the result is that the planet or star in question can be treated as a near-perfect sphere when the rotation rate is low. However, for young, massive stars, the equatorial azimuthal velocity can be quite high—up to 200 km/s or more—causing a significant amount of equatorial bulge. Examples of such rapidly rotating stars include Achernar, Altair, Regulus A and Vega.
The fact that many large celestial objects are approximately spheres makes it easier to calculate their surface gravity. The gravitational force outside a spherically symmetric body is the same as if its entire mass were concentrated in the center, as was established by Sir Isaac Newton. Therefore, the surface gravity of a planet or star with a given mass will be approximately inversely proportional to the square of its radius, and the surface gravity of a planet or star with a given average density will be approximately proportional to its radius. For example, the recently discovered planet, Gliese 581 c, has at least 5 times the mass of Earth, but is unlikely to have 5 times its surface gravity. If its mass is no more than 5 times that of the Earth, as is expected, and if it is a rocky planet with a large iron core, it should have a radius approximately 50% larger than that of Earth. Gravity on such a planet's surface would be approximately 2.2 times as strong as on Earth. If it is an icy or watery planet, its radius might be as large as twice the Earth's, in which case its surface gravity might be no more than 1.25 times as strong as the Earth's.
These proportionalities may be expressed by the formula g = m/r², where g is the surface gravity of an object, expressed as a multiple of the Earth's, m is its mass, expressed as a multiple of the Earth's mass (5.976·10²⁴ kg) and r its radius, expressed as a multiple of the Earth's (mean) radius (6,371 km). For instance, Mars has a mass of 6.4185·10²³ kg = 0.107 Earth masses and a mean radius of 3,390 km = 0.532 Earth radii. The surface gravity of Mars is therefore approximately
0.107/0.532² ≈ 0.38 times that of Earth. Without using the Earth as a reference body, the surface gravity may also be calculated directly from Newton's Law of Gravitation, which gives the formula g = GM/r², where M is the mass of the object, r is its radius and G is the gravitational constant. Writing the mass in terms of the mean density ρ as M = (4π/3)ρr³ turns this into g = (4π/3)Gρr, so that, for fixed mean density, the surface gravity g is proportional to the radius r.
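As a quick numerical check of the Mars figure, here is a minimal Python sketch (not part of the original article); the constants are the Earth values quoted above, and the function names are purely illustrative.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
EARTH_MASS = 5.976e24    # kg, value quoted above
EARTH_RADIUS = 6.371e6   # m, mean radius quoted above

def g_relative(mass_ratio, radius_ratio):
    """Surface gravity as a multiple of Earth's, using g = m / r**2."""
    return mass_ratio / radius_ratio ** 2

def g_absolute(mass_kg, radius_m):
    """Surface gravity in m/s**2 from Newton's law, g = G*M / r**2."""
    return G * mass_kg / radius_m ** 2

# Mars: 0.107 Earth masses, 0.532 Earth radii
print(g_relative(0.107, 0.532))                              # ~0.38
print(g_absolute(0.107 * EARTH_MASS, 0.532 * EARTH_RADIUS))  # ~3.7 m/s^2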
Since gravity is inversely proportional to the square of the distance, a space station 100 miles above the Earth feels almost the same gravitational force as we do on the Earth's surface. The reason a space station does not plummet to the ground is not that it is not subject to gravity, but that it is in a free-fall orbit.
Non-spherically symmetric objects
Most real astronomical objects are not absolutely spherically symmetric. One reason for this is that they are often rotating, which means that they are affected by the combined effects of gravitational force and centrifugal force. This causes stars and planets to be oblate, which means that their surface gravity is smaller at the equator than at the poles. This effect was exploited by Hal Clement in his SF novel Mission of Gravity, dealing with a massive, fast-spinning planet where gravity was much higher at the poles than at the equator.
To the extent that an object's internal distribution of mass differs from a symmetric model, we may use the measured surface gravity to deduce things about the object's internal structure. This fact has been put to practical use since 1915–1916, when Roland Eötvös's torsion balance was used to prospect for oil near the city of Egbell (now Gbely, Slovakia) (Li & Götze 2001, p. 1663; Tóth 2002, p. 223). In 1924, the torsion balance was used to locate the Nash Dome oil fields in Texas (Tóth 2002, p. 223).
It is sometimes useful to calculate the surface gravity of simple hypothetical objects which are not found in nature. The surface gravity of infinite planes, tubes, lines, hollow shells, cones, and even more unrealistic structures may be used to provide insights into the behavior of real structures.
Surface gravity of a black hole
In relativity, the Newtonian concept of acceleration turns out not to be clear cut. For a black hole, which must be treated relativistically, one cannot define a surface gravity as the acceleration experienced by a test body at the object's surface. This is because the acceleration of a test body at the event horizon of a black hole turns out to be infinite in relativity. Because of this, a renormalized value is used that corresponds to the Newtonian value in the non-relativistic limit. The value used is generally the local proper acceleration (which diverges at the event horizon) multiplied by the gravitational redshift factor (which goes to zero at the event horizon). For the Schwarzschild case, this value is mathematically well behaved for all non-zero values of r and M.
When one talks about the surface gravity of a black hole, one is defining a notion that behaves analogously to the Newtonian surface gravity, but is not the same thing. In fact, the surface gravity of a general black hole is not well defined. However, one can define the surface gravity for a black hole whose event horizon is a Killing horizon.
The surface gravity κ of a static Killing horizon is the acceleration, as exerted at infinity, needed to keep an object at the horizon. Mathematically, if k^a is a suitably normalized Killing vector, then the surface gravity is defined by
k^a ∇_a k^b = κ k^b,
where the equation is evaluated at the horizon. For a static and asymptotically flat spacetime, the normalization should be chosen so that k^a k_a → -1 as r → ∞, and so that κ ≥ 0. For the Schwarzschild solution, we take k^a to be the time translation Killing vector k^a ∂_a = ∂/∂t, and more generally for the Kerr-Newman solution we take k^a ∂_a = ∂/∂t + Ω ∂/∂φ, the linear combination of the time translation and axisymmetry Killing vectors which is null at the horizon, where Ω is the angular velocity.
The Schwarzschild solution
Since k^a = (∂/∂t)^a is a Killing vector, k^a ∇_a k^b = κ k^b implies k^a ∇_a k^b = ∂^b(-k^a k_a / 2). In (t, r, θ, φ) coordinates, k^a = (1, 0, 0, 0). Performing a coordinate change to the advanced Eddington-Finkelstein coordinate v = t + r + 2M ln|r - 2M| causes the metric to take the form
ds² = -(1 - 2M/r) dv² + (dv dr + dr dv) + r²(dθ² + sin²θ dφ²).
Under a general change of coordinates the Killing vector transforms as k^v = A^v_t k^t, giving the vectors k^{a'} = δ^{a'}_v = (1, 0, 0, 0) and k_{a'} = g_{a'v} = (-(1 - 2M/r), 1, 0, 0).
Considering the b = v entry of k^a ∇_a k^b = ∂^b(-k^a k_a / 2) gives the differential equation ½ ∂/∂r (1 - 2M/r) = κ.
Therefore the surface gravity for the Schwarzschild solution with mass M is κ = 1/(4M) (in units with G = c = 1; in SI units, κ = c⁴/(4GM)).
The Kerr-Newman solution
The surface gravity for the Kerr-Newman solution is
κ = (r₊ - r₋) / (2(r₊² + a²)),
where Q is the electric charge, J is the angular momentum, r₊ and r₋ := M ± √(M² - Q² - J²/M²) are defined to be the locations of the two horizons, and a := J/M.
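The formulas above are straightforward to evaluate numerically. The following Python sketch (an added illustration, not from the original article) works in geometric units with G = c = 1, so masses, charges and spins all carry units of length.

from math import sqrt

def kappa_schwarzschild(M):
    """Surface gravity of a Schwarzschild black hole, kappa = 1/(4M)."""
    return 1.0 / (4.0 * M)

def kappa_kerr_newman(M, Q, J):
    """Surface gravity kappa = (r+ - r-) / (2*(r+**2 + a**2)), with a = J/M."""
    a = J / M
    disc = sqrt(M**2 - Q**2 - a**2)
    r_plus, r_minus = M + disc, M - disc
    return (r_plus - r_minus) / (2.0 * (r_plus**2 + a**2))

print(kappa_schwarzschild(1.0))          # 0.25
print(kappa_kerr_newman(1.0, 0.0, 0.0))  # 0.25 -- reduces to the Schwarzschild value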
Dynamical black holes
Surface gravity for stationary black holes is well defined. This is because all stationary black holes have a horizon that is Killing. Recently there has been a shift towards defining the surface gravity of dynamical black holes whose spacetime does not admit a Killing vector field. Several definitions have been proposed over the years by various authors, and at present there is no consensus on which definition, if any, is correct.
- p. 29, The International System of Units (SI), ed. Barry N. Taylor, NIST Special Publication 330, 2001.
- Smalley, B. (2006-07-13). "The Determination of Teff and log g for B to G stars". Keele University. Retrieved 2007-05-31.
- Isaac Asimov (1978). The Collapsing Universe. Corgi. p. 44. ISBN 0-552-10884-7.
- Why is the Earth round?, at Ask A Scientist, accessed online May 27, 2007.
- Book I, §XII, pp. 218–226, Newton's Principia: The Mathematical Principles of Natural Philosophy, Sir Isaac Newton, tr. Andrew Motte, ed. N. W. Chittenden. New York: Daniel Adee, 1848. First American edition.
- Astronomers Find First Earth-like Planet in Habitable Zone, ESO 22/07, press release from the European Southern Observatory, April 25, 2007
- The HARPS search for southern extra-solar planets XI. Super-Earths (5 & 8 M_Earth) in a 3-planet system, S. Udry, X. Bonfils, X. Delfosse, T. Forveille, M. Mayor, C. Perrier, F. Bouchy, C. Lovis, F. Pepe, D. Queloz, and J.-L. Bertaux. arXiv:astro-ph/0704.3841.
- Detailed Models of super-Earths: How well can we infer bulk properties?, Diana Valencia, Dimitar D. Sasselov, and Richard J. O'Connell, arXiv:astro-ph/0704.3454.
- 2.7.4 Physical properties of the Earth, web page, accessed on line May 27, 2007.
- Mars Fact Sheet, web page at NASA NSSDC, accessed May 27, 2007.
- Ellipsoid, geoid, gravity, geodesy, and geophysics, Xiong Li and Hans-Jürgen Götze, Geophysics, 66, #6 (November–December 2001), pp. 1660–1668. DOI 10.1190/1.1487109.
- Prediction by Eötvös' torsion balance data in Hungary, Gyula Tóth, Periodica Polytechnica Ser. Civ. Eng. 46, #2 (2002), pp. 221–229.
- Wald, Robert (1984). General Relativity. University Of Chicago Press. ISBN 978-0-226-87033-5.
- Nielsen, Alex; Yoon (2008). "Dynamical Surface Gravity". Classical Quantum Gravity 25.
- Pielahn, Mathias; G. Kunstatter, A. B. Nielsen (November 2011). "Dynamical surface gravity in spherically symmetric black hole formation". Physical Review D 84 (10): 104008(11). arXiv:1103.0750. Bibcode:2011PhRvD..84j4008P. doi:10.1103/PhysRevD.84.104008.
Energy density is the amount of energy stored in a given system or region of space per unit volume. Often only the useful or extractable energy is quantified, which is to say that chemically inaccessible energy such as rest mass energy is ignored. Quantified energy is, as the name suggests, energy with a definite, measurable magnitude and associated units.
For fuels, the energy per unit volume is sometimes a useful parameter. Comparing, for example, the effectiveness of hydrogen fuel to gasoline, hydrogen has a higher specific energy (energy per unit mass) than gasoline does, but, even in liquid form, a much lower volumetric energy density.
Energy per unit volume has the same physical units as pressure, and in many circumstances is an exact synonym: for example, the energy density of the magnetic field may be expressed as (and behaves as) a physical pressure, and the energy required to compress a compressed gas a little more may be determined by multiplying the difference between the gas pressure and the pressure outside by the change in volume. In short, pressure is a measure of the volumetric enthalpy of a system, that is, the enthalpy per unit volume. A pressure gradient has a potential to perform work on the surroundings by converting enthalpy until equilibrium is reached.
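To make the pressure–energy connection concrete, here is a tiny Python sketch (with made-up illustrative numbers, not values from the article) estimating the energy needed to push a little more gas into a pressurized vessel, as the paragraph above describes.

p_gas = 5.0e5        # Pa, pressure inside the vessel (5 bar)
p_outside = 1.0e5    # Pa, atmospheric pressure
delta_v = 0.001      # m^3 of gas pushed into the vessel

energy_joules = (p_gas - p_outside) * delta_v
print(energy_joules)   # 400.0 J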
Introduction to energy density
Stored energy can take many forms, and there are several types of reactions that release energy. In order of typical magnitude, these are: Nuclear, chemical, electrochemical, and electrical.
Chemical reactions are used by animals to derive energy from food, and by automobiles to derive energy from gasoline. Electrochemical reactions are used by most mobile devices such as laptop computers and mobile phones.
Energy densities of common energy storage materials
The following is a list of the energy densities of commonly used or well-known energy storage materials; it doesn't include uncommon or experimental materials. Note that this list does not consider the mass of reactants commonly available such as the oxygen required for combustion.
|Storage material||Energy type||MJ per kilogram||MJ per liter (litre)||Direct uses|
|Deuterium–tritium||Nuclear fusion||330 000 000||0.14 ||Proposed power plants (under development)|
|Uranium-235||Nuclear fission||83 140 000||1 546 000 000||Electric power plants (nuclear reactors)|
|Hydrogen (compressed at 70 MPa)||Chemical||123||5.6||Experimental automotive engines|
|Gasoline (petrol) / Diesel||Chemical||~46||~36||Automotive engines|
|Propane (including LPG)||Chemical||46.4||26||Cooking, home heating, automotive engines|
|Fat (animal/vegetable)||Chemical||37||Human/animal nutrition|
|Coal||Chemical||24||Electric power plants, home heating|
|Carbohydrates (including sugars)||Chemical||17||Human/animal nutrition|
|Wood||Chemical||16.2||Heating, outdoor cooking|
|Lithium battery||Electrochemical||1.8||4.32||Portable electronic devices, flashlights (non-rechargeable)|
|Lithium-ion battery||Electrochemical||0.72-0.875||0.9-2.63||Laptop computers, mobile devices, some modern electric vehicles|
|Alkaline battery||Electrochemical||0.67||1.8||Portable electronic devices, flashlights|
|Nickel-metal hydride battery||Electrochemical||0.288||0.504-1.08||Portable electronic devices, flashlights|
|Lead-acid battery||Electrochemical||0.17||0.34||Automotive engine ignition|
|Electrostatic capacitor||Electrical||0.000036||Electronic circuits|
|Storage device||Energy type||Energy content||Typical mass||W × H × D (mm)||Uses|
|Automotive battery (lead-acid)||Electrochemical||2.6 megajoules||15 kilograms||230 × 180 × 185||Automotive starter motor and accessories|
|Alkaline "battery" (AA size)||Electrochemical||15.4 kilojoules||23 grams||14.5 × 50.5 × 14.5||Portable electronic equipment, flashlights|
|lithium-ion battery (Nokia BL-5C)||Electrochemical||12.9 kilojoules||18.5 grams||54.2 × 33.8 × 5.8||Mobile phones|
Energy density in energy storage and in fuel
In energy storage applications the energy density relates the mass of an energy store to the volume of the storage facility, e.g. the fuel tank. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. The energy of a fuel per unit mass is called the specific energy of that fuel. In general an engine using that fuel will generate less kinetic energy due to inefficiencies and thermodynamic considerations; hence the specific fuel consumption of an engine will always be greater than its rate of production of the kinetic energy of motion.
The greatest energy source by far consists of mass itself. This energy is E = mc², where m = ρV, ρ is the mass per unit volume, V is the volume of the mass itself and c is the speed of light. This energy, however, can be released only by the processes of nuclear fission (about 0.1%), nuclear fusion (about 1%), or the annihilation of some or all of the matter in the volume V by matter-antimatter collisions (100%). Nuclear reactions cannot be realized by chemical reactions such as combustion. Although greater matter densities can be achieved, a neutron star approximates the most dense system capable of matter-antimatter annihilation. A black hole, although denser than a neutron star, does not have an equivalent anti-particle form.
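A short Python sketch (an added illustration, not from the article) shows the scale of these numbers; the 0.1% and 1% figures are the rough release fractions quoted above.

c = 2.998e8              # speed of light, m/s
mass_kg = 1.0

rest_energy = mass_kg * c**2
print(rest_energy)          # ~9.0e16 J locked up in one kilogram of matter
print(rest_energy * 0.001)  # ~9.0e13 J, rough yield of fission
print(rest_energy * 0.01)   # ~9.0e14 J, rough yield of fusion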
The highest density sources of energy aside from antimatter are fusion and fission. Fusion includes energy from the sun, which will be available for billions of years (in the form of sunlight), but so far (2011) sustained fusion power production continues to be elusive. Fission of uranium and thorium in nuclear power plants will be available for a long time due to the vast supply of these elements on Earth, though the full potential of this source can only be realised through breeder reactors, which are not yet used commercially. Coal, gas, and petroleum are the current primary energy sources in the U.S. but have a much lower energy density. Burning local biomass fuels supplies household energy needs (cooking fires, oil lamps, etc.) worldwide.
Energy density (how much energy you can carry) does not tell you about energy conversion efficiency (net output per input) or embodied energy (what the energy output costs to provide, as harvesting, refining, distributing, and dealing with pollution all use energy). Like any process occurring on a large scale, intensive energy use impacts the world. For example, climate change, nuclear waste storage, and deforestation may be some of the consequences of supplying our growing energy demands from carbohydrate fuels, nuclear fission, or biomass.
No single energy storage method boasts the best in specific power, specific energy, and energy density. Peukert's Law describes how the amount of useful energy that can be obtained (for a lead-acid cell) depends on how quickly we pull it out. To maximize both specific energy and energy density, one can compute the specific energy density of a substance by multiplying the two values together, where the higher the number, the better the substance is at storing energy efficiently.
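The "multiply the two values together" heuristic described above can be written out directly; the figures in this added Python sketch are taken from the table earlier in the article, and the ranking is only as meaningful as the heuristic itself.

fuels = {
    "gasoline":             (46.0, 36.0),   # (MJ/kg, MJ/L)
    "hydrogen at 70 MPa":   (123.0, 5.6),
    "lithium-ion battery":  (0.8, 1.8),
}

for name, (mj_per_kg, mj_per_litre) in fuels.items():
    print("%s: %.1f" % (name, mj_per_kg * mj_per_litre))
# gasoline ~1656, hydrogen ~689, lithium-ion ~1.4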
Gravimetric and volumetric energy density of some fuels and storage technologies (modified from the Gasoline article):
- Note: Some values may not be precise because of isomers or other irregularities. See Heating value for a comprehensive table of specific energies of important fuels.
- Note: Also it is important to realise that generally the density values for chemical fuels do not include the weight of oxygen required for combustion. This is typically two oxygen atoms per carbon atom, and one per two hydrogen atoms. The atomic weight of carbon and oxygen are similar, while hydrogen is much lighter than oxygen. Figures are presented this way for those fuels where in practice air would only be drawn in locally to the burner. This explains the apparently lower energy density of materials that already include their own oxidiser (such as gunpowder and TNT), where the mass of the oxidiser in effect adds dead weight, and absorbs some of the energy of combustion to dissociate and liberate oxygen to continue the reaction. This also explains some apparent anomalies, such as the energy density of a sandwich appearing to be higher than that of a stick of dynamite.
Energy densities ignoring external components
This table lists energy densities of systems that require external components, such as oxidisers or a heat sink or source. These figures do not take into account the mass and volume of the required components as they are assumed to be freely available and present in the atmosphere. Such systems cannot be compared with self-contained systems. These values may not be computed at the same reference conditions. Most of them seem to be higher heating value (HHV).
|Storage type||Specific energy (MJ/kg)||Energy density (MJ/L)||Peak recovery efficiency %||Practical recovery efficiency %|
|Planck energy density||8.99e10||4.633016e104|
|Hydrogen, at 690 bar and 15°C||141.86||4.5|
|Methane (1.013 bar, 15°C)||55.6||0.0378|
|LNG (NG at −160°C)||53.6||22.2|
|CNG (NG compressed to 250 bar/~3,600 psi)||53.6||9|
|Crude oil (according to the definition of ton of oil equivalent)||46.3||37|
|Diesel fuel/residential heating oil ||46.2||37.3|
|Gasohol E10 (10% ethanol 90% gasoline by volume)||43.54||33.18|
|Jet A aviation fuel/kerosene||42.8||33|
|Biodiesel oil (vegetable oil)||42.20||33|
|DMF (2,5-dimethylfuran)||42||37.8|
|Body fat metabolism||38||35||22|
|Gasohol E85 (85% ethanol 15% gasoline by volume)||33.1||25.65|
|Coal, anthracite||32.5||72.4||36|
|Polyester plastic||26.0 ||35.6|
|PET plastic||23.5 (impure)|
|Hydrazine (toxic) combusted to N2+H2O||19.5||19.3|
|Liquid ammonia (combusted to N2+H2O)||18.6||11.5|
|PVC plastic (improper combustion toxic)||18.0||25.2|
|Peat briquette ||17.7|
|Sugars, carbohydrates, and protein metabolism||17||26.2(dextrose)||22|
|Dry cow dung and cameldung||15.5|
|Coal, lignite||14.0|
|Sodium (burned to wet sodium hydroxide)||13.3||12.8|
|Sulfur (burned to sulfur dioxide)||9.23||19.11|
|Sodium (burned to dry sodium oxide)||9.1||8.8|
|Battery, lithium-air rechargeable||9.0|
|Iron (burned to iron(III) oxide)||5.2||40.68|
|Teflon plastic (combustion toxic, but flame retardant)||5.1||11.2|
|Iron (burned to iron(II) oxide)||4.9||38.2|
|Liquid nitrogen||0.77||0.62|
|Compressed air at 300 bar (potential energy)||0.5||0.2||>50%|
|Latent heat of fusion of ice (thermal)||0.335||0.335|
|Water at 100 m dam height (potential energy)||0.001||0.001||85-90%|
|Storage type||Energy density by mass (MJ/kg)||Energy density by volume (MJ/L)||Peak recovery efficiency %||Practical recovery efficiency %|
Energy density of electric and magnetic fields
In vacuum, the energy density of the electric and magnetic fields is u = (ε₀/2)E² + B²/(2μ₀), where E is the electric field and B is the magnetic field. The result is in joules per cubic metre. In the context of magnetohydrodynamics, the physics of conductive fluids, the magnetic energy density behaves like an additional pressure that adds to the gas pressure of a plasma.
In normal (linear) substances, the energy density (in SI units) is u = ½(E·D + B·H), where D is the electric displacement field and H is the magnetizing field.
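As a quick sanity check of the vacuum formula, here is a minimal Python sketch (an added illustration, not part of the article); the field strengths are arbitrary examples.

from math import pi

eps0 = 8.854e-12        # vacuum permittivity, F/m
mu0 = 4 * pi * 1e-7     # vacuum permeability, H/m

def u_electric(E):
    """Energy density (J/m^3) of an electric field E in V/m."""
    return 0.5 * eps0 * E**2

def u_magnetic(B):
    """Energy density (J/m^3) of a magnetic field B in tesla."""
    return B**2 / (2 * mu0)

print(u_electric(1.0e6))   # a 1 MV/m field stores ~4.4 J/m^3
print(u_magnetic(1.0))     # a 1 T field stores ~4.0e5 J/m^3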
- ^ "Aircraft Fuels." Energy, Technology and the Environment Ed. Attilio Bisio. Vol. 1. New York: John Wiley and Sons, Inc., 1995. 257–259
- "Fuels of the Future for Cars and Trucks" - Dr. James J. Eberhardt - Energy Efficiency and Renewable Energy, U.S. Department of Energy - 2002 Diesel Engine Emissions Reduction (DEER) Workshop San Diego, California - August 25–29, 2002
- The Inflationary Universe: The Quest for a New Theory of Cosmic Origins by Alan H. Guth (1998) ISBN 0-201-32840-2
- Cosmological Inflation and Large-Scale Structure by Andrew R. Liddle, David H. Lyth (2000) ISBN 0-521-57598-2
- Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
- "The Two Classes of SI Units and the SI Prefixes". NIST Guide to the SI. Retrieved 2012-01-25.
- "Facts from Cohen". Formal.stanford.edu. 2007-01-26. Retrieved 2010-05-07.
- "U.S. Energy Information Administration (EIA) - Annual Energy Review". Eia.doe.gov. 2009-06-26. Archived from the original on 2010-05-06. Retrieved 2010-05-07.
- Hydrogen properties Hydrogen Properties. Retrieved 2011-11-30.
- "Boron: A Better Energy Carrier than Hydrogen? (28 February 2009)". Eagle.ca. Retrieved 2010-05-07.
- Envestra Limited. Natural Gas. Retrieved 2008-10-05.
- IOR Energy. List of common conversion factors (Engineering conversion factors). Retrieved 2008-10-05.
- Paul A. Kittle, Ph.D. "ALTERNATE DAILY COVER MATERIALS AND SUBTITLE D - THE SELECTION TECHNIQUE". Retrieved 2012-01-25.
- "537.PDF" (PDF). June 1993. Retrieved 2012-01-25.
- "Energy Density of Aviation Fuel". Hypertextbook.com. Retrieved 2010-05-07.
- Nature. "Production of dimethylfuran for liquid fuels from biomass-derived carbohydrates : Abstract". Nature. Retrieved 2010-05-07.
- Justin Lemire-Elmore (2004-04-13). "The Energy Cost of Electric and Human-Powered Bicycles". p. 5. Retrieved 2009-02-26. "properly trained athlete will have efficiencies of 22 to 26%"
- Fisher, Juliya (2003). "Energy Density of Coal". The Physics Factbook. Retrieved 2006-08-25.
- Silicon as an intermediary between renewable energy and hydrogen
- "Elite_bloc.indd" (PDF). Retrieved 2010-05-07.
- "Biomass Energy Foundation: Fuel Densities". Woodgas.com. Archived from the original on 2010-01-10. Retrieved 2010-05-07.
- "Bord na Mona, Peat for Energy". Bnm.ie. Archived from the original on 2007-11-19. Retrieved 2012-01-25.
- Justin Lemire-Elmor (April 13, 2004). "The Energy Cost of Electric and Human-Powered Bicycle". Retrieved 2012-01-25.
- "energy buffers". Home.hccnet.nl. Retrieved 2010-05-07.
- Anne Wignall and Terry Wales. Chemistry 12 Workbook, page 138. Pearson Education NZ ISBN 978-0-582-54974-6
- Mitchell, Robert R.; Betar M. Gallant; Carl V. Thompson; Yang Shao-Horn (2011). "All-carbon-nanofiber electrodes for high-energy rechargeable Li–O2 batteries". Energy & Environmental Science 4: 2952–2958. doi:10.1039/C1EE01496J.
- David E. Dirkse. energy buffers. "household waste 8..11 MJ/kg"
- "Technical bulletin on Zinc-air batteries". Duracell. Archived from the original on 2009-01-27. Retrieved 2009-04-21.
- C. Knowlen, A.T. Mattick, A.P. Bruckner and A. Hertzberg, "High Efficiency Conversion Systems for Liquid Nitrogen Automobiles", Society of Automotive Engineers Inc, 1988.
This chapter is optional. If you expect to be working with individual bits, these operators are very helpful. Otherwise, if you don't expect to be working with anything other than plain-old decimal numbers, you can skip this chapter.
While we write numbers using decimal digits, in base 10, computers don’t really work that way internally. We touched on the computer’s view in Octal and Hexadecimal – Counting by 8’s or 16’s. Internally, the computer works in binary, base 2, which makes the circuitry very simple and very fast. One of the benefits of using Python is that we don’t need to spend much time on the internals, so this chapter is optional.
We’ll take a close look at data in Bits and Bytes, this will provide some justification for having base 8 and base 16 numbers. We’ll add some functions to see base 8 and base 16 in Different Bases and Representations. Then we’ll look at the operators for working with individual bits in Operators for Bit Manipulation.
The special operators that we’re going to cover in this chapter work on individual bits. First, we’ll have to look at what this really means. Then we can look at what the operators do to those things called bits.
A bit is a “binary digit” . The concept of bit closely parallels the concept of decimal digit with one important difference. There are only two binary digits (0 and 1), but there are 10 decimal digits (0 through 9).
Decimal Numbers. Our decimal numbers are a sequence of digits using base 10. Each decimal digit's place value is a power of 10. We have the 1,000's place, the 100's place, the 10's place and the 1's place. A number like 2185 is 2×1000 + 1×100 + 8×10 + 5.
Binary Numbers. Binary numbers are a sequence of binary digits using base 2. Each bit’s place value in the number is a power of 2. We have the 256’s place, the 128’s place, the 64’s place, the 32’s place, the 16’s place, the 8’s, 4’s, 2’s and the 1’s place. We can’t directly write binary numbers in Python. We’ll show them as a series of bits, like this 1-0-0-0-1-0-0-0-1-0-0-1. This starts with a 1 in the 2048’s place, a 1 in the 128’s place, plus a 1 in the 8’s place, plus a 1, which is 2185.
Octal Numbers. Octal numbers use base 8. In Python, we begin octal numbers with a leading zero. Each octal digit's place value is a power of 8. We have the 512's place, the 64's place, the 8's place and the 1's place. A number like 04211 is 4×512 + 2×64 + 1×8 + 1. This has a value of 2185.
Each group of three bits forms an octal digit. This saves us from writing out all those bits in detail. Instead, we can summarize them.
Binary: 1-0-0 0-1-0 0-0-1 0-0-1
Octal:    4     2     1     1
Hexadecimal Numbers. Hexadecimal numbers use base 16. In Python, we begin hexadecimal numbers with a leading 0x. Since we only have 10 digits, and we need 16 digits, we'll borrow the letters a, b, c, d, e and f to be the extra digits. Each hexadecimal digit's place value is a power of 16. We have the 4096's place, the 256's place, the 16's place and the 1's place. A number like 0x8a9 is 8×256 + 10×16 + 9, which has a value of 2217.
Each group of four bits forms a hexadecimal digit. This saves us from writing out all those bits in detail. Instead, we can summarize them.
Binary:      1-0-0-0 1-0-1-0 1-0-0-1
Hexadecimal:    8       a       9
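You can check these groupings yourself with the conversion functions described later in this chapter. This short interpreter session is an added illustration, using the 2185 example from above (newer versions of Python print the octal string as '0o4211' instead):

>>> int('100010001001', 2)
2185
>>> oct(2185)
'04211'
>>> hex(2185)
'0x889'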
Bytes. A byte is 8 bits. That means that a byte contains bits with place values of 128, 64, 32, 16, 8, 4, 2, 1. If we set all of these bits to 1, we get a value of 255. A byte has 256 distinct values. Computer memory is addressed at the individual byte level, that’s why you buy memory in units measured in megabytes or gigabytes.
In addition to small numbers, a single byte can store a single character encoded in ASCII. It takes as many as four bytes to store characters encoded with Unicode.
An integer has 4 bytes, which is 32 bits. In looking at the special operators, we’ll look at them using integer values. Python can work with individual bytes, but it does this by unpacking a byte’s value and saving it in a full-sized integer.
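One way to see the byte-sized nature of a character is to ask Python for its numeric code. These lines are an added illustration, not part of the original text:

>>> ord('A')
65
>>> hex(ord('A'))
'0x41'
>>> chr(65)
'A'

A single ASCII character fits comfortably in one byte, since 65 is well below 255.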
In Octal and Hexadecimal – Counting by 8’s or 16’s we saw that Python will accept base 8 or base 16 (octal or hexadecimal) numbers. We begin octal numbers with 0, and use digits 0 though 7. We begin a hexadecimal number with 0x and use digits 0 through 9 and a through f.
Python normally answers us in decimal. How can we ask Python to answer in octal or hexadecimal instead?
The hex() function converts its argument to a hexadecimal (base 16) string. A string is used because additional digits are needed beyond 0 through 9; a-f are pressed into service. A leading 0x is placed on the string as a reminder that this is hexadecimal. Here are some examples:
>>> hex(684)
'0x2ac'
>>> hex(1023)
'0x3ff'
>>> 0xffcc33
16763955
>>> hex(_)
'0xffcc33'
Note that the result of the hex() function is technically a string. An ordinary number would be presented as a decimal value, and couldn't contain the extra hexadecimal digits. That's why there are apostrophes in our output.
The oct() function converts its argument to an octal (base 8) string. A leading 0 is placed on the string as a reminder that this is octal not decimal. Here are some examples:
>>> oct(512)
'01000'
>>> oct(509)
'0775'
Here are the formal definitions.
More Hexadecimal and Octal tools. The hex() and oct() functions make a number into a specially-formatted string. The hex() function creates a string using the hexadecimal digit characters. The oct() uses the octal digits. There is a function which goes the other way: it can convert strings of digit characters into proper numbers so we can do arithmetic.
The int() function has two forms. The int(x) form converts a decimal string, x, to an integer. For example int('25') is 25. The int(x,b) form converts a string, x, in base b to an integer.
In case you don’t recall how this works, remember that in the number 1985, we’re implicitly computing 1*10**3 + 9*10**2 + 8*10 + 5. Each digit has a place value that is a power of some number. That number is the “base” for the numbers we’re writing. Python assumes that a string of digits is decimal. A string of digits which begins with 0 is in base 8. A string of digits which begins with 0x is in base 16.
Here are some examples of converting strings that are in other bases to good old base 10 numbers.
>>> int('010101',2)
21
>>> int('321',4)
57
>>> int('2ac',16)
684
In base 2, the place values are 32, 16, 8, 4, 2, 1. The string 10101 is evaluated as 1×16 + 0×8 + 1×4 + 0×2 + 1×1, which is 21.
In base 4, the place values are 16, 4 and 1. The string 321 is evaluated as 3×16 + 2×4 + 1, which is 57.
Recall from Octal and Hexadecimal – Counting by 8's or 16's that we have to press additional symbols into service to represent base 16 numbers. We use the letters a-f for the digits after 9. The place values are 256, 16, 1; the string 2ac is evaluated as 2×256 + 10×16 + 12, which is 684.
While it seems so small, it’s really important that numbers in another base are written using strings. To Python, 123 is a decimal number. '123' is a string, and could mean anything. When you say int('123',4), you’re telling Python that the string '123' should be interpreted as base 4 number, which maps to 27 in base 10 notation. On the other hand, when you say int('123'), you’re telling Python that the string '123' should be interpreted as a base 10 number, which is 123.
We've already seen the usual math operators: +, -, *, /, %, **; as well as a large collection of mathematical functions. While these do a lot, there are still more operators available to us. In this section, we'll look at operators that directly manipulate the binary representation of numbers. The inhabitants of Binome (see Binary Codes) are more comfortable with these operators than we are.
We won’t wait for the FAQ’s to explain why we even have these operators. These operators exist to provide us with a view of the real underlying processor. Consequently, they are used for some rather specialized purposes. We present them because they can help newbies get a better grip on what a computer really is.
In this section, we’ll see a lot of hexadecimal and octal numbers. This is because base 16 and base 8 are also nifty short-hand notation for lengthy base 2 numbers. We’ll look at hexadecimal and octal numbers first. Then we’ll look at the bit-manipulation operators.
There are some other operators available, but, strictly speaking, they’re not arithmetic operators, they’re logic operations. We’ll return to them in Processing Only When Necessary : The if Statement.
Precedence. We know one basic precedence rule that applies to multiplication and addition: Python does multiplication first, and addition later. The second rule is that () ‘s group things, which can change the precedence rules. 2*3+4 is 10, but 2*(3+4) is 14.
Where do these special operators fit? Are they more important than multiplication? Less important than addition? There isn’t a simple rule. Consequently, you’ll often need to use ()‘s to make sure things work out the way you want.
The unary ~ operator flops all the bits in a plain or long integer. 1’s become 0’s and 0’s become 1’s. Note that this will have unexpected consequences when the bits are interpreted as a decimal number.
>>> ~0x12345678
-305419897
>>> hex(~0x12345678)
'-0x12345679'
What makes this murky is the way Python interprets the number as having a sign. The computer hardware uses a very clever trick to handle signed numbers. First, let's visualize the unsigned, binary number line; it has 4 billion positions. At the left we have all bits set to zero. In the middle we have a value where the 2-billionth place is 1 and all other values are zero. At the right we have all bits set to one.
Now, let’s redraw the number line with positive and negative signs. Above the line, we put the signed values that Python will show us. Below the line, we put the internal codes used. The positive numbers are what we expected: 0x00000000 is the full 32-bit value for zero, 1 is 0x00000001; no surprise there. Below the 2 billion, we put 0x7fffffff. That’s the full 32-bit value for positive 2 billion (try it in Python and see.) Below the -2 billion, we put 0x80000000, the full 32-bit value for -2 billion. Below the -1, we put 0xffffffff.
This works very nicely. Let’s start with -2 (0xfffffffe). We add 1 to this and get -1 (0xffffffff), just what we want. We add 1 to that and get 0x00000000, and we have to carry the 1 into the next place value. However, there is no next place value, the 1 is discarded, and we have a good-old zero.
This technique is called 2’s complement . Consequently, the ~ operation is mathematically equivalent to adding 1 and switching the number’s sign between positive and negative.
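You can verify the "add 1 and switch the sign" rule directly at the interpreter; these lines are an added illustration:

>>> ~5
-6
>>> -(5 + 1)
-6
>>> ~(-1)
0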
This operator has the same very high precedence as the ordinary negation operation, - . Try the following to see what happens. First, what’s the value of -5+4 ? Now, add the two possible () ‘s and see which result is the same: (-5)+4 and -(5+4) . The one the produces the same result as -5+4 reveals which way Python performs the operations.
Here are some examples of special ops mixed with ordinary operations.
>>> -5+4
-1
>>> -(5+4)
-9
>>> (-5)+4
-1
The binary & operator returns 1-bits everywhere that the two input bits are both 1. Each result bit depends on one input bit and the other input bit both being 1. The following example shows all four combinations of bits that work with the & operator.
>>> 0&0, 1&0, 1&1, 0&1
(0, 0, 1, 0)
Here’s the same kind of example, combining sequences of bits. This takes a bit of conversion to base 2 to understand what’s going on.
>>> 3 & 5
1
The number 3, in base 2, is 0011 . The number 5 is 0101 . Let’s match up the bits from left to right:
  0 0 1 1
& 0 1 0 1
---------
  0 0 0 1
This is a very low-priority operator, and almost always needs parentheses when used in an expression with other operators. Here are some examples that show you how & and + combine.
>>> 3&2+3
1
>>> 3&(2+3)
1
>>> (3&2)+3
5
The binary ^ operator returns a 1-bit if one of the two inputs are 1 but not both. This is sometimes called the exclusive or operation to distinguish it from the inclusive or . Some people write “and/or” to emphasize the inclusive sense of or. They write “either-or” to emphasize the exclusive sense of or.
>>> 3^5
6
Let’s look at the individual bits
  0 0 1 1
^ 0 1 0 1
---------
  0 1 1 0
Which is the binary representation of the number 6.
This is a very low-priority operator, be sure to parenthesize your expression correctly.
The binary | operator returns a 1-bit if either of the two inputs is 1. This is sometimes called the inclusive or to distinguish it from the exclusive or operator.
>>> 3|5
7
Let’s look at the individual bits.
  0 0 1 1
| 0 1 0 1
---------
  0 1 1 1
Which is the binary representation of the number 7.
This is a very low-priority operator, and almost always needs parentheses when used in an expression with other operators. When we combine & ‘s and | ‘s we have to be sure we’ve grouped them properly. Here’s the kind of thing that you’ll sometimes see in programs that build up specific patterns of bits.
>>> 3&0x1f | 0x80 | 0x100
387
>>> hex(_)
'0x183'
Let's look at this in a little bit of detail. Our expression has two or operations; they're the lowest priority operators, so the & is evaluated first. Python does the following steps to evaluate this expression: first 3 & 0x1f yields 3; then 3 | 0x80 yields 0x83; finally 0x83 | 0x100 yields 0x183, which is 387 in decimal.
The << is the left-shift operator. The left argument is the bit pattern to be shifted, the right argument is the number of bits. This is mathematically equivalent to multiplying by a power of two, but much, much faster. Shifting left 3 positions, for example, multiplies the number by 8.
This operator is higher priority than & , ^ and | . Be sure to use parenthesis appropriately.
>>> 0xA << 2
40
0xA is hexadecimal; the bits are 1-0-1-0. This is 10 in decimal. When we shift this two bits to the left, it’s like multiplying by 4. We get bits of 1-0-1-0-0-0. This is 40 in decimal.
The >> is the right-shift operator. The left argument is the bit pattern to be shifted, the right argument is the number of bits. Python always behaves as though it is running on a 2’s complement computer. The left-most bit is always the sign bit, so sign bits are shifted in. This is mathematically equivalent to dividing by a power of two, but much, much faster. Shifting right 4 positions, for example, divides the number by 16.
This operator is higher priority than &, ^ and | . Be sure to use parenthesis appropriately.
>>> 80 >> 3
10
The number 80, with bits of 1-0-1-0-0-0-0, shifted right 3 bits, yields bits of 1-0-1-0, which is 10 in decimal.
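One way to convince yourself of the multiply-and-divide equivalence is to compare each shift with the corresponding power of two at the interpreter (an added illustration; // is floor division):

>>> 10 << 3, 10 * 2 ** 3
(80, 80)
>>> 80 >> 3, 80 // 2 ** 3
(10, 10)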
Debugging Special Operators
The most common problems with the bit-fiddling operators is confusion about the relative priority of the operations. For conventional arithmetic operators, ** is the highest priority, * and / are lower priority and + and - are the lowest priority. However, among &, ^ and |, << and >> it isn’t obvious what the priorities are or should be.
When in doubt, add parenthesis to force the order you want.
One common color-coding scheme uses three distinct values for the level of red, green and blue that make up each picture element (pixel) in an image. If we allow 256 different levels of red, green and blue, we can pack a single pixel into 24 bits. We can then cram 4 pixels into 3 plain-integer values. How do we unwind this packed data?
We’ll have to use our bit-fiddling operators to unwind this compressed data into a form we can process. First, we’ll look at getting the red, green and blue values out of a single plain integer.
We can code 256 levels in 8 bits, which is two hexadecimal digits. This gives us a red, green and blue levels from 0x00 to 0xFF (0 to 255 decimal). We can string the red, green and blue together to make a larger composite number like 0x0c00a2 for a very bluish purple.
What is 0x0c00a2 & 0xff? Is this the blue value of 0xa2? Does it help to do hex( 0x0c00a2 & 0xff)?
What is (0x0c00a2 & 0xff00) >> 8? hex( (0x0c00a2 & 0xff00) >> 8 )?
What is (0x0c00a2 & 0xff0000) >> 16? hex( (0x0c00a2 & 0xff0000) >> 16 )?
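Try the three questions above on your own first. Afterwards, here is one possible way to pull the three levels apart, written as an added sketch with an illustrative helper name:

def unpack_rgb(pixel):
    """Split a 24-bit pixel into its red, green and blue levels."""
    blue = pixel & 0xff
    green = (pixel & 0xff00) >> 8
    red = (pixel & 0xff0000) >> 16
    return red, green, blue

print(unpack_rgb(0x0c00a2))   # (12, 0, 162): red 0x0c, green 0x00, blue 0xa2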
How can we break a number down into different digits?
What is 1956 / 1000? 1956 % 1000?
What is 956 / 100? 956 % 100?
What is 56 / 10? 56 % 10?
What happens if we do this procedure with 1956., 956. and 56. instead of 1956, 956 and 56? Can we use the // operator to make this work out correctly?
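Here is one way the digit-splitting procedure looks in code (an added sketch, not part of the original text). Because // is floor division, the quotients stay whole-valued even if you start from floats like 1956., which bears on the last question above.

n = 1956
thousands, rest = n // 1000, n % 1000
hundreds, rest = rest // 100, rest % 100
tens, ones = rest // 10, rest % 10
print(thousands)   # 1
print(hundreds)    # 9
print(tens)        # 5
print(ones)        # 6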
Word diagrams in teaching classical conditioning. An essential component of concept teaching is to present students with a clear specification of the structure of a concept, which consists of a statement of the critical features of a concept and how those features are related. Clear specifications of concept structure have been empirically validated in research (a) showing the value of concept definitions (Anderson & Kulhavy, 1972; Johnson & Stratton, 1966; Miller & Weaver, 1976) and (b) showing the value of analysis statements, explanations of how the concept structure is fully represented in examples and incompletely represented in nonexamples (Grant, McAvoy, & Keenan, 1982; Tennyson, Steve, & Boutwell, 1975). Whereas concept definitions describe the concept in abstract terms, analysis statements relate the words in a concept definition to concrete examples of a concept. For example, a definition of a square is a four-sided figure in which the equal sides intersect one another at right angles. In this context, the analysis statements would emphasize to students how the verbal descriptors in this concept definition (e.g., "four-sided," "equal sides," "right angles") are present in specific examples of squares and are absent in nonexamples of squares (e.g., rectangles, nonsquare parallelograms). Applications of concept learning principles to instruction in behavior-analysis concepts have typically relied on standard text to present concept definitions and analysis statements to students (Grant, 1996; Miller, 1980; Peterson, 1978; Reese & Woolfenden, 1973). In the present experiment, the concept definition and the analysis statements were supplemented by diagrams. The purpose of the study was to determine if diagrams that represent concept definitions and examples improve student learning.
Much of the richness of the discipline of behavior analysis is in the study of complex behavior-environment relations or contingencies. This richness, however, poses one of the major difficulties in teaching behavior-analysis because complex relations between behavior and the environment are difficult to describe using prose definitions. Recognizing this problem, several observers have proposed diagramming systems for clarifying contingency relations (Goldwater & Acker, 1995; Hummel, Kaeck, Bowes, & Rittenhouse, 1994; Malott, 1992; Mechner, 1959; Snapper, Kadden, & Inglis, 1982). Relatedly, educators have also advocated using diagrams to represent complex behavior-environment relations in order to help students learn these concepts (Goldwater & Acker, 1995; Mattaini, 1995; Michael & Schafer, 1995).
The advocates of diagrams to teach behavior-analysis concepts have advanced persuasive arguments in favor of using diagrams that have considerable intuitive appeal. However, research into the effectiveness of diagrams in communication and teaching has been lacking. The present experiment examined the effectiveness of a simple word diagram in teaching classical conditioning. Basic word diagrams have long been used to illustrate classical conditioning in introductory psychology texts and texts in the psychology of learning (e.g., Keller & Schoenfeld, 1950, p. 19, p. 31), even though the use of these diagrams has never been empirically validated. In an informal survey of behavior-analysis textbooks, Goldwater and Acker (1995) expressed concern that recent textbooks have omitted diagrams.
In the present study, students used diagrams as part of a self-instructional concept-teaching program. The diagrams supplemented ordinary text as a means of showing how the definition applied to specific examples of classical conditioning.
Sixty university students from introductory psychology participated to fulfill a course requirement. Their grades were independent of performance in the study. All participants complied with the experimental procedures.
When students arrived at the research site they obtained a materials package that included written directions. Initially, students read a 500-word introductory lesson on classical conditioning concepts. This lesson defined and exemplified unconditioned reflexes, conditioned reflexes, unconditioned stimuli (US), unconditioned responses (UR), neutral stimuli (NS), conditioned stimuli (CS), and conditioned responses (CR). The lesson also compared and contrasted classical conditioning with both pseudoconditioning and operant conditioning.
After reading the introductory lesson, the students read a conceptual exercise consisting of examples and nonexamples of classical conditioning. Each of the examples was based on a published account of human classical conditioning. One of the nonexamples illustrated operant conditioning and one illustrated pseudoconditioning. Beneath each item was an "analysis" that identified the item as an example or a nonexample. For examples, the analysis identified the US, UR, NS, CS, and CR. For nonexamples, the analysis explained why the item lacked the critical features of classical conditioning and when applicable identified the item as an instance of operant conditioning or pseudoconditioning.
After completing the conceptual exercise, students took a posttest consisting of six novel (i.e., previously unseen) items, three examples of classical conditioning (Ellson, 1941; Razran, 1949; Vaitl, Gruppe, & Kimmel, 1985) and three nonexamples. One nonexample was an instance of operant conditioning, one was an instance of pseudoconditioning, and one was an instance of neither operant conditioning nor pseudoconditioning.
For each item, each student was required to classify the item as an example or a nonexample and to analyze the presence or absence of the critical features (i.e., US, UR, NS, CS, and CR). For the nonexamples each student was also asked to identify them as examples of operant conditioning or of pseudoconditioning. The posttest instructions did not specify whether the student should draw diagrams in answering the items.
Experimental Treatments and Design
There were two experimental treatments: Matched versus unmatched examples/nonexamples and the use of diagrams. In the matched condition, the conceptual exercise consisted of 14 example/nonexample pairs presented side by side. The examples (Bierley, McSweeney, & Vannieuwkerk, 1985; Cannon & Baker, 1981; Cannon, Best, Batson, & Feldman, 1983; Clarke & Hayes, 1984; Dekker, Pelser, & Groen, 1964; Doerr, 1981; Efron, 1964; Hayduk, 1980; Kasatkin & Levikova, 1932; McConaghy, 1970; Quarti & Renaud, 1964; Switzer, 1933; Watson & Rayner, 1920; Wolpe, 1958) illustrated classical conditioning. The matched nonexamples were modified versions of the examples changed so that they did not illustrate classical conditioning. Of the 14 nonexamples in the matched condition, 6 were examples of pseudoconditioning, 3 were examples of operant conditioning, and 5 were examples of neither pseudoconditioning nor operant conditioning. With minor modifications, the examples and nonexamples of classical conditioning used in the present st udy are included in Grant and Evans' (1994, pp. 412-417, pp. 506-512) self-instructional exercise over classical conditioning.
In the unmatched condition, the conceptual exercise consisted of 14 items, 10 examples and 4 nonexamples. The unmatched exercise was constructed by omitting the matched example or nonexample, as appropriate. One of the nonexamples illustrated operant conditioning and one illustrated pseudoconditioning. The design of the unmatched condition followed Miller and Weaver's (1976) concept-teaching recommendations, which designate that 30% of the items should be nonexamples. The comparison of the matched and unmatched treatments was included because the matched treatment permitted students to see 40% more diagrams than the unmatched treatment, enhancing any possible effects of diagrams.
In the diagram condition, the introductory lesson and each example in the conceptual exercise contained a diagram representing the classical conditioning. In the nondiagram condition, the diagram was omitted. Figure 1 represents the diagram used in the introductory lesson. The remaining diagrams were similar to Figure 1, differing only in the specific stimuli and responses contained in the example. As illustrated in Figure 1, the lesson emphasized the predictiveness of the CS in signaling the US as being the key factor in conditioning rather than temporal contiguity between the CS and the US.
Students were randomly assigned to one of the four conditions (15 participants per condition), formed by crossing lesson format (matched examples and nonexamples versus unmatched examples and nonexamples) with diagrams (present versus absent).
Two independent raters scored all the posttests. For the classification task, the raters scored each item as correct or incorrect on the basis of whether the student had properly identified examples and nonexamples. For analysis-task examples, the raters individually scored the correctness of the student's identification of the US, UR, NS, CS, and CR. For analysis-task nonexamples, the raters scored the correctness of the student's identification of nonexamples as instances of operant conditioning, of pseudoconditioning, or of neither operant conditioning nor pseudoconditioning. If the first two raters disagreed, a third rater cast the deciding vote.
Interrater reliability was calculated by dividing the number of agreements by the number of agreements plus the number of disagreements. On the classification task, reliability of posttest grading was .99. On the analysis task, the reliability coefficient was 1.00 for nonexamples. On the analysis task, the mean reliability coefficient for examples ranged from .95 to .97 with an overall mean of .96
Results and Discussion
A 2 x 2 x 2 x 2 (diagrams/nondiagrams x matched/unmatched format x examples/nonexamples x classification/analysis) repeated-measures analysis of variance was conducted on the posttest data. The proportion of responses correct per item served as the dependent measure for purposes of data analysis. On the posttest, the students answered nonexample items (M = .76 correct responses per item) correctly more often than example items (M = .65 correct responses per item), F(1, 56) = 6.90, p < .05. In addition, the students correctly responded more often to the classification task (M = .825) than to the analysis task (M = .59), F(1, 56) = 112.56, p <.01.
The major significant result was the diagrams/nondiagrams x examples/nonexamples x classification/analysis interaction, F(1, 56) = 8.86, p < .01. Newman-Keuls tests indicated that students using diagrams correctly classified nonexamples (M = .93) more frequently than did students not using diagrams (M = .83), p < .01. In addition, students using diagrams correctly analyzed examples (M = .61) more frequently than did students not using diagrams (M = .49), p < .01.
Of the 30 students in the diagram group, 15 (drawers) happened to draw their own diagrams in answering the posttest example items and the other 15 (nondrawers) did not. In order to examine differences between these two groups, a 2 x 2 x 2 (drawers/nondrawers x examples/nonexamples x classification/analysis) repeated-measures analysis of variance was conducted. Students correctly answered classification-task items (M = .84) more often than they correctly answered the analysis-task items (M = .62), F(1,28) = 4.25, p < .05.
Drawers answered posttest items (M = .80) correctly more often than did nondrawers (M = .66), F(1, 28) = 6.30, p < .05. Also significant in this analysis was the drawers/nondrawers x examples/nonexamples x classification/analysis interaction, F(1, 28) = 8.93, p < .01. Newman-Keuls tests indicated that drawers correctly classified examples (M = .82) more often than did nondrawers (M = .69), p < .05. Drawers classified nonexamples correctly (M = 1.00) more often than did nondrawers (M = .87), p < .01. Finally, drawers correctly analyzed examples (M = .75) more often than did nondrawers (M = .47), p < .01.
To summarize, the major finding of this study was that diagrams of classical conditioning improved students' learning of that concept. On the posttest, students given diagrams (a) more often correctly classified novel nonexamples of classical conditioning and (b) more often correctly. analyzed novel examples of classical conditioning by being able to correctly identify the US, UR, NS, CS, and CR.
The beneficial effects of diagrams provide general support for advocates of the use of diagrams to teach behavior-analysis concepts (Goldwater & Acker, 1995; Malott, 1992; Mattaini, 1995; Michael & Schafer, 1995). Although there are differences in the proposed systems of diagrammatic representation, all the systems share the representation of temporal sequences of events in spatial dimensions on the printed page. Students in the present study benefited from relatively simple diagrams that illustrated, in spatial dimensions, the temporal sequences among the US, UR, CS and CR. Behavior-analysis procedures generally involve temporal sequences of events (e.g., the effects of antecedents and consequences on responses) and it may be that diagrams make these sequences easier to understand and to learn by representing them in spatial dimensions. In comparison to the systems for diagramming behavior analysis concepts (Goldwater & Acker, 1995; Hummel et al., 1994; Malott, 1992; Mechner, 1959; Snapper et al., 1982), the diagrams the students used in the present study were simple ones that required no specific knowledge of diagramming symbols. Students may stand to benefit even more from learning more sophisticated diagramming systems.
The diagrams used in the present study provide support for the general proposition that diagrams illustrating concept structure are a useful adjunct to concept teaching methods. Waddill, McDaniel, and Einstein (1988) suggest that research concerning the instructional effectiveness of visual aids should identify the specific contexts and conditions that are appropriate for presenting visual aids. The present study suggests that diagrams are appropriate at the definitional and example analysis stages of concept teaching. This finding is especially important because research and advice in concept teaching (Grant, 1986; Merrill, Tennyson, & Posey, 1992; Tennyson & Cocchiarella, 1986) has generally emphasized only standard prose concept definitions. It may be that word diagrams should often augment the use of standard prose in teaching concepts.
The beneficial effect of diagrams on the example-analysis task indicated that diagrammatic representation was particularly effective in teaching students to identify the relationships among the stimuli and responses in classical conditioning. The finding that diagrams improved nonexample classification indicates that diagrams were also particularly effective in reducing errors of overgeneralization or overextension, in which nonexamples of a concept are classified as examples.
The diagrams were of no help in the nonexample-analysis task, in which the students were required to specify whether nonexamples were instances of operant conditioning or pseudoconditioning. Because the diagrams did not represent these concepts, the diagrams could not be expected to improve student performance in analyzing nonexamples. However, the diagrams were also of no help in classifying examples, and they should have been of assistance if the diagrams had acted to improve the students' abilities to identify the components of classical conditioning. Students in the diagram group who drew diagrams in answering the posttest items were better at example classification than were diagram group students who did not draw diagrams. This result suggests that diagrams improved both example and nonexample classification for those students who actually made use of the diagrams. Although suggestive of additional benefits of diagrams, the correlational nature of the comparisons between the drawers and nondrawers makes it difficult to come to any firm conclusions. For example, the drawers could have simply been more motivated, which led them to draw diagrams and do better on the posttest, without any necessary functional relationship between drawing and improved test performance.
The effectiveness of diagrams leads naturally to considerations concerning how teachers can and should implement diagrams in their written instructional materials, lectures, and web-based materials. Although this issue is beyond the scope of the present data, some rough guidelines have emerged from the author's work. First, because diagrams tend to focus the student's attention, diagrams may be more useful for teaching relatively difficult concepts, such as classical conditioning, that students need to spend more time on than for simpler concepts. Second, many diagrams illustrate temporal sequences of events, and these types of diagrams lend themselves well to use in overheads in lectures because the instructor is able to point out subsections of the diagram and explain the component processes the diagram illustrates. This kind of step-by-step highlighting of parts of diagrams is also increasingly possible through computer-based and web-based instruction. Diagrams are particularly well suited to web-based instruction because they are relatively easy to read on a computer screen, unlike long passages of text. Third, as suggested by the comparisons of the drawers and nondrawers in the current study, there may well be important benefits in teaching and encouraging students to draw diagrams. Diagramming methods may enable students to organize and recall concepts and principles more effectively than traditional methods such as reading, rereading, and reciting text.
ANDERSON, R. C., & KULHAVY, R. W. (1972). Learning concepts from definitions. American Educational Research Journal, 9, 385-390.
BIERLEY, C., MCSWEENEY, F. K., & VANNIEUWKERK, R. (1985). Classical conditioning of preferences for stimuli. Journal of Consumer Research, 12, 316-323.
CANNON, D. S., & BAKER, T. B. (1981). Emetic and electric shock alcohol aversion therapy: Assessment of conditioning. Journal of Consulting and Clinical Psychology, 49, 20-33.
CANNON, D. S., BEST, M. R., BATSON, J. D., & FELDMAN, M. (1983). Taste familiarity and apomorphine-induced taste aversions in humans. Behavior Research and Therapy, 21, 669-673.
CLARKE, J. C., & HAYES, K. (1984). Covert sensitization, stimulus relevance and the equipotentiality premise. Behavior Research and Therapy, 22, 451-454.
DEKKER, E., PELSER, H. E., & GROEN, J. (1964). Conditioning as a cause of asthmatic attacks; a laboratory study. In C. M. Franks (Ed.), Conditioning techniques in clinical practice and research (pp. 116-131). New York: Springer.
DOERR, H. O. (1981). Cognitive derivation of generalization stimuli: Separation of components. Bulletin of the Psychonomic Society, 17, 73-75.
EFRON, R. (1964). The conditioned inhibition of uncinate fits. In C. M. Franks (Ed.), Conditioning techniques in clinical practice and research (pp. 132-143). New York: Springer.
ELLSON, D. G. (1941). Hallucinations produced by sensory conditioning. Journal of Experimental Psychology, 28, 1-20.
GOLDWATER, B. C., & ACKER, L. E. (1995). A descriptive notation system for contingency diagramming in behavior analysis. The Behavior Analyst, 18, 113-121.
GRANT, L. (1986). Categorizing and concept learning. In H. W. Reese & L. J. Parrott (Eds.), Behavior science: Philosophical, methodological, and empirical advances (pp. 139-162). Hillsdale, NJ: Lawrence Erlbaum.
GRANT, L. (1996). Positive reinforcement: A self-instructional exercise [WWW Document]. URL http://server.bmod.athabascau.ca/html/prtut/reinpair.htm
GRANT, L., & EVANS, A. E. (1994). Principles of behavior analysis. New York: Harper-Collins.
GRANT, L., MCAVOY, R., & KEENAN, J. B. (1982). Prompting and feedback variables in concept programming. Teaching of Psychology, 9, 173-177.
HAYDUK, A. W. (1980). Increasing hand efficiency at cold temperatures by training hand vasodilation with a classical conditioning-biofeedback overlap design. Biofeedback and Self-Regulation, 5, 307-326.
HUMMEL, J. H., KAECK, D. J., BOWES, R. L., & RITTENHOUSE, R. D. (1994). Diagramming operant processes. The ABA Newsletter, 17, 4-5.
JOHNSON, D. M., & STRATTON, R. P. (1966). Evaluation of five methods of teaching concepts. Journal of Educational Psychology, 57, 48-53.
KASATKIN, N. I., & LEVIKOVA, A. M. (1932). On the development of early conditioned reflexes and differentiations of auditory stimuli in infants. Journal of Experimental Psychology, 18, 1-19.
KELLER, F. S., & SCHOENFELD, W. S. (1950). Principles of psychology. New York: Appleton-Century-Crofts.
MALOTT, R.W. (1992). Saving the world with contingency diagramming. The ABA Newsletter, 15, 45.
MATTAINI, M. A. (1995). Contingency diagrams as teaching tools. The Behavior Analyst, 18, 93-98.
MCCONAGHY, N. (1970). Penile response conditioning and its relationship to aversion therapy in homosexuals. Behavior Therapy, 1, 213-221.
MECHNER, F. (1959). A notational system for description of behavioral processes. Journal of the Experimental Analysis of Behavior, 2, 133-150.
MERRILL, M. D., TENNYSON, R. D., & POSEY, L. O. (1992). Teaching concepts: An instructional design guide. Englewood Cliffs, NJ: Educational Technology Publications.
MICHAEL, J., & SHAFER, E. (1995). State notation for teaching about behavioral procedures. The Behavior Analyst, 18, 123-140.
MILLER, L. K. (1980). Principles of everyday behavior analysis (2nd ed.). Monterey, CA: Brooks/Cole.
MILLER, L. K., & WEAVER, F. H. (1976). A behavioral technology for producing concept formation in university students. Journal of Applied Behavior Analysis, 9, 289-300.
PETERSON, N. (1978). An introduction to verbal behavior. Grand Rapids, MI: Behavior Associates.
QUARTI, C., & RENAUD, J. (1964). A new treatment of constipation by conditioning: A preliminary report. In C. M. Franks (Ed.), Conditioning techniques in clinical practice and research (pp. 219-227). New York: Springer.
RAZRAN, G. (1949). Sentential and propositional generalization of salivary conditioning to verbal stimuli. Science, 109, 447-448.
REESE, D. G., & WOOLFENDEN, R. M. (1973). Behavior analysis of everyday life: A program for the generalization of behavioral concepts. Kalamazoo, MI: Behaviordelia.
SNAPPER, A. G., KADDEN, R. M., & INGLIS, G. B. (1982). State notation of behavioral procedures. Behavior Research Methods and Instrumentation, 14, 329-342.
SWITZER, S. A. (1933). Disinhibition of the conditioned galvanic skin response. Journal of General Psychology, 9, 77-100.
TENNYSON, R. D., & COCCHIARELLA, M. J. (1986). An empirically based instructional design theory for teaching concepts. Review of Educational Research, 56, 40-71.
TENNYSON, R. D., STEVE, M. W., & BOUTWELL, R. E. (1975). Instance sequence and analysis of instance attribute representation in concept acquisition. Journal of Educational Psychology, 67, 821-827.
VAITL, D., GRUPPE, H., & KIMMEL, H. D. (1985). Contextual stimulus control of conditional vasomotor and electrodermal reactions to angry and friendly faces. The Pavlovian Journal of Biological Science, 20, 124-131.
WADDILL, P. J., MCDANIEL, M. A., & EINSTEIN, G. O. (1988). Illustrations as adjuncts to prose: A text-appropriate processing approach. Journal of Educational Psychology, 80, 457-464.
WATSON, J. B., & RAYNER, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1-20.
WOLPE, J. (1958). Psychotherapy by reciprocal inhibition. Stanford, CA: Stanford University Press.
Please address requests for reprints and other correspondence about this article to Lyle K. Grant, Psychology Centre, Athabasca University, Athabasca, Alberta, Canada T9S 3A3.
The Comprehensible Philosophy Dictionary
© 2010-2013 James Wallace Gray (Last updated 1/13/2013)
This dictionary is an attempt to comprehensively define all of the most important philosophy terms in a way that could be understood by anyone without requiring an extensive philosophical education. Examples are often discussed to help make the meaning of terms clear.
This list includes critical thinking concepts, and many of those should be understood by everyone to improve rational thought. Many of these concepts are important distinctions made by philosophers to help us attain nuanced thoughts. For example, David Hume introduced us to the concept of “matters of fact” and “relations of ideas.” It will often be said that a term can be contrasted with another when doing so can help us make certain distinctions.
Sometimes a term can be best understood in the context of other terms. They are related. For example, understanding “formal logic” can help us better understand “logical connectives.”
Note that multiple definitions are often given for a term. In that case, the definitions are separated by numbers, and we should take care not to confuse the various definitions a term can have. For example, philosophers use the word ‘argument’ to refer to an attempt at rational persuasion, but other people use the word to refer to hostile disagreement. See “ambiguity” and “equivocation” for more information.
a fortiori – Latin for “from the stronger thing.” A conclusion is true a fortiori if a premise makes it trivially true. For example, “All men are mortal, a fortiori, Socrates is mortal.”
a posteriori – Latin for “from the later.” A posteriori propositions or beliefs are justified entirely by observation. An example of an a posteriori proposition is “human beings are mammals.” “A posteriori” is the opposite of “a priori.”
a priori – Latin for “from the earlier.” A priori propositions or beliefs are justified (at least in part) by something other than observation. Many philosophers agree that propositions that are true by definition have an a priori justification. An example of an a priori proposition is “all bachelors are unmarried.” “A priori” is the opposite of “a posteriori.”
A-type proposition – A proposition with the form “all a are b.” For example, “all cats are animals.”
abduction – A form of reasoning that consists of trying to know what is likely true by examining the possible explanations for various phenomena. Abductive arguments are not necessarily deductive arguments, but they provide some support for the conclusion. The “argument to the best explanation” is an example of abductive reasoning. For example, we can often infer that a neighbor is probably home when we see a light turn on at her house because it’s often the best explanation.
abductive reasoning – A synonym for “abduction.”
The Absolute – A term for “God” or “the Good.”
absolute truth – Something true for all time no matter what situation is involved. A plausible example is the law of non-contradiction. (Statements can’t be true and false at the same time.)
abstract entities – Things that are not physical objects or states of mind. Instead, they exist outside space and time. For example, there are mathematical realists who think that numbers are abstract entities that exist apart from our opinions about them, and there are factual statements concerning how numbers relate. See “Platonic Forms” for more information.
abstraction – To conceptually separate various elements of concrete reality. For example, to identify an essential characteristic of human beings as the ability to reason would require us to abstract away the various elements of human beings other than what we describe as “the ability to be rational.”
abstractism – The view that something is necessary insofar as it’s true of every consistent set of statements, and something is possible insofar as it’s true in at least one consistent set of statements. It’s necessary that oxygen is O2 insofar as it’s true that oxygen is O2 in every consistent set of statements, and it’s possible for a person to jump over a small rock insofar as at least one consistent set of statements has a person jump over a small rock. Abstractism could be considered to advocate the existence of “abstract entities” insofar as the existence of a consistent set of statements could be considered to be factual as an abstract entity.
absurdism – The view that it is absurd for people to try to find the meaning of life because it’s impossible to do so.
absurdity – (1) The property of contradicting our knowledge or of being logically impossible. For example, it is absurd to think knowledge is impossible insofar as we know that “1+1=2.” See “reductio ad absurdum” for more information. (2) “Absurd” is sometimes equated with “counterintuitive.” (3) An irreconcilable interest people have, or a search for knowledge that can’t be completed. For example, it is sometimes said that it’s absurd for people to search for an ultimate foundation for value (or the meaning of life) even though we can never find an ultimate foundation for value. (4) In ordinary language, “absurdity” often means “utterly strange.”
accessibility – (1) The relevant domain used to determine if something is necessary or possible. It is thought that something is necessary if it “has to be true” for all of the relevant domain, and something is possible if it is true of at least one thing within the relevant domain. For example, some philosophers believe that it’s possible for people to exist because they exist in at least one accessible possible world—the one we live in. See “accessible world,” “possible world,” “truth conditions,” and “modality” for more information. (2) In ordinary language, “accessibility” refers to the ability to have contact with something. For example, people in jail have access to food and water; and citizens of the United States have access to move to any city located in the United States.
accessible world – A world that is relevant to our world when we want to determine if something is necessary or possible. For example, we could say that something is necessary if it’s true of all accessible worlds. Perhaps it’s necessary that contradictions are impossible because it’s true of all accessible worlds. An accessible world is not necessarily a world we can actually go to. It could exist outside our universe or only exist conceptually. See “possible world,” “truth conditions,” and “modality” for more information.
accidental characteristic – A characteristic that could be changed without changing what something is. For example, an accidental characteristic of Socrates was his pug nose—he would still be a person (and Socrates) without having a pug nose. “Accidental characteristics” are the opposite of “essential characteristics.”
accidentalism – The metaphysical view that not every event has a cause and that chance or randomness is a factor that determines what happens in the universe. Many philosophers think that quantum mechanics is evidence of accidentalism. Accidentalism requires that we reject “determinism.”
acosmism – The view that the universe is illusory and god is the ultimate reality.
act utilitarianism – A consequentialist theory that claims that we should strive to maximize goodness (positive value) and minimize harm (negative value) by considering the results of our actions. The situation is very important to knowing what we should do. For example, it is generally wrong to hurt people, but it might sometimes be necessary or “morally right” to hurt others to protect ourselves. “Act utilitarianism” can be contrasted to “rule utilitarianism.”
ad hoc – A Latin phrase that literally means “for this.” It refers to solutions that are non-generalizable and only used for one situation. For example, ad hoc hypotheses are designed to save hypotheses and theories from being falsified. Some scientists might think dark energy is an ad hoc hypothesis because it is used to explain nothing other than why the universe is expanding at an increasing rate, which contradicts our understanding of physics.
ad hominem – A Latin phrase that literally means “to the person.” It refers to insults, and usually to fallacious forms of reasoning that make use of insults or disparaging remarks. For example, we could respond to a doctor’s claim that “smoking is unhealthy” by saying the doctor who made the argument drinks too much alcohol.
ad infinitum – Latin for “to infinity” or “forevermore.” It can also be translated as “on and on forever.”
addition – A rule of inference that states that we can use “a” as a premise to validly conclude “a and/or b.” For example, “Dogs are mammals. Therefore, dogs are mammals and/or lizards.”
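The validity of this rule can be checked by brute force over every truth assignment, as in the following minimal Python sketch (the function name is illustrative, not standard):

from itertools import product

def addition_is_valid():
    # "a; therefore, a and/or b" is valid if no assignment makes the
    # premise true while the conclusion is false.
    for a, b in product([True, False], repeat=2):
        premise = a
        conclusion = a or b  # "and/or" is the inclusive or
        if premise and not conclusion:
            return False  # a counterexample would show up here
    return True

print(addition_is_valid())  # True: the inference form is valid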
æon – From the Greek aiōn, meaning “life,” “age,” or “eternity.” Plato used this term to refer to the eternal world of the Forms.
aesthetics – The philosophical study of beauty and art. For example, some philosophers argue that beauty is an objective property of things, but others believe that it’s subjective and might say, “Beauty is in the eye of the beholder.”
affirmative categorical proposition – A synonym for “positive categorical proposition.”
affirmative conclusion – A categorical proposition used as a conclusion with the form “all a are b” or “some a are b.” For example, “some animals are mammals.”
affirmative premise – A categorical proposition used as a premise that has form “all a are b” or “some a are b.” For example, “some mammals are dogs.”
affirming the disjunct – A fallacy committed by an argument that requires us to mistakenly assume two propositions to be mutually exclusive and reject one proposition just because the other is true. The argument form of an argument that commits this fallacy is “Either a or b. a. Therefore, not-b.” For example, consider the following argument—“Either dogs are mammals or animals. Dogs are mammals. Therefore, dogs are not animals.”
affirming the consequent – An invalid argument with the form “if a, then b; b; therefore, a.” For example, “If all dogs are reptiles, then all dogs are animals. All dogs are animals. Therefore, all dogs are reptiles.”
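Both of the invalid forms above (affirming the disjunct and affirming the consequent) can be shown invalid by searching for a truth-table row where every premise is true and the conclusion is false. A minimal Python sketch with invented helper names:

from itertools import product

def counterexamples(premises, conclusion):
    # Return the assignments to (a, b) where every premise is true
    # but the conclusion is false; any such row shows the form is invalid.
    return [(a, b)
            for a, b in product([True, False], repeat=2)
            if all(p(a, b) for p in premises) and not conclusion(a, b)]

# Affirming the consequent: "if a, then b; b; therefore, a."
# The conditional "if a, then b" is encoded as "(not a) or b".
print(counterexamples([lambda a, b: (not a) or b, lambda a, b: b],
                      lambda a, b: a))       # [(False, True)] -- invalid

# Affirming the disjunct: "either a or b; a; therefore, not-b."
print(counterexamples([lambda a, b: a or b, lambda a, b: a],
                      lambda a, b: not b))   # [(True, True)] -- invalid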
agency – The ability of a fictional or real person to act in the world.
agent – A fictional or real person who has agency (the ability to act in the world).
agent causation – A type of causation, produced by the choices people make, that’s neither determined nor random. Agent causation occurs from an action caused by a person that’s not caused by other events or states of affairs. For example, it’s not caused by the reasoning of the agent. See “prime mover” and “libertarian free will” for more information.
agent-neutral reasons – A reason for action that is not dependent on the person who will make a decision. For example, everyone could be said to have a reason to find a cure for cancer because it would save lives. The assumption is that the reason to find a cure for cancer does not depend on the unique motivations or duties of an individual (and perhaps saving lives is good for its own sake). Classical utilitarianism is an agent-neutral ethical theory because it claims that all ethical reasons to act concern whatever has the most valuable consequences. “Agent-neutral reasons” are often contrasted with “agent-relative reasons.”
agent-relative reason – A reason for action that is dependent on the person involved. For example, a person has a reason to give money to a friend in need because she cares for the friend. Ethical egoism is an agent-relative theory that claims that the only reasons to act are agent-relative. “Agent-relative reasons” are often contrasted with “agent-neutral reasons.”
agnosticism – The view that we can’t (currently) know if gods exist or not.
Agrippa’s trilemma – A synonym for “Münchhausen trilemma.”
akrasia – Greek for “lacking power” and often translated as “weakness of will.”
alethic – From the Greek alētheia, meaning “truth.”
alethic logic – A formal logical system with modal operators for “possible” (◊) and “necessary” (□).
alethic modality – The distinction between “possibility” and “necessity” used within formal logical systems.
The All – A term for “the absolute,” “God,” or “the Good.”
algorithm – A step-by-step procedure.
alternate possibilities – Events that could happen in the future or could have happened in the past instead of what actually happened. Alternate possibilities are often mentioned to refer to the ability to do otherwise. For example, some people think free will and moral responsibility require alternate possibilities. Let’s assume that’s the case. If Elizabeth is morally responsible for killing George, then she had an alternate possibility of not killing George. If she was forced to kill George, then she isn’t morally responsible for doing it. Alternate possibilities are often thought to be incompatible with determinism.
altruism – Actions that benefit others without an overriding concern for self-interest. Altruism does not require self-sacrifice but altruistic acts do require that one does not expect to attain benefits in proportion to (or greater than) those given to others.
ambiguity – Statements, phrases, or words that can have more than one meaning. For example, the word ‘argument’ can refer to an unpleasant exchange of words or as a series of statements meant to give us a reason to believe a conclusion. “Ambiguity” can be contrasted with “vagueness.”
amor fati – Latin for “love of fate.” To value everything that happens and see it as good. Suffering and death could be seen as being for a greater good, or at least a positive attitude might help one benefit from one’s own suffering. For example, Friedrich Nietzsche’s aphorism, “what doesn’t kill us makes us stronger,” refers to the view that a positive attitude can help us benefit from our suffering.
amoral – Lacking an interest in morality. For example, an amoral person doesn’t care about what’s morally right or wrong, and a person acts amorally when she doesn’t care about morality at that moment in time. Many people think that babies and nonhuman animals act amorally because they have no concept of right or wrong. “Amoral” can be contrasted with “nonmoral.”
amphibology – A synonym for “amphiboly.”
amphiboly – A fallacious argument that requires an ambiguity based on the grammar of a statement. For example, “men often marry women, but they aren’t always ready for marriage.” In this case the word ‘they’ could refer to the men, the women, or both. An example of the amphiboly fallacy is the following argument—“If people feed dogs chocolate, then they will get hurt. You don’t want to get hurt. Therefore, you shouldn’t feed dogs chocolate.” In this case feeding dogs chocolate actually hurts dogs, not people. The argument requires us to falsely think that people get hurt by feeding dogs chocolate.
analogical reasoning – Reasoning using analogies that can be explicitly described as an “argument from analogy.”
analogy – (1) A comparison that draws similarities between two different things. For example, punching and kicking people are analogous in the sense that they are both generally wrong for the same reason (i.e. they are performed to hurt people). (2) An “argument from analogy.”
analytic – Analytic propositions or beliefs are true because of their meaning. An example of an analytic proposition is “all bachelors are unmarried.” “Analytic” is the opposite of “synthetic.”
analytic philosophy – A domain of philosophy that’s primarily concerned with justifying beliefs as much as possible with a great deal of clarity and precision. However, the issues analytic philosophers deal with generally involve more speculation and less certainty than the issues natural scientists tend to deal with. “Analytic philosophy” is often contrasted with “continental philosophy.”
anarchism – The view that we should eliminate states, governments, and/or political rulers.
anecdotal evidence – (1) To attempt to persuade people to agree to a conclusion based on the experiences of an individual or even many individuals. Anecdotal evidence is often a fallacious type of argumentation. For example, many individuals could have experiences of winning sports games while wearing a four-leaf clover, but that doesn’t prove that four-leaf clovers actually give sports players luck. No fallacy is committed when the experiences of people are sufficient to give evidence for a causal relation and mere correlation can be ruled out. Fallacious appeals to anecdotal evidence could be considered to be a form of the “hasty generalization” fallacy. Also relevant is the “cum hoc ergo propter hoc” fallacy. (2) The experiences of a person that could be considered to be a reason to agree with some belief. For example, our experience of not getting cavities and brushing our teeth every day is at least superficial evidence that brushing our teeth could help us avoid getting cavities.
and/or – See “inclusive or.”
antecedent – (1) The first part or what happens first. (2) The first part of a conditional with the form “if a, then b.” (“a” is the antecedent). For example, consider the following conditional—“If it rains tomorrow, then we won’t have to water the lawn.” In this case the antecedent is “it rains tomorrow.”
anti-realism – The view that some domain is nonfactual (not part of reality) other than perhaps how it relates to social construction or convention. For example, “moral anti-realists” think that there are no moral facts, but perhaps we can talk about moral truth insofar as some statements conform to a social contract. “Anti-realism” is often contrasted with “realism.”
antinomy – A real or apparent contradiction between laws or rational beliefs. For example, Immanuel Kant argues that time must have a beginning because infinite events can’t happen in the past, but time can’t have a beginning because that would imply that there was a moment before time began. “Antinomies” are sometimes equated with “paradoxes.”
antithesis – The opposition to a thesis, generally within a dialectical process. Objections are antitheses found in argumentative essays; and the flaws of a political system that lead to less freedom could be considered to be the antitheses found in a Hegelian dialectic.
anomaly – A phenomenon that can’t yet be explained by science and could be taken to be evidence against a scientific theory. Anomalies are often explained sooner or later, but sometimes they can’t be explained because our observations of the facts are simply incompatible with the theory we assume to be true. For example, Mercury didn’t move around the Sun in the way we predicted based on Newton’s theory of physics, but it did move in the way a superior theory predicted (Einstein’s theory of physics).
anthropic principle – The view that the universe and observations of the universe must be compatible with the conscious beings that make the observations. For example, the laws of physics must be compatible with the existence of people or we can’t exist in the first place. (If the universe was incompatible with our existence, then we wouldn’t be here.)
anthropocentrism – The view that human beings are the center of the universe or the most important thing. For example, the view that we should have harmful experimentation using nonhuman animals when it saves human lives could be said to be anthropocentric.
anthropomorphism – To view nonhuman things as having human qualities or to present such things as if they had human qualities that they don’t. For example, we might say that computers “figure out” how to do math problems, but computers don’t actually think or figure things out.
appeal to authority – (1) An argument that gives evidence for a belief by referencing expert opinion. Appeals to authority are not fallacious as long as they actually appeal to the unanimous opinion of experts of the relevant kind. (2) A fallacious type of argument that appeals to the supposed expert opinion of others when the opinion referred to is controversial among the experts; or when the supposed expert that is appealed to is not an expert of the relevant kind.
appeal to consequences – A type of fallacy committed by arguments that conclude that something is true or false based on the effects the belief will have. For example, “We know it’s true that every poor person can become rich because poor people who believe they can become rich are more likely to become rich.”
appeal to emotion – To attempt to persuade people that something is true by appealing to their pity, by causing fear, or by appealing to some other emotion. For example, someone could argue that war is immoral by appealing to our pity of wounded innocent children. The harm done to the children might be relevant to why war is wrong, but it is not sufficient to prove that war is always wrong.
appeal to force – A fallacious form of persuasion that is committed when coercion is used to get people to pretend to agree with a conclusion, or in order to suppress opposing viewpoints. The appeal to force can be subtly used in an academic setting when certain views are taboo and could harm a person’s future employment opportunities. However, sometimes people also fear being punished for expressing their “heretical views.” For example, John Adams signed the Sedition Act, which imposed fines and jail penalties for anyone who spoke out against the government. Additionally, various heresies (taboo religious beliefs) have been punishable by death in various places and times.
appeal to ignorance – A fallacious argument that concludes something on the basis of what we don’t know. For example, to claim that “we should agree that extraterrestrials don’t exist because we can’t yet prove they exist” is fallacious because there are other reasons we might expect extraterrestrials to exist, such as the vastness of the universe.
appeal to nature – See “naturalistic fallacy.”
appeal to popularity – A fallacy committed by an argument that concludes something on the basis of popular opinion. The appeal to popularity is often persuasive because of a common bias people have in favor of popular opinions. Also known as the “bandwagon fallacy.”
appeal to probability – A fallacy committed by an argument that concludes that something will happen just because it might happen. For example, “It’s possible to make a profit by gambling. Therefore, I will eventually make a profit if I keep playing the slot machines.”
applied ethics – Ethical philosophy that’s primarily concerned with determining what course of action is right or wrong given various moral issues, such as euthanasia, capital punishment, abortion, and same-sex marriage.
apperception – To have attention or to be aware of an object as being something other than oneself. See “empirical apperception” and “transcendental apperception” for more information.
arbitrary – Something said or done without a reason. For example, the initial words we use for our concepts are arbitrary. We could have called bananas ‘gordoes’ and there was no reason to prefer to call them ‘bananas’ instead. However, the meaning of words is not arbitrary after the definitions are justified by common usage.
argument – (1) To provide statements and evidence in an attempt to lead to the plausibility of a particular conclusion. For example, “punching people is generally wrong because hurting people is generally wrong” is an argument. (2) In mathematics and predicate logic, “argument” is sometimes a synonym for “operands.” (3) In ordinary language, “argument” often refers to a verbal battle, a hostile disagreement, or a discussion that concerns a disagreement.
argument by consensus – A synonym for “appeal to popularity.”
argument diagram – A visual representation of an argument that makes it clear how premises are used to support a conclusion. Argument diagrams generally have numbers written in circles, and each number is used to represent a statement. Consider the following argument—“Socrates is a human. All humans are mammals. All mammals are mortal. Therefore, Socrates is mortal.” An example of an argument diagram that can be used to represent this argument is the following:
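A rough text rendering of such a diagram, with (1), (2), and (3) standing for the three premises and (4) for the conclusion, would be:

(1) + (2) + (3)
       |
       v
      (4)

The plus signs indicate that the premises are linked, working together to support the conclusion.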
argument from analogy – An argument that uses an analogy. For example, we could argue that kicking and punching people are both generally wrong because they’re both analogous—they both are generally wrong for the same reason (because they’re both performed to hurt people and it’s generally wrong to try to hurt people). Not all arguments using analogies are well-reasoned. See “weak analogy” for more information.
argument form – See “logical form.”
argument from absurdity – A synonym for “reductio ad absurdum.”
argument from fallacy – A synonym for “argumentum ad logicam.”
argument indicator – A term used to help people identify that an argument is being presented. Argument indicators are premise indicators or conclusion indicators. For example, ‘because’ is an argument indicator used to state a premise. See “argument” for more information.
argument map – A visual representation of an argument that makes it clear how premises are used to support a conclusion. Argument maps are a type of argument diagram, but the premises and conclusions are usually written in boxes. An example of an argument map is the following:
argument place – (1) In logic, it is the number of things that are predicated by a statement. For example, “Gxy” is a statement with two predicated things, so it has two argument places. (In this case “G” can stand for “attacks.” In that case “Gxy” would mean “x attacks y.”) (2) In mathematics, it’s the number of things that are involved with an operation. For example, addition is an operation with two argument places. “2 + 3” has two arguments: “2” and “3.”
argument to the best explanation – An attempt to know what theory, hypothesis, or explanatory belief we should have by comparing various alternatives. The best explanation should be the one that’s the most consistent with our observations (and perhaps exhibit other various theoretical virtues better than the alternatives as well). For example, it’s more plausible that the light turns on at a neighbor’s home because a person turned a light on than that a ghost turned the light on because we don’t know that ghosts exist. See “theoretical virtues” for more information. The argument to the best explanation is a form of “abduction.”
argumentative strategies – The methods we use to form a conclusion from premises. For example, the “argument from absurdity” and “argument from analogy” are argumentative strategies.
argumentum ad baculum – Latin for “argument from the stick.” See “appeal to force.”
argumentum ad consequentiam – Latin for “argument to consequences.” See “appeal to consequences.”
argumentum ad ignorantiam – Latin for “argument from ignorance.” See “appeal to ignorance.”
argumentum ad logicam – Latin for “argument to logic.” A type of fallacy committed by an argument that claims that a conclusion of an argument is false or unjustified just because the argument given in support of the conclusion is fallacious. A conclusion can be true and justified even if people give fallacious arguments for it. For example, Tom could argue that “the Earth exists because Tina is evil.” This argument is clearly fallacious, but the conclusion (that the Earth exists) is both true and justified.
argumentum ad naturam – Latin for “argument from nature.” See “naturalistic fallacy.”
aristocracy – A political system defined by the exclusive power to rule by an elite group of individuals.
Aristotelian ethics – An ethical system primarily concerned with virtue developed by Aristotle. Aristotle believed that (a) people have a proper function as political rational animals to help each other and use their ability to reason; (b) happiness is the greatest good worth achieving; (c) virtues are generally between two extremes; and (d) virtuous people have character traits that cause them to enjoy doing what’s virtuous and to do what’s good thoughtlessly. For example, courage is virtuous because it is neither cowardly nor foolhardy, and courageous people will be willing to risk their life whenever they should do so without a second thought.
aretê – Greek for “virtue” or “excellence.”
arity – (1) In logic, it refers to the number of things that are predicated. The statement “Fx” has an arity of one because there’s only one thing being predicated. For example, “F” can stand for “is tall” and in that case “Fx” means “x is tall.” The statement “Gxy” has an arity of two because there are two things being predicated. For example, “G” could stand for “loves” and in that case “Gxy” means “x loves y.” (2) In mathematics, arity refers to the number of things that are part of an operation. For example, addition requires two numbers. “1+2” is an operation with the following two arguments: “1” and “2.”
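In programming terms, arity is simply the number of arguments a function or predicate takes. A tiny Python sketch with invented predicate names:

def is_tall(x):
    # Arity 1: one thing is predicated, as in "Fx" ("x is tall").
    return x in {"Wilt", "Shaquille"}

def loves(x, y):
    # Arity 2: a relation between two things, as in "Gxy" ("x loves y").
    return (x, y) == ("Romeo", "Juliet")

print(is_tall("Wilt"))            # True
print(loves("Romeo", "Juliet"))   # True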
assertoric – Refers to the property of a domain that people make assertions about. Some philosophers think that moral judgments, such as “stealing is wrong,” are assertoric rather than noncognitive (neither true nor false). Assertoric statements are meant to be true or false depending on whether they accurately correspond to reality or relate properly to facts.
association – A rule of replacement that takes two forms: (a) “a and/or (b and/or c)” means the same thing as “(a and/or b) and/or c.” (b) “a and (b and c)” means the same thing as “(a and b) and c.” (“a,” “b,” and “c” stand for any three propositions.) The parentheses are used to group certain statements together. For example, “dogs are mammals, or they’re fish or reptiles” means the same thing as “dogs are mammals or fish, or they’re reptiles.” The rule of association says that we can replace either of these statements of our argument with the other precisely because they mean the same thing.
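Because both versions of the rule replace a statement with a logically equivalent one, the equivalences can be checked exhaustively over every truth assignment, as in this minimal Python sketch (the helper name is my own):

from itertools import product

def equivalent(f, g):
    # True when f and g agree on every assignment to a, b, and c.
    return all(f(a, b, c) == g(a, b, c)
               for a, b, c in product([True, False], repeat=3))

# (a) "a and/or (b and/or c)" versus "(a and/or b) and/or c"
print(equivalent(lambda a, b, c: a or (b or c),
                 lambda a, b, c: (a or b) or c))    # True

# (b) "a and (b and c)" versus "(a and b) and c"
print(equivalent(lambda a, b, c: a and (b and c),
                 lambda a, b, c: (a and b) and c))  # True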
association fallacy – A type of fallacy committed by an argument with an unwarranted assumption that two things share a negative quality just because of some irrelevant association. For example, we could argue that eating food is immoral just because Stalin ate food. Also see the “halo effect” and “ad hominem” for more information.
atheism – It generally refers to the view that gods don’t exist. However, it is often divided into the categories of “hard atheism” and “soft atheism.”
atom – (1) The smallest unit of matter that is irreducible and indestructible. (2) In modern science, ‘atom’ refers to a type of particle. Atoms are made with protons and neutrons. The number of protons used to make an atom determines what kind of chemical element it is. For example, hydrogen is only made of a single proton.
attribute – (1) An element or aspect. (2) According to Baruch Spinoza, an attribute is what we perceive of as the essence (or defining characteristic) of what Descartes considered to be a substance, such as extension (for physicality) and thought (for the psychological part of reality). However, Spinoza rejected that mind and matter are two different substances.
autonomy – To be capable of acting freely based on one’s own judgments.
authentic – (1) To be authentic is to act true to one’s nature, to accept one’s innate freedom, and to refuse to let others make decisions (or think) for us. (2) In Martin Heidegger’s work, the term “for oneself” is often translated as “authentic.”
auxiliary hypothesis – The background assumptions we have during observation and experimentation. It is difficult to know when a scientific theory or hypothesis should be rejected by conflicting evidence because the evidence might actually only conflict with an auxiliary hypothesis. For this reason scientists continue to use the same theories and hypotheses until a better one is developed, and conflicting evidence is known as an “anomaly.” For example, a person could think that the belief that a drug is effective at curing a disease is proven wrong when it doesn’t cure someone’s disease, but the drug might have only been ineffective when the person who takes it doesn’t drink alcohol. In this case the auxiliary hypothesis was that the drug would be effective whether or not people drink alcohol.
axiology – The philosophical study of values.
axiom – A starting assumption prior to argument or debate. Axioms should be rationally defensible and some might be self-evident. For example, the law of non-contradiction is an axiom. If we don’t assume that things can’t be true and false at the same time, then reasoning might not even be possible.
background assumptions – Beliefs that are difficult to discuss or question because they are part of how a person understands the world and they are taken for granted. Background assumptions are often unstated assumptions in arguments similar to how many people skip steps when doing math problems.
bad company fallacy – A synonym for “association fallacy.”
bad faith – To act or believe something inauthentically. To deny one’s innate freedom, or try to let other people make decisions (or think) for us.
bad reasons fallacy – A synonym for “argumentum ad logicam.”
base rate fallacy – A fallacy committed when an argument requires a statistical error concerning base rate information. The most common version rests on the assumption that a positive result from a highly accurate test is probably correct, no matter how rare the condition being tested for is. For example, we might assume that a test for a disease that’s 99% accurate will correctly identify more people as having the disease than it will falsely accuse of having it. However, if only 0.1% of the population has the disease, then the test will falsely flag around ten times as many people as actually have the disease. See “base rate information” and “false positive” for more information.
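The arithmetic behind the example can be made explicit. Here is a short Python sketch using the figures from the entry (99% accuracy, 0.1% prevalence) and an assumed population of 100,000:

population = 100_000
prevalence = 0.001      # 0.1% of people actually have the disease
accuracy   = 0.99       # the test is right 99% of the time, sick or healthy

sick    = population * prevalence            # 100 people
healthy = population - sick                  # 99,900 people

true_positives  = sick * accuracy            # 99 sick people correctly flagged
false_positives = healthy * (1 - accuracy)   # 999 healthy people wrongly flagged

print(true_positives, false_positives)  # 99.0 999.0 -- about ten false alarms per real case
print(true_positives / (true_positives + false_positives))  # ~0.09: a positive result is only ~9% likely to be right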
base rate information – Information about a state of affairs that is used for diagnosis or statistical analysis. For example, we might find out that 70% of all people with a cough and runny nose have a cold. A doctor is likely to suspect a patient with a cough and runny nose has a cold in consideration of how common colds are. “Base rate information” can be contrasted with “generic information” concerning the frequency of a state of affairs, such as how common a certain disease is.
basic belief – Foundational beliefs that can be known without being justified from an argument (or argument-like reasoning). For example, axioms of logic, such as “everything is identical with itself,” are plausibly basic beliefs. “Basic beliefs” are part of “foundationalism,” and they don’t exist if “coherentism” is true.
basic desire – Something we yearn for or value for its own sake rather than as a means to an end. Pleasure and pain-avoidance are plausibly basic desires. It is possible that we desire food to attain pleasure and avoid pain rather than as a basic desire. “Basic desires” are similar to (and perhaps identical with) “final ends.”
bandwagon fallacy – A synonym for “appeal to popularity.”
Bayesian epistemology – A view of knowledge and justification based on probability. It features a formal apparatus for induction based on deduction and probability calculus. The formal apparatus is used to better understand probabilistic coherence, probabilistic confirmation, and probabilistic inference.
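As a minimal illustration of the probability calculus involved, Bayes’ theorem can be used to update a prior degree of belief in a hypothesis H after observing evidence E. The figures in this Python sketch are invented purely for the example:

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    # P(H | E) = P(E | H) P(H) / [ P(E | H) P(H) + P(E | not-H) P(not-H) ]
    numerator = p_e_given_h * prior_h
    evidence  = numerator + p_e_given_not_h * (1 - prior_h)
    return numerator / evidence

# Hypothetical figures: prior credence of 0.3 in H, and the evidence is
# three times as likely if H is true than if it is false.
print(bayes_update(prior_h=0.3, p_e_given_h=0.6, p_e_given_not_h=0.2))  # ~0.56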
bedeutung – German for “reference.”
begging the question – A logical fallacy that is used when an argument uses a controversial premise to prove a conclusion, and the controversial premise trivially implies that the conclusion is true. For example, “the death penalty is murder, so the death penalty is wrong” requires a controversial premise (that the death penalty is murder) to prove something else controversial (that the death penalty is wrong). Also see “circular reasoning.”
being – (1) Existence, reality, or the ultimate part of reality. Being could be said to be “what is.” The philosophical study of being is “ontology.” (2) “A being” is something unified in space and time that has a mind of its own. For example, people are beings. It is plausible that birds and mammals are also beings in this sense.
belief bias – A cognitive bias that’s defined by the tendency to think that an argument is reasonable just because we think the conclusion is likely true. In reality arguments can be offensively fallacious, even if the conclusion is true. For example, “the sky is blue, so dogs are mammals” has a true conclusion, but it’s offensively fallacious.
biased sample – (1) A sample that is not representative of the group it is meant to represent for the purposes of a study. For example, a poll taken in an area known to mainly vote for Republican politicians that indicates that the Republican presidential candidate is popular with the population at large. It might be the case that the Republican candidate is not the most popular one when all other voters are accounted for, and the sample is so biased that we can’t use it to have any idea about whether or not the Republican candidate is truly popular with the population at large. Also see “selective evidence” and “hasty generalization” for more information. (2) A fallacy committed by an argument based on a biased sample. For example, to conclude that a Republican presidential candidate is popular with the population at large based on a poll taken in a pro-Republican area.
biconditional – A synonym for “material equivalence.”
bifurcation fallacy – A synonym for “false dilemma.”
bioethics – Ethics related to biology. Bioethics is often related to scientific research and technology that has an effect on biological organisms. For example, whether or not cloning human beings is immoral.
bivalent logic – Logic with two truth values: true and false. See “the principle of bivalence” for more information.
black or white fallacy – A synonym for “false dilemma.”
blameworthy – Actions by morally responsible people that fail to meet moral requirements. For example, a morally responsible person who commits murder is blameworthy for that action. See “impermissible” and “responsibility” for more information. “Blameworthy” acts are often contrasted with “praiseworthy” ones.
booby trap – (1) A logical booby trap is a peculiarity of language that makes it likely for people to become confused or to jump to the wrong conclusion. For example, an ambiguous word or statement could make it likely for people to equivocate words in a fallacious way. Some people think all forms of debate are attempts at manipulative persuasion, but there are rational and respectful forms of debate. See “equivocation” for more information. (2) In ordinary language, a booby trap is a hidden mechanism used to cause harm once it is triggered by a certain action or movement. For example, Indiana Jones lifted an artifact from a platform that caused the room to collapse.
borderline case – A state of affairs that can be properly described by a vague term, but it is difficult to say how the vague term can be properly applied. For example, it might not be clear whether or not it’s unhealthy to eat a small bag of potato chips. Even so, we know that eating one potato chip is not unhealthy, and eating a thousand potato chips is unhealthy. See “vague” for more information.
brute facts – (1) Facts that exist that have no explanation. The reason brute facts lack explanations isn’t merely because we are incapable of explaining them. It’s because there is literally no explanation for us to find out about. If brute facts exist, then we should reject the “principle of sufficient reason.” (2) According to G.E.M. Anscombe, brute facts are the facts that make a non-brute fact true given the assumption that all other things are equal. For example, a person makes a promise given the brute facts of that person saying they will do something. This is only true if all else is equal and not in unusual circumstances. Perhaps a person doesn’t make a promise when joking around. This sense of “brute facts” is often contrasted with “institutional facts.”
burden of proof – (1) The requirement for a position to be justified during a debate. The burden of proof exists for a claim when the claim will be likely rejected by people until the claim is justified. The burden of proof can shift during a debate. For example, a good argument against a belief would shift the burden of proof onto anyone who wants to defend that belief. (2) The rational burden of proof is the property of a position that people should rationally reject unless at least minimal evidence can be given for it. For example, people have a rational burden of proof to have evidence that faeries exist, and we should reject the existence of faeries until that burden of proof is met.
capitalism – A type of economy with limited government regulation (a “free market”) and where the means of production (factories and natural resources) are privatized. Key features of capitalism includes competition between people who sell goods and services, the profit motive (which is expected to motivate people to compete), and companies.
care ethics – An ethical perspective that focuses on the dependence and importance of personal relationships, and the primary importance of caring for others. Care ethics tends to emphasize the special obligations we have towards one another because of our relationships, such as the obligation of parents to keep their children healthy. Care ethics is often understood to be part of the “moral sentimentalist” and “feminist” traditions, and it’s often believed to be incompatible with utilitarianism and Kant’s categorical imperative.
case-based reasoning – Reasoning involving the consideration of similar situations or things. For example, a doctor could consider the symptoms and cause of illness of various patients that were observed in the past in order to decide what is likely the cause of an illness of another patient who has certain symptoms. Case-based reasoning uses the following four steps for computer models: (a) Retrieve – consider similar cases. (b) Reuse – predict how the similar cases relate to the current case. (c) Revise – check to see if the similar cases relate to the current case as was predicted and make a new prediction if necessary. (d) Retain – once a prediction seems to be successful, continue to rely on that prediction until revision is necessary. Case-based reasoning is similar to “analogical reasoning.”
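Because the entry describes how the four steps are used in computer models, a minimal, hypothetical Python sketch of the cycle follows, using simple symptom overlap as the similarity measure (the cases and function names are invented for illustration):

# Each past case pairs observed symptoms with the diagnosis that explained them.
case_base = [
    ({"fever", "cough", "runny nose"}, "cold"),
    ({"fever", "rash"},                "measles"),
    ({"headache", "stiff neck"},       "meningitis"),
]

def retrieve(symptoms, cases):
    # Retrieve: pick the stored case that shares the most symptoms.
    return max(cases, key=lambda case: len(case[0] & symptoms))

def reuse(symptoms, cases):
    # Reuse: predict the same diagnosis as the most similar past case.
    return retrieve(symptoms, cases)[1]

def retain(symptoms, confirmed_diagnosis, cases):
    # Retain: after the prediction has been checked (and revised if needed),
    # store the outcome so later cases can rely on it.
    cases.append((symptoms, confirmed_diagnosis))

new_patient = {"cough", "runny nose"}
print(reuse(new_patient, case_base))   # "cold" -- the closest past case
retain(new_patient, "cold", case_base)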
categorical – (1) Overriding, without exceptions, and absolute. For example, categorical imperatives. (2) Involving categories or types of things. For example, categorical syllogisms.
categorical imperative – An imperative is a command or requirement. Categorical imperatives are overriding commands or requirements that don’t depend on our desires, and are rational even if we’d rather do something else. For example, it is plausible that we have a categorical imperative not to run around punching everyone in the face just for entertainment. The mere fact that someone might want to do it does not make it morally acceptable. Categorical imperatives are often contrasted with “hypothetical imperatives.” People often speak of “the categorical imperative” to refer to “Kant’s Categorical Imperative.”
categorical proposition – A proposition concerning categories. For example, “all men are mortal” concerns two categories: Men and mortality.
categorical syllogism – A syllogism that consists of categorical propositions (propositions that concern various categories or “kinds of things”). For example, “All animals are mortal. All humans are animals. Therefore, all humans are mortal.”
category – (1) A grouping or set of things that share a characteristic. For example, animals, minerals, and persons. (2) The most general concepts. For example, space, time, and causation.
category mistake – A confusion between two categories that leads to an error in reasoning. For example, we might say that an essay tells us the types of biases people suffer from, but that uses a metaphor—essays cannot literally tell us anything. They aren’t the kind of thing to say things, so it would be a category mistake to believe essays literally say things.
causal determinism – See “determinism.”
causal theory of reference – The view that names of things (e.g. ‘water’) refer to the things because of how people referred to the object in history. It is generally thought by supporters that reference requires “reference fixing” (e.g. when someone decides what to call it) and “reference borrowing” (e.g. the name is passed on by other people who talk about it). For example, people at some point called the stuff they have to drink to stay hydrated ‘water’ and the fact that people kept calling it ‘water’ assured us that we use that word to refer to the same stuff as people did during the first time someone decided to call it something.
causal theory of knowledge – The view that the truth of statements cause our knowledge of the statements’ truth, or that facts cause knowledge of facts. For example, a cat that lays on a mat can cause our belief that the cat is on the mat insofar as it being there causes us to see it. If it’s completely impossible to interact with an entity and the entity makes absolutely no difference to us whatsoever, then we might wonder if the entity exists at all.
causation – One thing that makes something else happen. For example, a red rolling billiard ball that hits a blue billiard ball and makes the blue one move. Causation involves necessary connections and laws of nature. We can predict when one event will cause another event based on understanding the state of affairs that exist and the laws of nature.
certainty – See “epistemic certainty” or “psychological certainty.”
ceteris paribus – Latin for “with all else being equal” or when considered in isolation. For example, ceteris paribus, killing people is wrong. However, there might be overriding factors that justify killing others, such as when it’s necessary for survival.
character – (1) Persisting traits that are resistant to change and influence a person’s decision-making. A person’s character can exhibit various character traits, such as virtues (such as courage) and vices (such as addiction). (2) The domain of character traits, such as virtues and vices. Virtues and vices could be used to describe the actual decisions and actions a person tends to make rather than persisting properties that are resistant to change.
character ethics – A synonym for “virtue ethics.”
charity – (1) The virtue in a disagreement or debate to describe other people’s beliefs and arguments accurately rather than to misrepresent them as being less reasonable than they really are. If we are not charitable in this way, then we will create a fallacious “straw man” argument. (2) The virtue concerned with helping others who are in need. For example, giving money to the poor is often charitable in this sense. (3) An organization or institution that exists to try to help others who are in need. For example, the Red Cross or a soup kitchen.
cherry picking – Finding or using evidence that supports a position while simultaneously ignoring any potential counter-evidence against the position. See “one-sidedness” for more information.
circular argument – An argument with a premise that’s identical to the conclusion. For example, “All dogs are animals because all dogs are animals.” The logical form of a circular argument is “a; therefore a.” Circular arguments are similar to the “begging the question” fallacy. Also see “circular reasoning.”
circular reasoning – (1) Reasoning involving a set of mutually supporting beliefs that are not justified by anything other than the set of beliefs. A simple form of circular reasoning is the following—A is justified because B is justified; B is justified because C is justified; and C is justified because A is justified. For example, “we should agree that stealing is wrong because it should be illegal; we should agree that stealing should be illegal because we shouldn’t want people stealing from us; and we shouldn’t want people stealing from us because it’s wrong.” (2) A “circular argument.”
class conflict – The power struggle between social classes. The wealthy are often thought to fight to maintain their power and privilege, and the working class is thought to fight to attain a greater share of power. For example, the working class could fight for a higher minimum wage, and the wealthy could fight to keep receiving corporate welfare. Karl Marx thought that class conflict also happens at the level of ideology—the wealthy try to convince everyone else that those with wealth deserve to keep their wealth and maintain their privilege, but other people resist this ideology and offer alternatives.
class warfare – A synonym for “class conflict.”
cogent argument – An inductively strong argument with true premises. For example, “All objects that were dropped near the surface of the Earth in the past fell to the ground. Therefore, objects that are dropped near the surface of the Earth tomorrow will probably fall to the ground.” See “strong argument” for more information.
cognition – A mental process. For example “inferential reasoning” is a form of cognition.
cognitive bias – A psychological trait that leads to errors in reasoning. For example, the “confirmation bias.”
cognitivism – The view that the judgments of a given domain can be true or false. For example, moral cognitivism states that moral judgments can be true or false. “Cognitivism” is often contrasted with “non-cognitivism.”
coherence – (1) The degree of consistency something has. For example, the beliefs “all men are mortal” and “Socrates is a man” are consistent. Contradictory beliefs are incoherent or “inconsistent.” (2) In ordinary language, “coherence” often refers to the degree of clarity and sense a person makes. Someone who is incoherent might say nonsense.
coherence theories of epistemology – See “coherentism.”
coherentism – The view that there are no foundational beliefs, but that some beliefs can be mutually supported by other beliefs. It is often claimed that an assumption is justified through coherence if it is useful as part of an explanation. Observation itself is meaningless without assumptions, and observation appears to confirm our assumptions as long as our observations are consistent with them. For example, my assumption that a table exists can be confirmed by touching the table, and my experiences involved with touching the table confirms my assumption that the table exists. Some philosophers argue that coherentism should be rejected because it legitimizes “circular reasoning,” which we ordinarily recognize as being a fallacious form of justification. However, coherentists claim that circular reasoning is not vicious as long as enough beliefs are mutually supporting.
common sense – (1) Beliefs or assumptions we are more certain about than the premises used by skeptical arguments against them, but it’s difficult or impossible to fully understand how we can be so certain about them. For example, G. E. Moore said he is absolutely certain that he knows that something existed before he was born and that something will still exist after he is dead. (2) Assumptions we hold without significant evidence when rejecting the assumptions does not appear to be a reasonable option. For example, we accept that inductive reasoning is effective even though we can’t prove it without circular reasoning. Rejecting inductive reasoning would lead to absurdity (and it would perhaps imply that we should reject all natural science altogether). (3) Beliefs or assumptions people tend to have prior to philosophical study. (4) According to Aristotle, common sense is the internal sense that is used to judge and unite experiences caused by sense perception (the five senses: sight, sound, touch, taste, and smell).
communism – (1) A type of economy where the means of production (factories and natural resources) are publicly owned rather than privatized, and where there are no social classes (i.e. there is no working class or upper class). The difference between communism and socialism is not entirely clear and the terms are often used as synonyms. (2) In ordinary language, “communism” often refers to a type of totalitarian political system and economy where the government owns all the businesses and controls the means of production.
commutation – A rule of replacement that states that “a and b” and “b and a” both mean the same thing. (“a” and “b” stand for any two propositions.) For example, we know that “all dogs are animals and all cats are animals” means the same thing as “all cats are animals and all dogs are animals.” If we use one of these statements in an argument, then we can replace it with the other statement.
commutation of conditionals – A fallacy committed by arguments that have the logical form “if a, then b; therefore if b, then a.” (“a” and “b” stand for any two propositions.) For example, “If all snakes are reptiles, then all snakes are animals. Therefore, if all snakes are animals, then all snakes are reptiles.”
commutative – To be able to switch symbols without a loss of meaning. “a and b” has the same meaning as “b and a.” For example, “dogs are mammals and lizards are reptiles” has the same meaning as “lizards are reptiles and dogs are mammals.”
compatibilism – The view that determinism and free will are compatible. Compatibilists often believe we actually have free will, and their conception of free will is compatible with determinism. For example, compatibilists could say that we are free as long as we can do whatever we choose to do. A person can be free to choose to spend the next ten minutes eating food or taking a shower, and she is likely able to do either of those things assuming she chooses to. “Compatibilism” can be contrasted with “libertarian free will.”
complete theory – A theory is complete if and only if it can answer all relevant questions. For example, a normative theory of ethics is complete if it can determine whether any action is right or wrong.
completeness – See “semantic completeness,” “syntactic completeness,” “expressive completeness,” or “complete theory.”
complex question – A synonym for “loaded question.”
composition – (1) In logic, the term ‘composition’ refers to the “fallacy of composition.” (2) When a creditor agrees to accept a partial payment for a debt. (3) The arrangement of elements found in a work of art. (4) Producing a literary work, such as a text or speech.
compound proposition – A proposition that can be broken into two or more propositions. For example, “Socrates is a man and he is mortal” can be broken into the following two sentences: (a) Socrates is a man. (b) Socrates is mortal. “Compound propositions” can be contrasted to “non-compound propositions.”
compound sentence – See “compound proposition.”
comprehensiveness – The scope of a theory or explanation. A theory is more comprehensive than another if it covers a greater scope. Theories are more comprehensive if they are capable of explaining a greater number of observations or more types of phenomena. Consider the view that (a) it’s generally wrong to punch people and (b) the view that it’s generally wrong to hurt people. The view that it’s generally wrong to hurt people is more comprehensive because it can explain why many more actions are wrong than the view that it’s generally wrong to punch people.
conceptual analysis – A systematic study of concepts in an attempt to improve our understanding of them (perhaps to help us avoid confusion during debates). Conceptual analysis involves giving definitions, and giving necessary and sufficient conditions for using a term. Conceptual analysis could be revisionary by defining concepts in new ways or it can define concepts in ways that are almost entirely based on how people use language. For example, to say that killing people is generally the right thing to do would be revisionary to the point of absurdity because it would require a new definition for “right thing to do” that has little to nothing to do with how people use language. Even so, how people actually use language can be unstable, vague, or ambiguous; so revisionary definitions can be necessary.
conceptual framework – A systematic understanding of a field (such as morality) and all related concepts (such as moral duties, values, and virtues) that might not accurately represent reality, but it could exhibit various theoretical virtues. Conceptual frameworks provide a certain understanding of various concepts involved, but alternative ways of understanding the concepts could also be possible (or even superior).
conclusion – A statement that is meant to be proven or made plausible in consideration of other statements. For example, consider the following argument—“All men are mortal. Socrates is a man. Therefore, Socrates is mortal.” In this case “Socrates is mortal” is the conclusion. “Conclusions” are often contrasted with “premises.”
conclusion indicator – A term used to help people identify that a conclusion is being stated. For example, “therefore” or “thus.” See “conclusion” for more information.
concretism – The view that possible worlds exist just like the actual world, and that everyone from a possible world calls their own world the “actual world.” Concretism is an attempt to explain what it means to say that something is necessary or possible—something is necessary insofar as it’s true in every possible world, and something is possible insofar as it is true in at least one possible world. It is necessary that oxygen is O2 insofar as it’s true that oxygen is O2 in every possible world, and it’s possible that a person can jump over a small rock insofar as it’s true in at least one possible world. “Concretism” can be contrasted with “abstractism.” See “modality” and “modal realism” for more information.
conditional – (1) Something that happens or could happen depending on other facts. For example, making enough money for a living is often conditional on finding full time employment. (2) A “material conditional.”
conditional proof – A strategy used in natural deduction to prove that an argument form with an if/then proposition as its conclusion is logically valid. We know the argument form is valid if we can assume the premises and the first part of the conclusion in order to deduce the second part of the conclusion. For example, consider the argument “If A, then B. If B, then C. Therefore, if A, then C.” We can use the following conditional proof to know this argument is valid (a brute-force check of the same form is sketched after these steps):
If we can assume the first part of the conclusion (“A”) and the premises to prove the second part of the conclusion (“C”), then the argument is valid.
We know “if A, then B” is true and “A” is true, so we know “B” is true. (See “modus ponens.”)
We know “if B, then C” is true and “B” is true, so we know “C” is true. (See “modus ponens.”)
We have now deduced that the second part of the conclusion is true, so the argument form is logically valid.
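The same claim of validity can also be checked by brute force rather than by a conditional proof: the sketch below (plain Python, no special libraries) runs through every assignment of truth values to A, B, and C and confirms that no assignment makes both premises true and the conclusion false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Check every assignment of truth values to A, B, and C.
valid = all(
    implies(a, c)                       # the conclusion "if A, then C"
    for a, b, c in product([True, False], repeat=3)
    if implies(a, b) and implies(b, c)  # rows where both premises are true
)
print(valid)  # True: no row has true premises and a false conclusion
```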
conditionalization – How we ought to update our beliefs and degrees of confidence when we attain new information. For example, a person who believes all swans are white ought to reject that belief once she sees a black swan.
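For degrees of confidence (rather than all-or-nothing belief), conditionalization is usually modeled with Bayes’ theorem: the new credence in a hypothesis after learning evidence E should equal the old conditional credence in the hypothesis given E. The following sketch uses made-up numbers for a hypothetical medical test purely as an illustration:

```python
# Bayesian conditionalization: after learning evidence E, the new credence in
# a hypothesis H should equal the old conditional credence P(H | E).
# The numbers below are made up purely for illustration.

prior_h = 0.01            # prior credence that a patient has a rare condition
p_e_given_h = 0.95        # chance of a positive test if the condition is present
p_e_given_not_h = 0.05    # chance of a positive test if it is absent (false positive)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = (p_e_given_h * prior_h) / p_e   # Bayes' theorem

print(round(posterior_h, 3))  # about 0.161: the updated credence after a positive test
```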
confirmation – Strong evidence supporting a hypothesis or theory. For example, the fact that all known species of birds are warm-blooded is confirmation of the hypothesis that all birds are warm-blooded.
confirmation bias – One of the most important forms of cognitive bias that is evident when people take supporting evidence of their beliefs too seriously while simultaneously ignoring or marginalizing the importance of evidence against their beliefs. For example, a person with the confirmation bias could take her experiences of white swans as evidence that all swans are white but ignore the fact that some people have seen black swans.
conjunct – The first or second part of a conjunction. The logical form of a conjunction is “a and b.” Both “a” and “b” are conjuncts. For example, the conjunction “all dogs are mammals and all mammals are animals” has two conjuncts: (a) all dogs are mammals and (b) all mammals are animals.
conjunction – (1) A proposition that says both of two things are true. The logical form of conjunctions is “a and b.” For example, “all doctors are humans and all humans are capable of reasoning.” There are two common symbols used for conjunction in formal logic: “&” and “∧.” An example of a statement using one of these symbols is “A ∧ B.” (2) A rule of inference that states that we can use “a” and “b” as premises to validly conclude “a and b.” (“a” and “b” stand for any two propositions.) For example, “Birds are animals. The Sun will rise tomorrow. Therefore, birds are animals and the Sun will rise tomorrow.”
consequent – (1) The second part of a conditional with the form “if a, then b.” (“b” is the consequent.) For example, consider the conditional statement “if all dogs are mammals, then all dogs are animals.” In this case the consequent is “all dogs are animals.” (2) A logical implication of various beliefs. For example, a person who believes that “if all dogs are mammals, then all dogs are animals” and “all dogs are mammals” can validly infer the consequent, “all dogs are animals.” (3) The result of an event. For example, a person who turns the light switch downward consequently turned the light off.
consequentialism – Moral theories that state that the consequences of actions determine which actions are right or wrong. For example, if we know what has intrinsic value, then we can compare each possible course of action and see which course of action will maximize intrinsic goodness (i.e. lead to the most positive value and least negative value). Consequentialist philosophers would argue that such an action would be the “most right” and actions that depart from the ideal will be “more wrong” to whatever extent they fail to do what is best. Sometimes “utilitarianism” is used as a synonym for “consequentialism.”
consistency – The property of lacking contradictions. To be logically consistent is to have beliefs that could all be true at the same time. For example, “all fish are animals” and “all mammals are animals” are both logically consistent. However, “all fish are animals” and “goldfish are robots” are inconsistent (given that goldfish are fish and no robots are animals). We can compare “consistent” beliefs with “contradictions.”
consistent logical system – A logical system with axioms and rules of inference that can’t possibly be used to prove contradictory statements from true premises. See “formal logic,” “axioms,” and “rules of inference” for more information.
constant – See “logical constant” or “predicate constant.”
continuant – (1) A persisting thing. For example, we often think people persist through time and continue to exist from one moment to the next. (2) A persisting thing that “endures.” For example, people could persist through time and exist in their entirety at every moment despite going through many changes.
constructionism – See “constructivism.”
constructive dilemma – A rule of inference that states that we can use the premises “a and/or b,” “if a, then c,” and “if b, then d” to validly conclude “c and/or d.” (“a,” “b,” “c,” and “d” stand for any four propositions.) For example, “Either all dogs are mammals and/or all dogs are lizards. If all dogs are mammals, then all dogs are animals. If all dogs are lizards, then all dogs are reptiles. Therefore, all dogs are animals and/or reptiles.”
constructivism – (1) “Metaethical constructivism” is the view that morality is based on convention or agreement. Metaethical constructivism could claim that morality is based on our instinctual reactions or on a social contract. See “ideal observer theory” for more information. (2) The view that something is created through human interaction, agreement, or a common understanding. For example, the game “chess” and the presidency of the United States are constructed.
continental philosophy – A philosophical domain that often requires less precision and clarity in order to allow for a discussion of major issues. Continental philosophy is often a continuation of ancient philosophy involving highly abstract issues (such as the nature of “being”) and issues that directly affect our lives. “Continental philosophy” is often contrasted with “analytic philosophy.”
continuum fallacy – A fallacy that is committed by an argument that appeals to the vagueness of a term to unreasonably conclude something (usually based on the fact that we don’t know where to draw the line between two things). For example, we don’t know where to draw the line concerning how many hairs must be on a person’s head before that person is no longer bald, but we would commit the continuum fallacy to conclude from that fact that no one is bald. See “vagueness” for more information.
contingent truth – Propositions that are true based on some sort of dependence that “could have been otherwise.” Contingent statements are possible, but they are not necessary. For example, the fact that Socrates had a pug nose is a contingent truth. See “physical contingence,” “metaphysical contingence,” and “logical contingence.”
contingence – The property of being possible but not necessary. There is a sense in which contingent things “could have been otherwise.” Aristotle’s concept of an “accidental characteristic” refers to contingent characteristics. See “physical contingence,” “metaphysical contingence,” and “logical contingence.”
contradiction – (1) When two propositions cannot both be true due to their logical form. “Socrates was a man” and “Socrates was not a man” are two statements that can’t both be true because the logical form is “a” and “not-a.” (“a” is any proposition.) (2) In categorical logic, contradiction is a process of negating a categorical statement and expressing it as a different categorical form. For example, “all men are mortal” can be contradicted as “some men are not mortal.”
contradictory – (1) In categorical logic, a contradictory is the negation of a categorical statement expressed in a different categorical form. For example, “no men are immortal” is the contradictory of “some men are immortal.” (2) When two propositions form a contradiction. For example, it’s contradictory to say, “Exactly four people exist” and “only two people exist.”
contraposition – (1) To switch the terms of a categorical statement and negate them both. There are two valid types of categorical contraposition: (a) “All a are b” means the same thing as “all non-b are non-a.” (b) “Some a are not b” means the same thing as “some non-b are not non-a.” For example, the following argument is valid—“Some snakes are not mammals. Therefore, some non-mammals are not non-snakes.” (2) To infer a contrapositive from a categorical proposition. See “contrapositive” for more information. (3) In modern logic, it is also known as “transposition.”
contrapositive – A categorical proposition is the contrapositive of another categorical proposition when the terms are negated and switched. For example, the contrapositive of “all mammals are animals” is “All non-animals are non-mammals.” It is valid to infer the contrapositive of two different types of categorical propositions because they both mean the same thing: (a) “All a are b” means the same thing as “all non-b are non-a.” (b) “Some a are not b” means the same thing as “some non-b are not non-a.” For example, “some people are not doctors” means the same thing as “some non-doctors are not non-people.”
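Treating categorical statements as claims about sets, the equivalence of a statement and its contrapositive can be checked by brute force on a small universe. The sketch below is only a finite-model sanity check on a three-element universe, not a proof for every possible case:

```python
from itertools import product

# Check that "all A are B" and "all non-B are non-A" agree on every way of
# carving a small three-element universe into sets A and B.

universe = (0, 1, 2)

def all_are(xs, ys):
    """True if every member of xs is also a member of ys."""
    return xs <= ys  # subset test on sets

agree = True
for bits_a, bits_b in product(product([0, 1], repeat=3), repeat=2):
    A = {x for x, keep in zip(universe, bits_a) if keep}
    B = {x for x, keep in zip(universe, bits_b) if keep}
    non_A = set(universe) - A
    non_B = set(universe) - B
    if all_are(A, B) != all_are(non_B, non_A):
        agree = False

print(agree)  # True: the statement and its contrapositive always agree
```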
contrary propositions – Propositions that are mutually exclusive. For example, “Socrates is a man” and “Socrates is a dog” are contrary propositions. (Both statements refer to the historical philosopher.)
convention – What is true based on agreement or a common understanding. For example, it’s a convention that people drive on the right side of the road in the United States (on two way roads), so it would be generally wrong to drive on the left side of the road in the United States.
converse – A categorical proposition or if/then statement with the two parts switched. The converse of “all a are b” is “all b are a.” (“a” and “b” are any two terms.) The converse of “if c, then d” is “if d, then c.” (“c” and “d” are any two propositions.) For example, the converse of “if all fish are animals, then all fish are organisms” is “if all fish are organisms, then all fish are animals.” It is valid to infer the converse of any categorical statement with the form “no a are b” or “some a are b.” See “conversion” for more information.
conversion – To switch the terms of a categorical statement. There are two valid types of conversion: (a) “No a are b” means the same thing as “no b are a.” (b) “Some a are b” means the same thing as “some b are a.” For example, the following is a valid argument—“No birds are dogs. Therefore, no dogs are birds.”
corpuscles – Small units of matter of various shapes and with various physical properties that interact with one another.
correlation – When two events or characteristics tend to be found together. For example, not drinking water and being thirsty are correlated because they tend to be found together. “Correlation” can be contrasted with “causation.”
correspondence theory of truth – The view that true propositions correspond or relate properly to facts (or to reality). Correspondence theories of truth are compatible with “factual truths” and various forms of “realism.” The “correspondence theory of truth” is often contrasted with the “deflationary theory of truth.”
corrigible – Propositions or beliefs that can be improved or corrected by new information.
counter evidence – Evidence against a belief.
counterargument – An objection to an objection. An argument used to refute, disprove, or oppose an objection. For example, someone could argue against the belief that hurting people is always wrong by saying, “Hurting people in self-defense is never wrong, so it can’t always be wrong to hurt people.” Someone else can respond to that objection by giving a counterargument and saying, “Hurting people in self-defense is wrong when it involves excessive force, such as when we kill someone just for kicking us.”
counterexample – (1) An object or state of affairs that disproves a belief. For example, a white raven disproves the belief that “all ravens are black.” (2) An argument meant to prove another argument to be logically invalid by using the same argument form as the other argument, but the counterexample must have obviously true premises and an obviously false conclusion. Consider the invalid argument, “If dogs are lizards, then dogs are reptiles. Dogs are not lizards. Therefore, dogs are not reptiles.” A counterexample would be, “If dogs are reptiles, then dogs are animals. Dogs are not reptiles. Therefore, dogs are not animals.”
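In propositional logic, constructing a counterexample to an argument form amounts to finding a row of the truth table where the premises are true and the conclusion is false. The sketch below searches for such a row for the form of the invalid argument above (“if a, then b; not-a; therefore not-b”):

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material conditional

# Search for a truth-value assignment that makes the premises of
# "if a, then b; not-a; therefore not-b" true while the conclusion is false.
for a, b in product([True, False], repeat=2):
    premises_true = implies(a, b) and (not a)
    conclusion_false = not (not b)   # i.e. b is true
    if premises_true and conclusion_false:
        print(f"counterexample row: a={a}, b={b}")  # a=False, b=True
```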
counterfactual – Conditional statements about what would be the case if something else weren’t the case (that is actually the case). For example, “If Socrates had not been mortal, then Socrates would not have been human.” Socrates was mortal, so the counterfactual requires us to imagine what would have been the case if things were different.
counterintuitive – Something that conflicts with what we think we know for some reason. For example, it would be counterintuitive to find out that other people don’t have mental activity. What we find counterintuitive is often taken to be a reason for thinking something is false, but sometimes what we initially find to be counterintuitive is proven to be true. For example, people find it (at least mildly) counterintuitive to think that large objects fall at the same speed as small ones, but it’s been proven to be true.
credence – A subjective degree of confidence concerning how likely we believe something is to be true. For example, it would seem irrational to be highly confident that the law of gravity will no longer exist tomorrow. See “psychological certainty.”
credence function – A comparison between the actual state of the world and the credence (subjective degree of confidence) a person has of the world being that way. Ideally people will have a strong credence towards factual statements. For example, people should be very confident that more than five people exist considering that society couldn’t function without thousands of people existing.
criteria – Standards used for making distinctions (singular: “criterion”). For example, empiricists think the only relevant criterion that determines whether something is a good justification is that it’s based appropriately on empirical evidence (observation).
critical reasoning – A synonym for “critical thinking.”
critical thinking – An understanding of argument analysis and fallacies. It is often equated with “informal logic,” but any qualities that lead to an increased understanding of rationality and an increased ability to be reasonable could be involved.
criticism – (1) An argument that is meant to persuade us to reject a belief or another argument. See “objection.” (2) Disparaging remarks, fault-finding, or judging something as falling short of certain requirements or standards.
cum hoc ergo propter hoc – Latin for “with this, therefore because of this.” A logical fallacy committed when an argument concludes that something causes something else to happen due to a correlation. For example, the fact that a person takes a sugar pill before recovering from an illness doesn’t prove that she recovered from the sugar pill. She might have recovered for some other reason. This fallacy is a version of the “false cause” fallacy.
cultural evolution – A synonym for “sociocultural evolution.”
cultural relativism – (1) The view that moral statements are true because we agree on their truth (or merely because we believe they are true). Rape and murder would be considered wrong for a society if that society agrees that they are wrong, but might be considered to be right in another culture. Cultural relativism refers to the view that moral statements are true because a culture agrees with them, but other forms of moral relativism could be individualistic—what’s right and wrong could depend on the individual. One form of relativism is the view that morality is determined by a “social contract.” Relativism should be contrasted with the view that an action could be either right or wrong depending on the context. (2) The view that the moral beliefs of various cultures differ. What one culture says is right or wrong is often different from what another culture says is right and wrong.
cynicism – (1) The practice of a philosophical group known as the cynics. The cynics were skeptical of argumentation and theorizing, and they focused on becoming virtuous, which they generally didn’t think required very much argumentation or theorizing. Cynics generally focused on being happy, free from suffering, and living in accordance with nature. The cynics were known for disregarding cultural taboos and believing that taboos are irrelevant to being virtuous. (2) It often refers to a pessimistic attitude. Cynicism can be characterized by mistrust towards people and the expectation that people will misbehave. (3) A skeptical attitude characterized by criticism towards various beliefs and arguments.
Das Man – German for “they self” and often translated as “the they” or “the one.” Martin Heidegger uses this term to refer to the social element of human beings—that we act for others, and that our thoughts are based on those of others. For example, we tell our children “one shares toys with others” when we want to teach them social norms.
Dasein – German for “being there.” Martin Heidegger’s term used for human beings to emphasize the view that they are not objects. Heidegger rejected the subject/object distinction and thought it led to the mistaken view of dualism—that the mind and body are totally different things. Dasein is used as a verb rather than a noun to emphasize that we are what we do and not an object of some sort.
de dicto – Latin for “of the word” or “of what is said.” For example, a person can consistently believe that water (i.e. a liquid we drink for survival that freezes when cold and turns to gas when hot) can boil at a lower temperature than H2O (a molecule consisting of two different chemical elements) under a de dicto interpretation. “De dicto” is often contrasted with “de re.”
de facto – Latin for “concerning fact.” Used to describe the actual state of affairs or practice regardless of what’s right or lawful. For example, a dictator could find an illegitimate way to attain power and be a ruler de facto. “De facto” is often contrasted with “de jure.”
de jure – Latin for “concerning law.” Used to describe a situation in terms of the law or ethical considerations. For example, a dictator who attains power illegitimately would not be in power de jure. “De jure” is often contrasted with “de facto.”
de re – Latin for “of the thing.” For example, a person can not coherently believe that water can boil at a lower temperature than H2O because they both refer to the same thing under a de re interpretation. “De re” is often contrasted with “de dicto.”
debate – A prolonged discussion concerning a disagreement that is characterized by two or more opposing sides that (a) try to give reasons to believe a conclusion, (b) try to explain why the conclusions of the opposing side should be rejected, and (c) try to explain why the arguments given by the opposing side should be rejected. Debates need not be between two people and they need not exist in a face-to-face presentation. A single philosophical essay can be considered to be part of a debate that’s been going on for hundreds or thousands of years by philosophers in different time periods who read various arguments and respond to them.
decidability – A question is decidable if there is an effective procedure that can determine the answer. For example, a logical system is decidable if it provides a procedure that can determine whether any given argument is valid; an argument whose validity can’t be determined by such a procedure is “undecidable” within that system. Decidability is related to, but distinct from, completeness. See “semantic completeness” for more information.
decision theory – See “utility theory.”
deconstructionism – A philosophical domain concerned with examining the assumptions behind various arguments and beliefs (known as “deconstruction”).
deduction – Reasoning or argumentation that attempts to prove a conclusion is true as long as we assume the premises are true. Good deductive arguments are logically valid. For example, “All men are mortal. Socrates is a man. Therefore, Socrates is a mortal” is a logically valid deductive argument. Deduction is often contrasted with “induction.”
deductive reasoning – See “deduction.”
deductively complete – See “syntactic completeness.”
default position – The position that lacks the burden of proof before debate begins (perhaps because it is rationally preferable). For example, the default position of a debate tends to be an undecided point of view against both those who are for and those who are against some belief. Both sides of a debate are therefore expected to argue for their particular beliefs.
defeasible – Reasoning is defeasible if it’s rationally compelling without being logically valid. The support the premises have for the conclusion could be insufficient depending on certain unstated facts. A defeasible argument can be defeated by additional information. Defeasible arguments could be considered to be reasons to believe something, all things equal—one consideration in favor of a conclusion. The opposite of “defeasible” is “indefeasible.”
defeater – The information that can defeat a defeasible argument. Defeaters are reasons against conclusions that are more important than the previous defeasible support for the conclusion.
defense – (1) A defensive argument against an objection (i.e. a “counterargument”). (2) A response to various objections in an attempt to explain why they aren’t convincing. (3) The opposition to an attack.
definable concept – A concept that can be defined and understood in terms of other concepts. For example, we can define “valid argument” as an argument with a form that assures us that it can’t have true premises and a false conclusion at the same time. “Definable concepts” can be contrasted with “primitive concepts.”
definiendum – The term that is defined by a definition. Consider the definition of “argument” as “one or more premises that supports a conclusion.” In this case the definiendum is “argument.” “Definiendum” can be contrasted with “definiens.”
definiens – The definition of a term. Consider the definition of “premise” as “a proposition used to give us reason to believe a conclusion.” In this case the definiens is “a proposition used to give us reason to believe a conclusion.” “Definiens” can be contrasted to “definiendum.”
deism – The view that one or more gods exist, but they are not people and/or they don’t interfere with human affairs. For example, Aristotle’s first cause (i.e. prime mover).
deity – See “god.”
deflationary – (1) The property of involving truth or reality without involving it as strongly as we might otherwise expect. Deflationary truth involves truth without any assumption regarding realism, but deflationary metaphysics could be compatible with realism (i.e. the existence of facts). (2) To have the property of shrinking or collapsing.
deflationary theory of truth – The view that to assert a statement to be true is merely to assert the statement, and that there is nothing more to be said about what “truth” means. The deflationary theory of truth is compatible with “nonfactual truths” and is sometimes contrasted with the “correspondence theory of truth.”
deflationism – See the “deflationary theory of truth.”
Demiurge – (1) A godlike being theorized by Plato that is thought to be similar to an artisan who crafts and maintains the physical universe. Plato did not describe the Demiurge as the creator of the entire physical universe, and Platonists often thought that the entire physical universe was created or dependent on a greater being called “the Good.” (2) According to Neoplatonists, the Demiurge is “Nous” (the mind or intellect of the Good).
democracy – A political system where people share power by voting. Many democracies have people vote for “representatives” who have the majority of the ruling power. (Representative democracies are also known as “republics.”)
DeMorgan’s laws – A rule of replacement that takes two forms: (a) “It’s not the case that both a-and-b” means the same thing as “not-a and/or not-b.” (b) “It’s not the case that a and/or b” means the same thing as “not-a and not-b.” (“a” and “b” stand for any two propositions.) For example, “it’s not the case that dogs are either cats or lizards” means the same thing as “no dogs are cats, and no dogs are lizards.”
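Both forms of the rule can be verified exhaustively with a small truth-table check; the following is a minimal sketch in plain Python:

```python
from itertools import product

# Verify both forms of DeMorgan's laws over every truth-value assignment.
law_1 = all(
    (not (a and b)) == ((not a) or (not b))
    for a, b in product([True, False], repeat=2)
)
law_2 = all(
    (not (a or b)) == ((not a) and (not b))
    for a, b in product([True, False], repeat=2)
)
print(law_1 and law_2)  # True
```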
denying a conjunct – A logical fallacy committed by arguments with the following form—“It’s not the case that both a-and-b. Not-a. Therefore, b.” This argument form is logically invalid. For example, “Socrates isn’t both a dog and a person. Socrates isn’t a dog. Therefore, Socrates is a person.”
denying the antecedent – An invalid argument with the form “if a, then b; not-a; therefore, not-b.” A counterexample is, “If all dogs are reptiles, then all dogs are mammals. It’s not the case that all dogs are reptiles. Therefore, it’s not the case that all dogs are mammals.”
deontic logic – A formalized logical system that uses “deontic quantifiers.”
deontic quantifier – A symbol used in formal logic to state when an action is obligatory (O), permissible (P), or forbidden (F). For example, “Op” means that “p” is obligatory.
deontology – Moral theories that state that there is something other than consequences that determine which actions are right or wrong, but deontologists also reject virtue ethics (which is primarily concerned with what it means to be a good person rather than what actions are right or wrong). For example, see “Kant’s Categorical Imperative.” “Deontology” is often contrasted with “consequentialism.”
derivation – A formal proof of a proposition expressed in formal logic. A derivation can be described as a series of statements that are implied by rules of inference, axioms of a logical system, or other statements that have been derived by those two things. For example, a logical system could have an axiom that states “a or not-a” and have a rule of inference that states “a implies a or b.” In that case the following is a derivation—“a or not-a. Therefore, a or not-a or b.” See “axioms,” “rules of inference,” “logical system,” and “theorem” for more information.
descriptive – (1) Statements that help us understand the nature of things or aspects of reality. (2) Value-free information about the nature of things or reality. “Descriptive” is often contrasted with “prescriptive” or “evaluative.”
desire – Motivation or yearning. For example, a hungry person desires food. Desire is sometimes thought of only as motivation related to the body rather than as motivation caused by reasoning or ethical considerations. “Desire” can be contrasted to Immanuel Kant’s conception of “good will.”
desire-dependent reason – A reason for an action that depends on a desire. For example, a person who yearns to eat chocolate has a reason to eat chocolate. “Desire-dependent reasons” can be contrasted with “desire-independent reasons.”
desire-independent reason – A reason for action other than a desire. For example, John Searle argues that promises are desire-independent reasons. If you promise to do something, then you have a reason to do it, even if you don’t desire to do it. “Desire-independent reasons” can be contrasted with “desire-dependent reasons.”
destiny – (1) A fated course of events, which is generally thought to be fated due to a person having a certain purpose. For example, King Arthur could have been said to be destined to become a king insofar as he was meant to be a king and would become a king no matter what choices he made. (2) A probable future event involving a person’s purpose that could be willfully achieved, but could be avoided given resistance. Perhaps King Arthur was destined to become king and could make choices to become the king, but could have fought against his destiny and become a blacksmith instead.
determinism – The view that everything that happens is inevitable and couldn’t have been otherwise. Causal determinism is the view that the prior state of the universe and laws of nature were sufficient to cause later states of the universe. Determinism is not necessarily incompatible with the view that our decisions help determine what happens in the world, but the decisions we make could also be determined.
deterrence – A justification for punishment in terms of the fear that the punishment causes people in order to prevent crimes. Rational people are expected to choose not to commit the crime in order to avoid punishment. For example, many people argue that the death penalty is a justified punishment for murder because it will deter people from killing others.
deus – Latin for “god” or “divinity.”
deus ex machina – Latin for “god from the machine.” Refers to solving problems via miracles, or in unreasonable and simplistic ways.
dialectic – A process involving continual opposition and improvement. For example, Socratic dialectic occurs during a debate when hypotheses are presented, proven to be inadequate, then new and improved hypotheses are presented. Someone could claim that justice is refusing to harm people; and someone else could argue that sometimes it’s unjust to refuse to help someone, so justice can’t be sufficiently defined as merely refusing to harm people. A new claim could then be presented that defines justice as refusing to harm people and being willing to help people. One conception of dialectic is said to consist of at least one “thesis,” “antithesis,” and “synthesis.”
dialectical materialism – The view that economic systems face various problems and solutions are offered for those problems until they are replaced by an improved economic system. For example, slavery was replaced by feudalism, and feudalism was replaced by capitalism; and each of these systems faced fewer or less severe problems than those that existed previously. Dialectical materialists often think that communism is the ultimate economic system that will no longer face problems. See “dialectic” for more information.
dialetheism – The view that the “law of non-contradiction” is false—that contradictions can exist. If dialetheism is true, then a statement can be both true and false at the same time.
dictatorship – A political system defined by a single person who has the supreme power to rule.
difference principle – A principle of John Rawls’s theory of justice (i.e. “Justice as Fairness”) that requires that we only allow economic and social inequality if it benefits the least-well-off group of society. For example, many people believe that capitalism helps both the rich and the poor insofar as it motivates people to work hard to make more money (which could lead to economic prosperity), and the difference principle could be used to justify an unequal distribution of wealth assuming it can justify capitalism in this way.
Ding an sich – German for “thing in itself.”
discursive – (1) Involving “inferential reasoning.” (2) Rambling or discussing a wide range of topics.
discursive concept – According to Immanuel Kant, discursive concepts are general concepts known through inferential reasoning or experience rather than concepts known from a “pure intuition” (that don’t depend on experience or generalization). For example, the concept of the person is a discursive concept because we can only understand the concept of the person from having various experiences and generalizing from those experiences. “Discursive concepts” can be contrasted with “non-discursive concepts.”
discursive reasoning – A synonym for “inferential reasoning.”
disjunct – The first or second part of a disjunction. Disjunctions have the form “a or b,” so both “a” and “b” are disjuncts. Consider the disjunction, “either Socrates is a man or he’s a dog.” That disjunction has two disjuncts: (a) Socrates is a man and (b) Socrates is a dog.
disjunction – An either-or proposition. Disjunctions have the logical form “a or b.” The symbol for disjunction in symbolic logic is “∨.” An example of a statement using this symbol is “A ∨ B.” There are two kinds of disjunctions—the “inclusive or” and the “exclusive or.”
disjunctive syllogism – A valid argument with the following form – “a or b; not-a; therefore b.” For example, “Either all dogs are reptiles or all dogs are mammals. Not all dogs are reptiles. Therefore, all dogs are mammals.”
dispreferred – See “suberogatory.”
distribution – (1) When a categorical statement applies to all members of a set or category. For example, the statement, “all cows are mammals,” distributes cows, but not mammals because it says something about all cows, but it doesn’t say anything about all mammals. (2) A rule of replacement that takes two forms: (a) “a and (b and/or c)” means the same thing as “(a and b) and/or (a and c).” (b) “a and/or (b and c)” means the same thing as “(a and/or b) and (a and/or c).” (“a”, “b,” and “c” stand for any three propositions.) For example, “all lizards are reptiles, and all lizards are either animals or living organisms” means the same thing as “either all lizards are reptiles and animals, or all lizards are reptiles and living organisms.” (3) The way something is given away. For example, “distributive justice.” (4) Statistical differences. For example, “probability distribution.”
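As with DeMorgan’s laws, the two distribution equivalences in sense (2) can be verified by an exhaustive truth-table check; the sketch below confirms them for every assignment of truth values to a, b, and c:

```python
from itertools import product

# Verify both distribution equivalences over all truth-value assignments.
form_a = all(
    (a and (b or c)) == ((a and b) or (a and c))
    for a, b, c in product([True, False], repeat=3)
)
form_b = all(
    (a or (b and c)) == ((a or b) and (a or c))
    for a, b, c in product([True, False], repeat=3)
)
print(form_a and form_b)  # True
```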
distributive justice – The domain of economic justice concerned with how we should determine the allocation or distribution of goods, services, opportunities, and privileges. For example, laissez-faire capitalism distributes goods and services based on voluntary transactions. In general, people will conduct business to make money and use the money to buy other goods and services. However, some people believe that distributive justice demands that we engage in redistribution of wealth because they believe it would be unjust to allow people who have no money to suffer or starve to death.
divine command theory – The view that things are right or wrong because one or more gods commands us to behave a certain way (or favors us to behave a certain way). For example, murder is wrong because one or more gods commands us not to murder other people. Divine command theory requires us to reject that there is rational criteria that determines right and wrong. For example, the divine command theorist might say that God commands us not to murder other people, but that God has no reason to command such a thing other than perhaps having various emotions. Many people reject divine command theory because of the “Euthyphro dilemma.” Many people believe that divine command theory is a form of “subjectivism” because right and wrong would merely describe the subjective states of one or more gods.
divine plan – (1) A course of events that were fated from a divinity. (2) A synonym for “divine providence.”
divine providence – The view that everything that happens in the universe is guided and controlled by a divinity. It is generally believed that the divinity controls the universe to make sure that better things happen than would happen otherwise. Sometimes it is believed that the divinity assures us that everything that happens is predestined and “for the best” (or at least that “everything happens for a good reason”). It is often thought that divine providence is a logical consequence of the assumption that God exists and is all-good, all-knowing, and all-powerful; and it is often thought to conflict with our experiences of evil in the world. See “the problem of evil” for more information.
divinity – A god or godlike being. See “God,” “Demiurge,” “Monad,” “the Good,” or “Universal Reason.”
division – (1) See “fallacy of division.” (2) A mathematical operation based on a ratio or fraction. For example “4 ÷ 2 = 2.” (3) To split objects into smaller parts.
doctrine of the maturity of chances – (1) The false assumption that the past results of a random game will influence the future results of the game. For example, a person who loses at blackjack five times in a row might think that she is more likely to win if she plays another game. (2) See the “gambler’s fallacy.”
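A quick simulation illustrates why this assumption is false for independent events: even immediately after a streak of five losses on a fair 50/50 game, the chance of winning the next round is still about one half. The sketch below uses a fair coin as a stand-in for any such game:

```python
import random

# Simulate a fair coin to illustrate why the "maturity of chances" is a mistake:
# even after a streak of five losses, the chance of winning the next flip is
# still roughly 50%, because the flips are independent.
random.seed(0)

next_after_streak = []
for _ in range(100_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if not any(flips[:5]):              # first five flips were all losses
        next_after_streak.append(flips[5])

print(round(sum(next_after_streak) / len(next_after_streak), 3))  # close to 0.5
```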
dogmatism – Close-mindedness. To be unwilling to change one’s mind even if one’s beliefs are proven to be unreasonable.
dominance – (1) See “stochastic dominance.” (2) Relating to having control over others.
double negation – (1) A rule of replacement that states that “a” and “not-not-a” both mean the same thing. (“a” stands for any proposition.) For example, “Socrates is a man” means the same thing as “it’s not the case that Socrates isn’t a man.” (2) A “double negative.” When it’s said that something isn’t the case twice. For example, “it’s not the case that Mike didn’t turn the TV on” means the same thing as “Mike turned the TV on.”
Downing effect – The tendency for people with below average IQ to overestimate their IQ, and for people with above average IQ to underestimate their IQ. This bias could be related to the “Dunning–Kruger effect.”
doxastic – Something that relates to beliefs or is a lot like a belief, such as judgment or desire.
doxastic logic – A formal logical system with modal operators for having various beliefs.
dualism – (1) The view that there are two fundamental different kinds of things, such as mind and matter. See “property dualism” and “substance dualism” for more information. (2) A binary opposition, such as that between good and evil.
due process – (1) Procedures and safeguards to protect our rights. For example, the right to a fair trial. (2) Rights that are needed for appropriate dispute resolution, such as the right to appeal, to defend oneself from accusations, and to protect oneself from unjustified harm or punishment.
duty – (1) What must be done. See “obligation.” (2) In Metaphysische Anfangsgründe der Tugendlehre, Immanuel Kant described “duty” as a normative continuum ranging from obligatory to heroic. Some philosophers believe that Kant always had this definition of duty in mind. (3) The concept of duty used by the Stoics was that of an appropriate action—actions that are rationally preferable. The Stoic concept of duty was not that of what must be done, as the term often implies in our day and age.
Dunning–Kruger effect – The cognitive bias defined by the tendency of unskilled people to overestimate how skilled they are because they don’t know about all the mistakes they make. This bias could cause many people to be overconfident concerning the likelihood that their beliefs are justified or true. This bias is likely related to the “Downing effect.”
E-type proposition – A proposition with the form “no a are b.” For example, “no cats are reptiles.”
economy – (1) A system involving the production of goods and services, and wealth distribution. See “capitalism” and “socialism” for more information. (2) Thrifty management.
efficient cause – That which makes something move around or makes things change. For example, the efficient cause of a billiard ball’s movement could be the event of another billiard ball that rolled into it.
egoism – Relating to oneself. See “ethical egoism” or “psychological egoism.”
eliminative materialism – (1) The view that the mind does not exist as many people think, and that the concepts of “folk psychology” (e.g. beliefs and desires) are inaccurate views of reality. The mind is understood instead as certain brain activity or functions. (2) The view that physics describes reality as it exists best and nothing outside of physics describes reality accurately. Eliminative materialism endorses a form of reductionism that requires us to try to find out the parts something is made out of to find out what it really is. For example, eliminative materialists tend to think that psychological activity is actually brain activity, and they are likely to reject the existence of “qualia.” “Eliminative materialism” requires us to reject “emergence.”
eliminative reductionism – The view that the ultimate reality is made up of small parts, like subatomic particles. We can find out what things really are by finding out what parts they are made of. For example, water is actually H2O (or whatever H2O is made of). The physicalist conception of “eliminative reductionism” is “eliminative materialism.”
eliminativism – See “eliminative materialism.”
emergence – (1) Epistemic emergence refers to our inability to know how to reduce one phenomenon into another. For example, chemistry is epistemically emergent insofar as we don’t know how to reduce it to physics—the laws of physics seem insufficient to predict the behavior of all chemical reactions. (2) Metaphysical emergence refers to when something is “greater than the sum of its parts” or the irreducible existence of a phenomenon that exists because of an underlying state of affairs. For example, some scientists and philosophers think that the mind is an emergent phenomenon that exists because of brain activity, but the mind is not the same thing as brain activity.
emanation – How lower levels of existence, such as physical reality, flow from and depend on an ultimate eternal being. Those who believe in emanation tend to think that the ultimate reality is God or “the Good.” Emanation is the idea that creation is ongoing and eternal rather than out of nothing. In that sense the physical universe has always existed.
emanationism – The view that reality as we know it exists from emanation—all of existence as we know it depends on and constantly flows from an ultimate eternal being. See “emanation” for more information.
emotivism – An anti-realist noncognitive metaethical theory that states that moral judgments are emotional expressions. For example, saying, “The death penalty is immoral” actually expresses one’s preference against the death penalty and it means something like saying, “The death penalty, boo!” Although emotivism holds that moral judgments express emotions, the emotions expressed don’t have to actually be experienced by anyone.
empirical – Evidence based on observation.
empirical apperception – According to Immanuel Kant, this is the consciousness of an actual self with changing states or the “inner sense.” “Empirical apperception” can be contrasted with “transcendental apperception.”
empirical intuition – Intuitive justification that is based on a person’s background knowledge concerning observation (empirical evidence). It can be difficult for a person to explain why she finds various beliefs to be plausible even if they are based on her observations, and she can say that those beliefs are “intuitive” as a result. For example, it was intuitive for many early scientists to expect objects that fall from a moving surface (such as a sailing ship) to continue moving in the direction the surface was moving as they fall, and we have confirmed that belief to be true (depending on various other factors). This belief is now a rational expectation based on the law of inertia (Newton’s First Law of motion)—an object at rest stays at rest and an object in motion stays in motion with the same speed and direction unless it is acted upon by an outside force.
empiricism – The philosophical belief that all knowledge about the world is empirical (based on observation). Empiricists believe that we can know what is true by definition without observation, but that beliefs about the world must be based on observation. Empiricists reject innate ideas, noninferential reasoning, and self-evidence as legitimate sources of knowledge.
end in itself – Something that should be valued for its own sake. See “final end.”
endurance theory – See “endurantism.”
endurantism – The view of persistence and identity that states that a persisting thing is entirely present at every moment of its existence. Endurantists believe that things can undergo change and still be the same thing. For example, a single apple can be green and then turn red at a later time. Endurantists believe that persisting things have spatial parts, but they don’t have temporal parts. See “temporal parts” for more information. “Endurantism” is often contrasted with “perdurantism.”
endure – (1) For a single thing to fully exist at any given moment in time, and to continue to exist at different moments in time. Things that endure could undergo various changes, but are not considered to be “different things” as a result. For example, an apple can be green at an earlier point in time, and it can turn red at a later point in time. See “endurantism” for more information. (2) To survive adversity or to continue to exist despite being changed. (3) To tolerate an attack or insult.
entailment – (1) A logical implication that is properly relevant or connected. For example, “if all dogs are mammals, then Socrates is a man” is true, according to classical logic, but it is counterintuitive and could even be considered to be false in ordinary language. “Relevance logic” is an attempt to make better sense out of how implications should be properly connected as ordinary language requires them to be. (2) A valid logical implication. The premises entail the conclusions of valid arguments.
enthymeme – (1) A categorical syllogism with an unstated premise. For example, “all acts of abortion are immoral because all fetuses are persons.” In this case the missing premise could be “all acts of killing people are immoral.” (2) Any argument with an unstated premise or conclusion. For example, “all fetuses are people and all acts of killing people are immoral” has the unstated conclusion “all acts of abortion are immoral.”
entity – A phenomenon, being, part of reality, or thing that exists.
eon – See “æon.”
epicureanism – (1) The philosophy of Epicurus who thought that everything is physical, that pleasure is the only good, pain is the only evil, and that gods don’t care about human affairs. (2) The view or attitude that mindless entertainment and pleasures are more important than intellectual or humanitarian pursuits.
epiphenomenalism – The view that psychological phenomena have no effect on nonpsychological physical phenomena. If epiphenomenalism is true, then our thoughts and decisions could be a byproduct of a brain and be incapable of making any difference to the motions of our body. For example, stopping pain would never be a reason that we actually decide to see the dentist when we have a cavity. Instead, the brain might fully determine that we go to the dentist based on the physical motion of particles.
epistêmê – Greek for “theoretical knowledge.”
epistemic anti-realism – The view that there are no facts relating to rationality or justification (other than what is true based on our mutual interests or collective attitudes). For example, we might say that it’s true that believing “1+1=3” is unjustified and irrational, but anti-realists might say that we merely tend to dislike the concept of some people believing such a thing, and this mutual interest led to talk concerning what we ought not believe.
epistemic certainty – The degree of justified confidence we have in our beliefs. To be certain that something is true could mean (a) that we have a maximal degree of justification for that belief, (b) that we can’t doubt that it’s true, or (c) that it’s impossible for the belief to be false. To be absolutely certain that something is true is to have no chance of being wrong. For example, we are plausibly absolutely certain that “1+1=2.”
epistemic externalism – (1) The view that proper justification (or knowledge) could be determined by factors that are external to the person. For example, reliabilists think that a belief is only justified if it’s formed by a reliable process (e.g. scientific experimentation). (2) The view that a person does not always have access to finding out what makes her beliefs justified. For example, we think we know that induction is reliable, but we struggle to explain how we could justify such a belief with an argument. (3) The view of justification as being something other than the fulfillment of our intellectual duties. For example, beliefs could be justified if they are more likely true than the alternatives. Newton’s theory of physics was unable to predict the motion of Mercury around the Sun, but Einstein’s theory of physics was able to, so that one consideration seems to imply that Einstein’s theory is more likely true or accurate.
epistemic internalism – (1) The view that proper justification (or knowledge) can only be determined by factors that are internal to the person. For example, “mentalism” states that only mental states determine if a belief is justified. (2) The view that a person can become aware of what makes her beliefs justified through reflection. For example, everyone who knows “1+1=2” can reflect about it to find out how their belief is justified or they don’t know it after all. (3) The view that justification concerns the fulfillment of our intellectual duties. For example, justification would require that we fulfill the duty not to contradict ourselves.
epistemic intuitionism – The view that we can justify various beliefs using intuition, and it’s generally a form of rationalism.
epistemic modality – The distinction between what is believed and what is known. Moreover, epistemic modality can involve the degree of confidence a belief warrants. For example, we know that more than three people exist and we are highly confident that this belief is true. We communicate epistemic modality through terms and phrases, such as “probably true,” “rational to believe,” “certain that,” “doubt that,” etc.
epistemic naturalism – The view that natural science (or the methods of natural science) provides the only source of factual knowledge. Knowledge of tautologies or what’s true by definition is not relevant to epistemic naturalism. “Epistemic naturalism” is often only used to refer to one specific field. For example, one could be an epistemic naturalist regarding morality, but not one regarding logic. A moral epistemic naturalist would think that we can learn about morality through natural science (or the same methods used by natural science). See “empiricism” for more information.
epistemic objectivity – Beliefs that are reliably justified (e.g. through observation or the scientific method) or justified via a process that can be verified by others using some agreed-upon process. In this case the existence of laws of nature would be objective, but the existence of a person’s pain might not be. “Epistemic objectivity” can be contrasted with “epistemic subjectivity.”
epistemic randomness – When something happens that is not reliably predictable. For example, when we roll a six-sided die, we don’t know what number will come up. We say that dice are good for attaining random results for this reason. “Epistemic randomness” can be contrasted with “ontological randomness.”
epistemic realism – The view that there is at least one fact of rationality or justification that does not depend on a social construction or convention. Epistemic realists often think there are certain things people should believe and that people are irrational if they disagree. For example, it is plausible that we should agree that “1+1=2” because it’s a rational requirement.
epistemic relativism – Also known as “relativism of truth.” The view that what is true for each person can be different. For example, it might be true for you that murder is wrong, but not for someone else. Relativism seems to imply that philosophy is impossible because philosophers want to discuss reality and what’s true for everyone. Relativism is often said to be self-defeating because it makes a claim about everything that’s true—but that implies that relativism itself is relative.
epistemic state – A psychological state related to epistemology, such as belief, degrees of psychological certainty, perception, or experience.
epistemic subjectivity – Beliefs that are unreliably justified or beliefs that can’t be verified by others through some agreed-upon process. For example, a plausible example is believing something is true because it “feels right.” “Epistemic subjectivity” can be contrasted with “epistemic objectivity.”
epistemic utility theory – The view that we should determine our epistemic states (e.g. beliefs) based on which epistemic states we value the most. Epistemic states could be said to be “rational” when they are states we significantly value more than the alternatives, and “irrational” when they are states we significantly value less than the alternatives. For example, the epistemic state of believing that jumping up and down is the best way to buy doughnuts is significantly worse than the alternatives, so believing such a thing is irrational.
epistemic vigilance – Attributes and mechanisms that help people avoid deception, manipulation, and confusion. For example, we intuitively tend not to trust claims that seem to be “too good to be true” from people who want to sell us something, which helps us stay vigilant against those who want to manipulate us.
epistemology – The philosophical study of knowledge, rationality, and justification. For example, empiricism is a very popular view of justification, and the scientific method is generally a reliable source of knowledge.
equivalence – A rule of replacement that takes two forms: (a) “a if and only if b” means the same thing as “if a, then b; and if b, then a.” (b) “a if and only if b” means the same thing as “either both a and b, or both not-a and not-b.” (“a” and “b” stand for any two propositions.) For example, “Socrates is a rational animal if and only if Socrates is a person” means the same thing as “Socrates is a rational animal and a person, or Socrates is not a rational animal and not a person.”
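Written symbolically (a restatement of the two forms above, where “∧” stands for “and,” “∨” for “and/or,” “¬” for “not,” “→” for “if…then,” “↔” for “if and only if,” and “≡” for “means the same thing as”):

$$(a \leftrightarrow b) \;\equiv\; \big((a \to b) \land (b \to a)\big) \;\equiv\; \big((a \land b) \lor (\lnot a \land \lnot b)\big)$$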
equivocation – A fallacy that is committed when an argument requires us to use two different definitions for an ambiguous term. For example, someone could argue that everyone has a family tree and trees are tall woody plants, so everyone has a tall woody plant. See “ambiguity” for more information.
equivocal – Ambiguous words or statements (that could have more than one meaning or interpretation). For example, the word “social” can refer to something “socialistic” (e.g. social programs) or to something that has to do with human interaction (e.g. being social by spending time talking to friends).
error theory – (1) The view that states that all moral statements are literally false because they don’t refer to anything, even though moral statements are meant to refer to facts. Nothing is right or wrong, nothing has intrinsic value, and no one is virtuous or vicious. Error theory has been criticized for being counterintuitive. For example, the error theorist would say that it’s false that “murder is wrong.” However, error theorists can endorse “fictionalism” or continue to make moral statements for some other reason. (2) Any theory that requires us to reject the view that concepts of some domain refer to facts, but require us to agree that statements within that domain are meant to relate to facts. For example, an error theorist could reject all psychological facts and say that all psychological statements that we think refer to facts are false. It would then be false that “some people feel pain.”
essence – The defining characteristics of an entity or category. Aristotle argued that objects and animals have an essence. Aristotle’s understanding of essence is a lot like Platonic Forms except he considers it to be part of the object or animal rather than an eternal and immaterial object outside of space and time. For example, Aristotle says that the essence of human beings is “rational animal,” so human beings wouldn’t be human beings if they lacked one of these defining characteristics.
essential characteristic – Characteristics that are necessary to be what one is. For example, Aristotle argues that an essential characteristic of Socrates is that he is capable of being rational because that is essential to being a human—if Socrates is not capable of being rational, then he is not a person. “Essential characteristics” are the opposite of “accidental characteristics.”
essentialism – The view that types of entities can be defined and distinguished using a finite list of characteristics. See “essence” for more information.
eternal return – The view that events will repeat themselves exactly as they occur now over and over ad infinitum in the future. Every person will live again and they will live the exact same life on and on forever. Sometimes the eternal return is presented as a possibility given that the universe has finite possibilities and infinite time.
ethical egoism – The view that people should only act in their rational self-interest. For example, an ethical egoist might believe that a person shouldn’t give money to the poor if she can’t expect to be benefited by it in any way.
ethical libertarianism – See “political libertarianism.”
ethics – The philosophical study of morality. Ethics concerns when actions are right or wrong, what has value, and what constitutes virtue.
etymological fallacy – A fallacy committed by an argument when a word is equivocated with another word it’s historically derived from. For example, “logic” is derived from “logos,” which literally meant “word.” It would be fallacious to argue that “logic” is the study of words just because it is historically based on “logos.”
eudaimonia – Greek for “happiness” or “flourishing.”
eudaimonism – Ethical theories concerned with happiness or flourishing. Eudaimonist theories of ethics tend to be types of “virtue ethics.” Eudaimonists tend to argue that we should seek our happiness (or flourishing), and that virtue is a necessary condition of being truly happy or flourishing. Socrates, Aristotle, Epicurus, and the Stoics are all examples of “eudaimonists.”
Euthyphro dilemma – A problem concerning whether something (such as piety or moral goodness) depends merely on the interest of one or more gods, or whether the interest of the gods is itself based on rational criteria. The Euthyphro dilemma was originally found in a Socratic dialogue called the Euthyphro where Socrates asked if what is pious was pious because the gods liked it or if the gods liked pious things because they were worthy of being liked. Now the “Euthyphro dilemma” is generally used to refer to what is right or wrong—is what is right only right because God likes it, or does God like what is right because of some rational criteria? Many people take this dilemma as a good reason to reject “divine command theory” and to think that what is right or wrong is based on rational criteria. If God exists, then perhaps God likes what is right because of the rational criteria.
evaluative – Concerning the value of things. Statements, judgments, or beliefs that refer to values. For example, “human life is intrinsically good” is an evaluative judgment.
evidence – See “justification.”
evidentialism – The view that beliefs are only justified if and when we have evidence for them.
ex nihilo – A Latin phrase meaning “out of nothing.” It is generally used to refer to the idea that something could come into existence from nothing, and such an idea is often said to conflict with the scientific principle known as the “conservation of energy.”
exclusive or – An “or” that requires exactly one of two propositions to be true. It would be impossible for both propositions to be true. The form of the exclusive or can be said to be “either a or b, and not both a and b.” For example, “either something exists or nothing exists.” It would be impossible for both to be true or for neither to be true. The “exclusive or” is often contrasted with the “inclusive or.”
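Written symbolically (using the common symbol “⊕” for the exclusive or), the form above is:

$$(a \oplus b) \;\equiv\; \big((a \lor b) \land \lnot(a \land b)\big)$$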
exclusive premises – A fallacy committed when categorical syllogisms have two negative premises. There are no logically valid categorical syllogisms with two negative premises. For example, “No dogs are fish. Some fish are not lizards. Therefore, no dogs are lizards.”
existential quantifier – A term or symbol used to say that something exists. For example, “some” or “not all” are existential quantifiers in ordinary language. “Some horses are mammals” means that at least one horse exists and is a mammal, and “not all horses are male” means that there is at least one horse that is not a male. The existential quantifier in symbolic logic is “∃.” See “quantifier” for more information.
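The two examples above can be rendered symbolically as follows (the predicate letters are chosen here purely for illustration): “some horses are mammals” becomes ∃x(Hx ∧ Mx) and “not all horses are male” becomes ∃x(Hx ∧ ¬Ax), where “Hx” means “x is a horse,” “Mx” means “x is a mammal,” and “Ax” means “x is male.”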
exportation – A rule of replacement that states that “if a and b, then c” means the same thing as “if a, then it’s the case that if b, then c.” (“a,” “b,” and “c” stand for any three propositions.) For example, “if Socrates is both a mammal and an animal, then Socrates is a living organism” means the same thing as “if Socrates is a mammal, then it’s the case that if Socrates is an animal, then Socrates is a living organism.”
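In symbols, the exportation rule is:

$$\big((a \land b) \to c\big) \;\equiv\; \big(a \to (b \to c)\big)$$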
extension – What a term refers to. For example, the “morning star” and “evening star” both have the same extension. “Extension” is often contrasted with “intension.”
extensionality – Extensionality is concerned with the reference of words. For example, “the morning star” and “the evening star” both refer to Venus, so they both have the same extensionality. “Extensionality” is often contrasted with “intensionality.” Also see “sense” and “reference” for more information.
existential fallacy – A fallacy that is committed by an argument that concludes that something exists based on the fact that something is true of every member of a set. The form of the existential fallacy is generally “all A are B. Therefore, some A are B.” For example, “All unicorns are horses. Therefore, there is a unicorn and it’s a horse.” Another example is, “All trespassers on this property will be fined. Therefore, there is a trespasser on this property who will be fined.”
existential import – The property of a proposition that implies that something exists. For example, Aristotle thought that the proposition “all animals are mammals” implied that “at least one mammal exists.” However, many logicians now argue that propositions of this type do not have existential import. See the “existential fallacy” for more information.
existentialism – A philosophical domain that focuses on the nature of the human condition. What it’s like to be a human being and what it means to live authentically are also of particular interest. Existentialist philosophers often argue (a) for the meta-philosophical position that philosophy should be a “way of life” as opposed to technical knowledge or essay writing; (b) that each person is ultimately “on their own;” (c) for the view that people should re-examine their values rather than rely on evaluative beliefs passed on by others; (d) that we have no essence, so we need to determine our purpose (or “essence”) through our actions; and (e) that being a human being is characterized by absolute freedom and responsibility.
explaining away – To reveal a phenomenon to not exist after all. To “explain away” a phenomenon is often counterintuitive, inconsistent with our experiences, or insensitive to our experiences. For example, someone who claims that beliefs and desires don’t actually exist because psychological phenomena are really just brain activity of some sort is making a claim that is counterintuitive and conflicts with our experiences. Anyone who makes this claim should tell us why we seem to experience that beliefs and desires exist, and why these concepts are convenient when we want to understand people’s behavior. See “eliminative reductionism” for more information. “Explaining away” can be contrasted with “saving the phenomena.”
explanans – A statement or collection of statements that explain a phenomenon. For example, “people who are hungry generally eat food” is an explanans and can explain why John ate two slices of pizza. “Explanans” is often contrasted with “explanandum.”
explanandum – A phenomenon that’s explained by a statement or series of statements. For example, the fact that John ate two slices of pizza is an explanandum, and we can explain that phenomenon by the fact that “people who are hungry generally eat food.” “Explanandum” is often contrasted with “explanans.”
explicit knowledge – Knowledge that enters into one’s consciousness and can be justified through argumentation by those who hold it. For example, scientists can explain how they know germs cause disease because they have explicit knowledge about it. “Explicit knowledge” is often contrasted with “tacit knowledge.”
expressibility – The ability of a logical system to express the meaning of our statements. For example, consider the argument, “all humans are mammals; all mammals are animals; therefore, all humans are animals.” According to propositional logic, this argument has the form “A; B; therefore, C,” so propositional logic cannot show that the argument is logically valid. However, predicate logic is better able to capture the meaning of these statements and it can prove that the argument is logically valid after all. Therefore, predicate logic is expressively superior to propositional logic given this one example. See “valid argument” for more information.
expressive completeness – A logical system is expressively complete if and only if it can state everything it is meant to express. For example, a system of propositional logic with connectives for “and” and “not” is expressively complete insofar as it can state everything any other connective could state. You can restate “A and/or B” as “it’s not the case that both not-A and not-B.” (“Hypatia is a mammal and/or a mortal” means the same thing as “it’s not the case that Hypatia is both a non-mammal and a non-mortal.”) See “expressibility” and “logical connective” for more information.
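The restatement in the example can be written symbolically as:

$$(a \lor b) \;\equiv\; \lnot(\lnot a \land \lnot b)$$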
extension – To exist in space and time, and to take up space. To have a body or physical shape.
externalism – See “epistemic externalism,” “motivational externalism,” or “semantic externalism.”
externalities – Unintended positive or negative effects on third parties by business transactions. For example, pollution is a negative externality caused by many business transactions. Many people who oppose having a free market without regulation argue that it would be unfair for third parties who are harmed by externalities to not be compensated, and that compensation might not be feasible without regulations.
extrinsic value – A type of value other than “intrinsic value.” For example, “instrumental value” and “inherent value” are types of extrinsic values. Sometimes we say that an action is extrinsically good insofar as it is instrumental for some intrinsic good. For example, eating food is generally good because it often helps us live happier or more pleasurable lives. However, eating food is extrinsically good rather than intrinsically good because eating food without some relation to pleasure, health, or happiness has no value.
fact – (1) A state of affairs, relation, or part of reality that makes a statement true. For example, it’s true that objects fall and will continue to fall because it’s a fact that “the law of gravity exists” and accurately describes reality. (2) A statement that is known to be true—at least by the experts. Facts of this sort are often contrasted with insufficiently justified “opinions.” (3) According to scientists, facts are observations (empirical evidence).
fact/value gap – Some philosophers believe that facts and values are completely different domains that can’t overlap. The fact/value gap refers to the distance they believe these two domains are from each other. This gap is especially important to people who believe that evaluative statements reflect our desires or preferences rather than facts.
factual truth – Statements are factually true when they properly relate to facts or reality. For example, “something exists” is an uncontroversial example of a factually true statement. “Factual truth” can be contrasted with “nonfactual truth.”
faculty – (1) The ability to do something. For example, people have the faculty for rational thought. (2) The teachers of a school or university. For example, the philosophy faculty of a university.
fallacy – An error in reasoning. Formal fallacies are committed by invalid arguments and informal fallacies are committed by errors in reasoning of some other kind.
fallacy fallacy – (1) See “argumentum ad logicam.” (2) A type of fallacy committed by an argument that falsely claims another argument commits a certain fallacy. For example, Lisa could argue that “Sam is an idiot for thinking that only two people exist. We have met many more people than that.” Sam could then respond, “You have committed the ad hominem fallacy. My belief should not be dismissed, even if I am an idiot.” In this case Lisa’s argument does not require us to believe that Sam is an idiot. It is an insult, but it can be separated from her actual argument.
fallacy of composition – A fallacy committed by an argument that falsely assumes that a whole will have the same property as a part. For example, “molecules are invisible to the naked eye. We are made of molecules. Therefore, we are invisible to the naked eye.” The “fallacy of composition” is often contrasted with the “fallacy of division.”
fallacy of division – A fallacy committed by an argument that falsely assumes that a property that a whole has will also be a property of the parts. For example, “We can see humans with the naked eye. Humans are made of molecules. Therefore, we can see molecules with the naked eye.”
fallacy of the consequent – A synonym for “affirming the consequent.”
fallible – Beliefs or statements that possibly contain errors or inaccuracies.
fallibilism – The view that knowledge does not require absolute certainty or justifications that guarantee the truth of our beliefs. “Fallibilism” is the opposite of “infallibilism.”
false – A proposition that fails to be true, such as “1+1=3.” Propositions, statements, and beliefs can be false. “False” is the opposite of “true.”
false analogy – A synonym for “weak analogy.”
false cause – A synonym for “non causa pro causa.”
false conversion – A synonym for “illicit conversion.”
false dichotomy – A synonym for “false dilemma.”
false dilemma – A fallacious argument that requires us to accept fewer possibilities than there plausibly are. For example, we could argue the following—“All animals are mammals or lizards; sharks are not mammals; therefore, sharks are lizards.” False dilemmas are related to the “one-sidedness” fallacy and generally use the logical argument form known as the “disjunctive syllogism.”
false positive – A positive result that gives misleading information. For example, to test positive for having a disease when you don’t have a disease. Let’s assume that 1 of 1000 people have Disease A. If a test is used to detect Disease A and it’s 99% accurate, then it will probably detect that the one person has the disease, but it will also probably have around ten false-positive results—it will probably state that around ten people have Disease A who don’t actually have it.
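A minimal Python sketch of the arithmetic above, assuming that “99% accurate” means the test has both a 99% true-positive rate and a 99% true-negative rate (an assumption the entry leaves implicit):

# Expected outcomes when 1 in 1,000 people have Disease A and the test is
# assumed to have a 99% true-positive rate and a 99% true-negative rate.
population = 1000
prevalence = 1 / 1000
sensitivity = 0.99   # chance a sick person tests positive
specificity = 0.99   # chance a healthy person tests negative

sick = population * prevalence                  # about 1 person
healthy = population - sick                     # about 999 people
true_positives = sick * sensitivity             # about 0.99 people
false_positives = healthy * (1 - specificity)   # about 10 people

# Probability that a positive result is correct (positive predictive value):
ppv = true_positives / (true_positives + false_positives)
print(f"Expected false positives: {false_positives:.1f}")   # roughly 10
print(f"Chance a positive result is correct: {ppv:.0%}")    # roughly 9%

So even with a “99% accurate” test, most positive results in this scenario would be false positives.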
false precision – See “overprecision.”
falsifiability – The ability to reject a theory or hypothesis based on rational criteria. For example, Newton’s theory of physics was rejected on the basis of having more anomalies than an alternative—Einstein’s theory of physics.
falsification – To falsify a theory—to prove it false, likely to be false, or worthy of being rejected based on some rational criteria.
falsificationism – The view that scientific theories and hypotheses can be distinguished from pseudoscientific ones insofar as scientific theories and hypotheses can be falsified—they can be rejected on the basis of rational criteria. In particular, hypotheses can be rejected if they conflict with our observations more than the alternatives. For example, we could hypothesize that all swans are white and that hypothesis would be falsifiable because a single non-white swan would prove it to be false.
fast track quasi-realism – An attempt to make sense out of moral language (such language involving moral facts, mind-independence, and moral truth) without endorsing moral realism by explaining how all such moral language can be coherent without moral realism. “Fast track quasi-realism” is often contrasted with “slow track quasi-realism.” See “quasi-realism” for more information.
fate – (1) A fated event is an inevitable event that will occur no matter what we do. For example, every choice we make will lead to our death; so it’s plausible to think we are all fated to die. Sometimes it’s thought that a fated event is inevitable because of a divine influence. Fate can be said to be a separate concept from “determinism” in that a determinist does not necessarily think that everything that happens will happen no matter what choices we make. (2) “Fate” is another term the Stoics used for the concept of “Universal Reason.” (3) In ordinary language, “fate” is often synonymous with “destiny.”
faulty analogy – A synonym for “weak analogy.”
feminism – The view that women should be treated as equals to men, that they have been systematically treated unjustly, that we should demand greater justice for women, and that we should combat sexism.
fictionalism – (1) The meta-ethical view that moral judgments refer to a fictional domain, and moral statements can be true or false depending on whether or not they accurately refer to the fictional domain. According to fictionalism, it would be true that “murder is wrong,” but only insofar as people agree that it’s wrong (perhaps because a social contract says so). (2) Any domain where statements are meant to refer to a fictional domain. Statements within that domain are true or false depending on whether or not they accurately refer to the fictional domain. For example, we find it intuitive to say that it’s true that unicorns are mammals and that Sherlock Holmes is a detective.
final cause – The purpose of a thing, action, or event. The final cause of scissors is to cut, and Aristotle thought that the final cause of human beings is to use (and improve) their capacity to reason.
final end – Something we psychologically accept to be worthy of desire or valuable for its own sake. For example, money is not a final end, but happiness is often said to be one. If someone asks why you need money, you might need to explain what you will do with the money to justify the need, but “happiness” seems to be worthy of desire without an additional justification. Final ends are often said to be important because if there are no final ends, then there seems to be nothing that makes a decision more ethical than another. A person who wants to get money to get food, wants food to live longer, and wants to live longer just to get money seems to be living a meaningless life. None of these goals are in any sense worthy on their own.
first-person point of view – The perspective of a person as having a unified field of experience that brings together various experiences within a single perspective (e.g. that can be used to experience sight and touch at the same time). The first-person point of view is also often said to be unified in time—we experience things only because there’s a before and after. If we didn’t experience that our experiences are unified in time, then we could not observe objects moving and we couldn’t even experience that an object is “the same object it was earlier.” The “first-person point of view” is often contrasted with the “third-person point of view.”
folk psychology – The everyday or common sense understanding of psychology involving the concepts of “belief” and “desire.” Some philosophers argue that “folk psychology” is false and we should only examine brain activity to know what facts of psychology are really about.
forbidden – A synonym for “impermissible.”
formal cause – The reason that something exists and/or the properties something will have if it perfects itself. For example, a seed’s formal cause is to transform into a plant. Some philosophers argue that “formal causes” are identical to “final causes.”
formal fallacy – An error in reasoning committed by a logically invalid argument. For example, any argument with the form “if a, then b; b; therefore, a” is logically invalid.
formal language – Languages that are devoid of semantics, such as the languages used for formal logic. See “formal logic” and “formal system” for more information. “Formal language” can be contrasted with “natural language.”
formal logic – Logic concerned with logical form, validity, and consistency. “Formal logic” is often contrasted with “informal logic.” See “logical form” for more information.
formal semantics – A domain concerned with the interpretations of formal propositions. For example, “A and B” could be interpreted as “all lizards are reptiles and all dogs are mammals.” (“A” and “B” each represent specific propositions.) See “interpretation,” “translation,” “models,” and “schemes of abbreviation” for more information.
formal system – A syntax-based system generally used for logic or mathematics. Formal systems require people to follow rules and manipulate symbols in order to try to prove something in logic or mathematics, and no subjective understanding of the words is required.
formalism – The view in philosophy of mathematics that mathematics is nothing more than a set of rules and symbols. Mathematics as it exists in computers is an entirely formal system in this way. Some philosophers argue that there is more to mathematics than what computers do.
forms – See “Platonic Forms.”
foundational – The starting point or building blocks that everything else depends on. For example, a foundational belief can be justified without inferential reasoning or argumentation. The axioms of logic are a plausible example of foundational beliefs.
foundationalism – The view that there are privileged or axiomatic foundational beliefs that need not be proven. The source of privileged beliefs could be self-evidence, non-inferential reasoning, or non-empirical intuitive evidence. Foundationalism is one possible solution to the problem of justification requiring an infinite regress or circular reasoning. If everything we know needs to be justified from an argument, then we need to prove our beliefs using arguments on and on forever, or we need to be able to justify beliefs with other beliefs in a circular mutually supportive fashion. However, foundationalism requires us to reject that everything we know must be justified with inferential reasoning.
foundherentism – The view that we should accept a position that combines elements of both foundationalism and coherentism. Foundherentism uses various experiences or observations as a foundational origin of belief, but it (a) allows foundational beliefs to be mutually supportive, and (b) allows us to reject foundational beliefs that are inconsistent with the others. Foundherentism is one way we can try to correct potentially inaccurate beliefs that are based on theory-laden observations (that what we observe is interpreted by us and our assumptions shape how we interpret them). The fact that observations are theory-laden seems to imply that beliefs based on our observations are fallible.
free logic – Formal logical systems that can discuss objects or categories without requiring us to assume that the objects or categories exist.
free rider – Someone who benefits from a collective action between people without doing the work done by the others. For example, a person could benefit from laws against pollution but refuse to abide by those laws to increase the profit of her company.
function – (1) The purpose of something. For example, the function of a knife is to cut things. (2) A synonym for “operation.”
functionalism – Theories of philosophy of mind that state that psychological states are identical with (or “constituted by”) some functional role played within a system (such as a brain). According to many functionalists, both a machine and a human brain might have the same psychological states as long as they both have the same functional activities.
fuzzy logic – (1) Logical systems that use degrees of truth concerning vague concepts. For example, some people are more bald than others. Someone with no hair at all would be truthfully said to be bald, but someone with only a little hair might be accurately described as being bald (with a lesser degree of truth involved). (2) In ordinary language, “fuzzy logic” often refers to poorly developed logical reasoning.
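A minimal Python sketch of degrees of truth, using a made-up membership function for “bald” (the hair-count threshold is an illustrative assumption, not part of any standard fuzzy-logic system):

# Illustrative membership function: maps a hair count to a degree of truth
# between 0 (fully false) and 1 (fully true) for the vague predicate "is bald".
def degree_bald(hair_count, threshold=100_000):
    return max(0.0, 1.0 - hair_count / threshold)

print(degree_bald(0))        # 1.0 -- "bald" is fully true
print(degree_bald(20_000))   # 0.8 -- "bald" is true to a high degree
print(degree_bald(150_000))  # 0.0 -- "bald" is false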
gambler’s fallacy – Fallacious reasoning based on the assumption that the past results of a random game will influence the future results of the game. For example, you commit this fallacy if you toss a coin, get heads twice in a row, and conclude that you are more likely to get tails on the next toss. Gamblers who lose a lot of money often make this assumption when they mistakenly think that they are more likely to start winning if they keep playing the same game.
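A minimal Python simulation of why the reasoning fails: after two heads in a row, a fair coin still lands tails only about half the time (the number of trials is arbitrary):

import random

random.seed(0)  # fixed seed so the result is reproducible
trials = 100_000
opportunities = 0          # sequences that start with two heads
tails_after_two_heads = 0  # how often the third toss is tails

for _ in range(trials):
    flips = [random.choice("HT") for _ in range(3)]
    if flips[0] == "H" and flips[1] == "H":
        opportunities += 1
        if flips[2] == "T":
            tails_after_two_heads += 1

# The ratio stays near 0.5 rather than rising, contrary to the gambler's fallacy.
print(tails_after_two_heads / opportunities)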
genetic fallacy – A fallacy committed by arguments that conclude something solely based on the origin of something else. For example, it would be fallacious to argue that someone will be a Christian just because her parents were Christians; or that someone’s belief in evolution is unjustified just because her belief originated from casual conversations rather than from an expert.
god – A god is a very powerful being. Some people believe gods to be eternal and unchanging beings that created the universe, but others think gods to be part of (or identical to) the universe. Theists believe at least one god exists and atheists believe that no gods exist. Monotheists think that only one god exists, polytheists think more than one god exists, and pantheists think god is identical to the universe. Traditional monotheists often believe that God is all-good, all-powerful, all-knowing, and existing everywhere.
golden mean – Aristotle’s concept of virtues as being somewhere between two extremes. For example, moderation is the character trait of wanting the right amount of each thing, and it’s between the extremes of gluttony and an extreme lack of concern for attaining pleasure.
golden rule – A moral rule that states that we ought to treat other people how we want to be treated. For example, we generally shouldn’t punch other people just because they make us angry insofar as we wouldn’t want them to do it either.
The Good – Plato’s term for the Form of all Forms. It is the ultimate being that all other types of reality depend on for their existence, and it is the ultimate ideal that determines how everything should exist. The Good is thought to be nonphysical and eternal. It’s also known by Neoplatonists as the “One” or the “Monad.” See “Platonic Forms” and “emanation” for more information.
good argument – An argument that’s rationally persuasive. A popular example of a good argument is “Socrates is a man. All men are mortal. Therefore, Socrates is mortal.” Good arguments give us a good reason to believe a conclusion is true. If an argument is sufficiently good, then we should believe the conclusion is true. Ideally good arguments rationally require us to believe the conclusion is true, but some arguments might only be good enough to assure us that a belief is compatible with rationality. The criteria used to determine when an argument is “good” are studied by logicians and philosophers.
good will – (1) To have good intentions. (2) According to Immanuel Kant, good will is being rationally motivated to do the right thing.
grandfather’s axe – A thought experiment concerning an axe that has had all of its parts replaced. The question is whether or not it’s the same axe.
greatest happiness principle – The moral principle that states that we ought to do what will lead to the greatest good for the greatest number. In this case “goodness” is equated with “happiness” and “harm” is equated with “suffering.” So, the greatest happiness principle states that we ought to maximize happiness and minimize suffering for the greatest number of people. We can judge moral actions as right and wrong in terms of how much happiness and suffering the action will cause. Actions are right insofar as they maximize happiness and reduce suffering, and wrong insofar as they maximize suffering and reduce happiness. See “utilitarianism” for more information. John Stuart Mill’s utilitarian notion of the “greatest happiness principle” is meant to be contrasted with Jeremy Bentham’s form of utilitarianism insofar as Mill believes that there are higher and lower qualities of pleasure (unlike Bentham). In particular he believes that intellectual pleasures are of a higher quality and value than bodily pleasures. To emphasize this view, Mill said, “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.”
grouping – In logic, grouping is used to make it clear how logical connectives relate to various propositions, and parentheses are often used. For example, “A or (B and C)” groups “B and C” together. In this case “A” can stand for “George Washington was the first president of the United States,” “B” can stand for “George Washington is a mammal,” and “C” can stand for “George Washington is a lizard.” In this case the statement can be interpreted as “George Washington was the first president of the United States, or he is both a mammal and a lizard.” This can be contrasted with the statement “(A or B) and C,” which would be interpreted as “George Washington is either the first president of the United States or a mammal, and he’s a lizard.”
grue – A theoretical color that currently looks green, but will look blue at some later point in time. Grue is used to illustrate a problem of induction—how do we know emeralds will be green in the future when they might actually be grue? They might appear green now and then appear blue at some later point.
guilt by association – A synonym for “association fallacy.”
gunk – Any type of stuff that can be indefinitely split into smaller pieces. Gunk can be made of smaller parts without an indivisible or indestructible “smallest part” (i.e. atom). Philosophers speculate whether or not everything in the physical universe is made of gunk or atoms.
gunky time – The view of time as being infinitely divisible. If time is gunky, then there is no such thing as a shortest moment of time.
halo effect – The cognitive bias defined by our tendency to expect people with positive characteristics to have other positive characteristics, and people with negative characteristics to have other negative characteristics. For example, those who agree with us are more likely to be believed to be reasonable than those who disagree with us. The halo effect sometimes causes people to think of outsiders who think differently to be inferior or evil, and it makes it more likely for people to dismiss the arguments of outsiders with differing opinions out of hand.
halt – (1) When a mechanical procedure ends. For example, we can use truth tables to know if arguments are valid. The procedure halts as soon as we determine whether or not the argument is valid. See “valid argument” and “truth table” for more information. (2) In ordinary language, ‘halt’ means “stop.”
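A minimal Python sketch of such a mechanical procedure, assuming an argument form is given as Python functions of the truth values of its sentence letters (the encoding is an illustrative assumption; the form checked here is “affirming the consequent”):

from itertools import product

def is_valid(premises, conclusion, num_vars):
    # Brute-force truth-table check: a form is valid if and only if no row
    # makes every premise true and the conclusion false.
    for row in product([True, False], repeat=num_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # counterexample row found; the procedure halts
    return True           # no counterexample exists; the procedure halts

# Example form: "if A then B; B; therefore A" (affirming the consequent).
premises = [lambda a, b: (not a) or b,  # "if A then B"
            lambda a, b: b]             # "B"
conclusion = lambda a, b: a             # "A"
print(is_valid(premises, conclusion, 2))  # False: the form is invalid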
hard determinism – The view that determinism is incompatible with free will, that the universe is deterministic, and that people lack free will.
hard atheism – The traditional view of atheism as the belief that gods don’t exist. Hard atheism is contrasted with “soft atheism.”
hasty generalization – A fallacious argument that concludes something because of insufficient evidence. Hasty generalizations conclude that something is true based on various observations when the observations are not actually a sufficient reason to believe the conclusion is true. For example, to conclude that all birds use their wings to fly based on seeing crows and swans would be a hasty generalization. Not all generalizations are fallacious. See “induction” for more information.
hedonism – The view that pleasure is the only thing worthy of desire in itself and pain is the only thing worthy of avoidance in itself. Some hedonists might think that pleasure and pain are the only things with intrinsic value, but others might think their value is purely psychological—that pleasure is something people universally desire to attain and pain is something people universally desire to avoid.
Hegelian Dialectic – The view that progress is continually made in history when people find ways of attaining greater freedom. Various systems and institutions are often proposed to improve people’s freedom, but they face various problems that prevent freedom from being perfectly enjoyed by everyone (which leads to revolts and revolutionary wars). New and better systems and institutions are then created and the process continues. The first societies were thought to be based on slavery, greater freedom was found within feudalism, and even greater freedom was found in capitalism. Hegelian Dialectic developed the notions of “class conflict” and “social progress.” See “dialectic” for more information.
hermeneutic circle – Interpreting a text by alternating between considering parts and the whole of the text. For example, we can’t understand the definitions given in a dictionary without considering the definitions it gives of several different words and how they all relate. Some philosophers have suggested that the totality of human knowledge is like a hermeneutic circle insofar as we can’t interpret our experiences without referring to other assumptions and experiences within a worldview.
hermeneutics – (1) The systematic study of interpretation regarding texts. (2) Philosophical hermeneutics is the systematic study of interpretations regarding linguistic and nonlinguistic expressions as a whole. For example, an issue of philosophical hermeneutics is, “Why is communication possible?”
heuristic – Experience-oriented techniques for finding the truth, such as a rule of thumb or intuition. For example, thought experiments are used to bring out an intuitive response.
heuristic device – An entity that exists to increase our knowledge of another entity. For example, models might never perfectly correspond to the reality they represent, but they can make it easier for us to understand aspects of reality. Allegories, analogies, and thought experiments are also heuristic devices.
hidden assumption – A synonym for “unstated assumption.”
hidden conclusion – A synonym for “unstated conclusion.”
hidden premise – A synonym for “unstated premise.”
historical dialectic – The view that history is a process that offers various ways of living that face problems, and new and improved ways of living are introduced to avoid the problems the old ones had. This is one way to understand historical progress or “cultural evolution.” “Hegelian dialectic” and “dialectical materialism” could both be considered to be types of “historical dialectic.”
horned dilemma – An objection that shows why a claim can be interpreted or defended in more than one way, but none of those solutions are acceptable. For example, consider the following statement—“This statement is false.” If the statement is false, then it’s true. If it’s true, then it’s false. Neither of these solutions are acceptable because they both lead to self-contradictions.
hot hand fallacy – An argument commits this fallacy when it requires the false assumption that good or bad luck will last a while. For example, a gambler who wins several games of poker in a row is likely to think she’s on a “winning streak” and is more likely than usual to keep winning as a result.
humanism – An approach to something that focuses on the importance of human concerns and away from other-worldly concerns. For example, a humanist would likely be unsatisfied with having religious rituals that are meant to honor the gods, but don’t benefit human beings in any way. Also see “secular humanism” and “religious humanism.”
hypothesis – A defensible speculative explanation for various phenomena. “Hypotheses” are often contrasted with “theories,” but the term ‘theory’ tends to be used to describe hypotheses that have been systematically defended and tested without facing significant counter-evidence.
hypothetical imperative – Imperatives are commands or requirements. Hypothetical imperatives are things we are required to do in order to fulfill our desires or goals. For example, if you are hungry, then you have a hypothetical imperative to get some food to eat. “Hypothetical imperatives” are often contrasted with “categorical imperatives.”
hypothetical syllogism – A rule of inference that states that we can use “if a, then b” and “if b, then c” to validly conclude “if a, then c.” (“a,” “b,” and “c” stand for any three propositions.) For example, “if all dogs are mammals, then all dogs are animals. If all dogs are animals, then all dogs are living organisms. Therefore, if all dogs are mammals, then all dogs are living organisms.”
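In standard notation (where “⊢” marks what can be validly concluded from the premises), the rule is:

$$a \to b,\;\; b \to c \;\;\vdash\;\; a \to c$$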
hypothetico-deductive method – To start with a hypothesis, consider what conditions or observations would be incompatible with the hypothesis, then set up an experiment that could cause observations that are incompatible with the hypothesis. For example, we could hypothesize that objects continue to move in the same direction until another force acts on them, we could consider that dropping an object on a moving sailing ship should cause the object to continue to move along the path of the sailing ship, and then we can set up an experiment consisting of dropping objects while on sailing ships. The hypothetico-deductive method is a common form of the scientific method.
I-type proposition – A proposition with the form “some a are b.” For example, “some cats are female.”
idea – (1) See “Platonic Forms.” (2) According to Immanuel Kant, a concept of reason that can’t be fully understood through experience alone. (3) In ordinary language, an “idea” is a concept or thought.
ideal observer – A fully-informed and perfectly rational agent that deliberates about a relevant issue in the appropriate way. An ideal observer would have a superior perspective concerning what we should believe concerning each moral issue. It is often thought that ideal observers would determine the social contract that we should agree with, and perhaps all moral truth depends on such a contract. See “ideal observer theory” and “meta-ethical constructivism” for more information.
ideal observer theory – A form of meta-ethical constructivism that states that moral statements are true if an ideal observer would agree with them and false when an ideal observer wouldn’t agree. For example, an ideal observer would likely agree that it’s true that “it’s morally wrong to kill people whenever they make you angry.” A potential example is John Rawls’s “Justice as Fairness.”
idealism – The view that there is ultimately only one kind of stuff, and it’s not material (it’s not physical). Reality might ultimately be a dream-world or Platonic Forms. See “Platonic Forms” or “subjective idealism” for more information.
identity theory – A theory or hypothesis that states that two things are identical. For example, some people think that the psychological states are identical to certain brain states and scientists agree that water is identical to H2O.
ignosticism – The view that we can’t meaningfully discuss the existence of gods until an adequate and falsifiable definition of “god” is presented. “Ignosticism” is often taken to be synonymous with “theological noncognitivism.”
illicit affirmative – The fallacy committed when categorical syllogisms have positive premises and a negative conclusion. All categorical syllogisms with this form are logically invalid. For example, “Some dogs are mammals. All mammals are animals. Therefore, some dogs are not animals.”
illicit contraposition – In categorical logic, illicit contraposition refers to a fallacy committed by an invalid argument that switches the terms of a categorical statement and negates them both. There are two types of illicit contraposition: (a) No a are b. Therefore, no non-b are non-a. (b) Some a are b. Therefore, some non-b are non-a. For example, “Some horses are non-unicorns. Therefore, some unicorns are non-horses.”
illicit conversion – Invalid forms of conversion—invalid ways to switch the terms of a categorical statement. There are two types of illicit conversion: (a) All a are b. Therefore, all b are a. (b) Some a are not b. Therefore, some b are not a. For example, the following is an invalid argument—“Some mammals are not dogs. Therefore, some dogs are not mammals.”
illicit major – A fallacy committed by an invalid categorical syllogism when the major term is undistributed in the major premise, but it’s distributed in the conclusion. For example, the following argument commits the illicit major fallacy—“All lizards are reptiles; no snakes are lizards; therefore, no snakes are reptiles.” See “distribution” for more information.
illicit minor – A fallacy committed by an invalid categorical syllogism when the minor term is undistributed in the minor premise, but it’s distributed in the conclusion. For example, the following argument commits the illicit minor fallacy—“All dogs are mammals; all dogs are animals; therefore, all animals are mammals.” See “distribution” for more information.
illicit negative – The fallacy committed when categorical syllogisms have one or two negative premises and a positive conclusion. All categorical syllogisms with that form are logically invalid. For example, “No fish are mammals. Some mammals are dogs. Therefore, some fish are dogs.”
illicit process – A fallacy committed when categorical syllogisms have a term distributed in the conclusion without being distributed in a premise. All categorical syllogisms that commit this fallacy are logically invalid. For example, “All lizards are reptiles. No snakes are lizards. Therefore, no snakes are reptiles.” In this example the term “reptiles” is distributed in the conclusion but not in the premise where it appears. See “distribution,” “illicit major” and “illicit minor” for more information.
illicit transposition – A synonym for “improper transposition.”
illocutionary act – The act of communication with some intention. For example, to get people to do something, to persuade, to educate, or to make a promise.
illocutionary force – The intended semantic meaning of a speech act. For example, someone could say, “I can see the morning star” without knowing that the morning star is Venus.
illusory superiority – The cognitive bias defined by people’s tendency to think they have above average characteristics in all areas. People tend to overestimate their abilities and underestimate the abilities of others. For example, people are likely to think they have a higher IQ than they really do. This bias is related to the “self-serving bias.”
immeasurable – The quality of something that can’t be measured or quantified.
immanence – Presence within the physical universe. Some people believe God is immanent. “Immanence” is often contrasted with “transcendence.”
immoral – A synonym for “morally wrong.”
impartial spectator – Someone with a moral point of view who has no bias to grant favoritism to any side of a conflict or competition. The concept of an “impartial spectator” is generally found in ethical systems that lack moral facts and claim that emotion plays an important role in determining right and wrong. It is then said that what is right or wrong depends on what an “impartial spectator” would think is right or wrong in that situation, which could depend on the emotions of the impartial spectator. The “impartial spectator” is often used as a synonym for the “ideal observer.”
impartiality – Without bias, nonrational preference, or favoritism. Decisions are impartial if they’re based on rational principles rather than subjective desires.
imperative – A command or prescription for behavior. See “categorical imperative” and “hypothetical imperative” for more information.
imperfect duty – A duty that can be manifested in a variety of ways and allows for personal choice. For example, Immanuel Kant argues that we have an imperfect duty to develop our talents and help others. It is imperfect because we have to choose how to develop our talents and help others. Additionally, these duties are limited because we would otherwise be required to spend our entire lives relentlessly developing our talents and helping others, but that would be too demanding on us. “Imperfect duties” contrast with “perfect duties.”
impermissible – What is forbidden, or what is not allowed, or what we are obligated not to do. Something is impermissible when it falls short of certain relevant standards. Impermissible beliefs are incompatible with rationality and impermissible actions are incompatible with moral requirements. We are obligated not to believe something that’s epistemically impermissible, and we are obligated not to do something that’s morally impermissible. “Impermissible” beliefs and actions are often contrasted with “permissible” or “obligatory” ones.
impossibility – The property of being not possible. Impossible things are neither possible, contingent, nor necessary. See “physical impossibility,” “metaphysical impossibility,” and “logical impossibility” for more information.
implication – (1) The logical consequences of various beliefs. For example, the implication of “all cats are animals” and “if all cats are animals, then all cats have DNA” is “all cats have DNA.” The implication could be said to be implied by the other propositions. (2) A conditional proposition or state of affairs. See “material conditional.” (3) A rule of replacement that states that “if a, then b” and “not-a and/or b” both mean the same thing. (“a” and “b” stand for any two propositions.) For example, “if dogs are lizards, then dogs are reptiles” means the same thing as “dogs are not lizards, and/or dogs are reptiles.”
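A quick way to see the rule of replacement in sense (3) is to compare truth values exhaustively. The Python sketch below is only an illustration (not part of the definition); it confirms that “if a, then b” and “not-a and/or b” agree in every case.

    from itertools import product

    # Exhaustively verify that "if a, then b" (the material conditional) and
    # "not-a and/or b" have the same truth value for every combination of a and b.
    for a, b in product([True, False], repeat=2):
        if_then = (b if a else True)   # "if a, then b" is false only when a is true and b is false
        either_or = (not a) or b       # "not-a and/or b"
        assert if_then == either_or
    print("'if a, then b' and 'not-a and/or b' agree in all four cases")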
implicit knowledge – A synonym for “tacit knowledge.”
improper transposition – A logically invalid argument with the form “If a, then b. Therefore, if not-a, then not-b.” For example, “If all lizards are mammals, then all lizards are animals. Therefore, if not all lizards are mammals, then not all lizards are animals.” See “transposition” for more information.
inadvisable – See “suberogatory.”
inclusive or – An “or” used to designate that either one proposition is true or another is true, and they might both be true. The logical form of an inclusive or can be spelled out as “either a or b, or both a-and-b.” For example, “either Socrates is a man or he has two legs” allows for the possibility that Socrates is both a man and something with two legs, but it doesn’t allow for the possibility that Socrates is neither a man nor something with two legs. We often use the term “and/or” to refer to the “inclusive or.” People often contrast the “inclusive or” with the “exclusive or.”
incommensurability – A feature of various things that makes it impossible to determine which is superior or overriding. For example, it’s impossible to rationally determine if one value is superior to another if they’re incommensurable; and it could be impossible to rationally determine if one theory is superior to another if they’re incommensurable. We can assume that pleasure and human life both have value, but we might not be able to know for sure if a longer life with less pleasure would be better than a shorter life with more pleasure.
incompatibilism – The view that free will and determinism are not compatible. Incompatibilists that believe in free will are “libertarians” and those who reject free will are “hard determinists.”
inconsistent – Beliefs or statements that form a contradiction. See “contradiction” for more information.
incorrigible – The feature of a proposition that makes the proposition necessarily true simply because it’s believed. A plausible example is Rene Descartes’s argument, “I think therefore I am.” If we think it, then it seems like it has to be true.
indefeasible – An argument that can’t be defeated by additional information. Indefeasible arguments are sufficient reasons to believe a conclusion and no additional information could provide a better reason to reject the conclusion. The opposite of “indefeasible” is “defeasible.”
indeterminism – The view that not everything is causally determined. A rejection of “determinism.” For example, some philosophers and scientists believe that quantum mechanics is evidence for indeterminism. The behavior of subatomic particles seems to be random and unpredictable.
index of points – A set of points. Some philosophers believe that the truth conditions of necessity and possibility are based on an index of points. Aristotle thought that something was necessary if and only if it is true at all times, and possible if and only if it is true at some time. It’s necessary that “1+1=2” because it’s true at all times, and it’s possible for a person to jump over a small rock because it’s true at some time. In this case the index of points refers to points in time. See “truth conditions” for more information.
indexicals – Linguistic expressions that shift their reference depending on the context, such as “here,” “now,” and “you.” Indexical reference points to something and does not rely on describing the reference. Our descriptions of objects are often wrong, but we can still talk about the objects by using indexicals. For example, a person would be wrong to describe water as “the type of stuff that’s always a liquid that we use for hydration” insofar as water is not always a liquid, but we could still talk about water by pointing to it.
indirect proof – A strategy used in natural deduction to prove that an argument form is logically valid by assuming that the premises are true and the conclusion is false. If this assumption leads to a contradiction, then the argument form has been proven to be logically valid. For example, consider the argument form “If A, then B. A. Therefore, B.” (“A” and “B” stand for specific propositions.) An indirect proof of this argument form is the following (a mechanical check of the same form is sketched after these steps):
Assume the premises are true and the conclusion is false (not-B is true).
We know that “if A, then B” is true, and B is false, so A must be false. (See “modus tollens.”)
Now we know that A is true and false.
But that’s a contradiction, so the original argument form is logically valid.
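The same conclusion can be checked mechanically. The sketch below is a brute-force truth-table search rather than a natural deduction proof: it looks for any assignment of truth values that makes both premises true and the conclusion false, and finding none is what establishes validity.

    from itertools import product

    # Brute-force check that the form "If A, then B. A. Therefore, B." is valid:
    # no assignment of truth values makes the premises true and the conclusion false.
    counterexamples = []
    for A, B in product([True, False], repeat=2):
        premise_1 = (B if A else True)   # "If A, then B"
        premise_2 = A                    # "A"
        conclusion = B                   # "B"
        if premise_1 and premise_2 and not conclusion:
            counterexamples.append((A, B))
    print("valid" if not counterexamples else "invalid: " + str(counterexamples))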
induction – To generalize based on a sample, often by assuming that the future will resemble the past. For example, a person who only sees white swans could conclude that all swans are white. Also, a person who knows that bread has always been nutritious could conclude that nearly identical types of bread will still be nutritious tomorrow. Not all inductive reasoning is well-reasoned. See “hasty generalization” for more information. “Induction” is often contrasted with “deduction.”
inductive arguments – Arguments that use inductive reasoning to come to conclusions. See “induction” for more information.
inductive reasoning – See “induction.”
inductive validity – A synonym for “strong argument.”
infallible – Free of error and absolutely accurate. The opposite of “infallible” is “fallible.”
infallibilism – The view that to know something is to have a true belief that has been justified in a way that guarantees that the belief is true. This view equates knowledge with absolute certainty. The opposite of “fallibilism.”
inference – Coming to a conclusion from various propositions. For example, a person who knows that “all birds are warm-blooded” and “all crows are birds” could infer that “all crows are warm-blooded.” See “deduction” and “induction” for more information.
inferential reasoning – Reasoning that takes the form of argumentation (premises that give evidence for conclusions). To draw inferences from various beliefs. For example, a person who knows that “all men are mortal” and that “Socrates is a man” can realize that “Socrates is mortal.” Both “deduction” and “induction” are forms of inferential reasoning. Sometimes “inferential reasoning” is contrasted with “noninferential reasoning.”
infinite regress – (1) When a proposition requires the support of another proposition, but the second proposition requires the support of a third proposition, on and on forever. The implication is that infinitely many propositions are required to justify any other proposition. Philosophers often discuss infinite regresses as being an objectionable implication of certain beliefs, but some philosophers argue that not all infinite regresses are “vicious” (a reason to be rejected). For example, the belief that rational beliefs must be proven to be true leads to an infinite regress, and it is likely impossible for a person to actually justify a belief this way insofar as it would require infinitely many justifications. We would have to justify a proposition with an argument consisting of at least one other proposition, but then we would have to justify the second proposition with another argument, ad infinitum. (2) A process with no beginning or end. For example, it’s possible that the universe always existed and always will exist. Assuming every state of the universe causes the future states of the universe, there is a causal chain consisting of an infinite series of events with no beginning or end.
infinitism – The view that we are never done justifying a belief because every belief should be justified by an argument, but arguments have more premises that must also be justified. That requires us to justify our beliefs on and on forever. Imagine that you justify a belief with an argument, such as “we generally shouldn’t punch people because we generally shouldn’t hurt people.” Someone could then require us to justify our premise (that we generally shouldn’t hurt people). We could then say that “we generally shouldn’t hurt people because it causes suffering.” Someone could then want to know why this premise is justified (that hurting people causes suffering). This can go on and on forever. In order to know something, the infinitist believes that we will have to meet an infinite regress by having infinite justifications. However, infinitists don’t believe that the regress is vicious (a reason to reject their theory). See “vicious regress” for more information.
informal fallacy – An error in reasoning committed by an argument that is not merely a “formal error” (being an invalid argument).
informal logic – The domain of logic concerned with natural language rather than argument form. Informal logic covers critical thinking, argument analysis, informal fallacies, argument identification, identifying unstated assumptions, and the distinction between deductive and inductive reasoning. Informal logic generally excludes controversial issues related to the nature of knowledge, justification, and rationality. “Informal logic” can be contrasted with “formal logic.”
inherent value – Something that could help cause intrinsically good states to exist, but does not necessarily cause anything intrinsically good to exist. For example, a beautiful painting might be inherently good insofar as it can help cause intrinsically good experiences, but it might be hidden away in an attic and never cause any intrinsically good states. “Inherent value” is a type of “extrinsic value.”
innate ideas – Concepts or knowledge that we are born with. For example, Rene Descartes thought that we are born with the concept of perfection and could innately know that existence is a perfection. If innate ideas exist, then we have to reject “empiricism.”
innatism – The view that “innate ideas” exist.
inner sense – Our ability to experience states of the mind as opposed to the external world. “Inner sense” can be contrasted with “outer sense.”
intentional objects – The objects that our thoughts or experiences are about or refer to. For example, seeing another person involves an intentional object outside of our mind—another person. Some intentional objects are thought to be abstract entities, such as numbers or logical concepts.
inverse – An if/then proposition formed from another if/then proposition by negating both of its parts. The inverse of “if a, then b” is “if not-a, then not-b,” and it does not validly follow from the original proposition. What does validly follow is the contrapositive, “if not-b, then not-a.” For example, we can infer “if it is false that all dogs are animals, then it is false that all dogs are mammals” from “if all dogs are mammals, then all dogs are animals.” “Transposition” is the name given to this valid rule of inference, and “improper transposition” is the invalid inference to the inverse.
inversion – To infer an if/then proposition from another if/then proposition. See “inverse” for more information.
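A small Python check (an illustration only) makes the contrast concrete: the contrapositive “if not-b, then not-a” always agrees with “if a, then b,” while the inverse “if not-a, then not-b” comes apart from it whenever a and b have different truth values.

    from itertools import product

    def if_then(p, q):
        # Truth conditions of the material conditional "if p, then q".
        return (not p) or q

    for a, b in product([True, False], repeat=2):
        original = if_then(a, b)
        contrapositive = if_then(not b, not a)
        inverse = if_then(not a, not b)
        assert original == contrapositive          # always the same truth value
        if original != inverse:
            print("inverse differs when a=%s, b=%s" % (a, b))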
irrealism – A synonym for “anti-realism.”
institutional fact – Facts that exist because of collective attitudes or acceptance. For example, the value of money is an institutional fact and money would have no value if people didn’t agree that it has value. Institutions, such as the police force, government, and corporations all depend on institutional facts (because they can only exist due to collective attitudes and acceptance).
instrumental value – The usefulness of something. For example, knives have instrumental value for cutting food.
instrumentalism – A form of scientific anti-realism that claims that we should use the concept of unobservable scientific entities if they are useful within a theory or model, and we should not concern ourselves with whether such entities actually exist. For example, electrons are an important part of our scientific theories and hypotheses, so instrumentalists would agree that we should continue to talk about electrons and use them when conceptually thinking about our theories. Even so, instrumentalists would not claim that electrons exist.
intellectual virtues – Positive characteristics that help us reason well, such as open-mindedness, skepticism, perception, and intuition. “Virtue epistemology” is concerned with our intellectual virtues. “Virtue reliabilism” and “virtue responsibilism” are two different views about intellectual virtues, and they take intellectual virtues to cover differing domains.
intentionality – Also called “intentionality with a ‘t.’” The ability of thought to refer to or be about things. Philosophers discuss intentionality when they want to understand what it means to refer to objects or how we can refer to objects. Some philosophers also argue that there are “intentional objects” that are abstract or non-existent. (For example, numbers could be abstract intentional objects.)
intension – What a term means or how a word refers to things, which is often given in terms of a description. For example, the intension of “the morning star” is “the last star that can be seen in the morning” and the intension of “the evening star” is “the first star we can see at night.” Therefore, they both have a different intension, even though they both refer to Venus. “Intension” is often contrasted with “extension.”
intensionality – Also called “intensionality with an ‘s.’” Intensionality refers to the meaning and reference of words. Sometimes what a word means is different from what it refers to. For example, “the morning star” and “the evening star” both refer to Venus, but the meaning of the terms are different, so they both have a different intension. (The “morning star” is the last star we can see in the morning and the “evening star” is the first star we can see at night.) “Intensionality” can be contrasted with “extensionality.” See “sense” and “reference” for more information.
interchange – In categorical logic, interchange is the act of switching the first and second term of a categorical statement. For example, the interchange of “all men are mortal things” is “all mortal things are men.” See “conversion” for more information.
internalism – See “epistemic internalism,” “motivational internalism,” or “semantic internalism.”
interpretation – (1) To attribute meaning to statements of a formal logical system. Formal logical statements are devoid of content, but we can add content to them in order to transform them into statements of natural language. For example, “A or B” is a statement of a formal logical system, and it can be interpreted as stating, “either evolution is true or creationism is true.” In this case “A” stands for “evolution is true” and “B” stands for “creationism is true.” See “formal semantics,” “models,” and “schemes of abbreviation” for more information. (2) To try to understand information when there are multiple ways of doing so. Information can be ambiguous or vague, so interpretation can be necessary to understand it properly. For example, a person who sees the Sun set could think that they are seeing the Sun go around the Earth or they could think that they are seeing the Earth spin and turn away from the Sun as a result. See “theory-laden observation” and “ambiguity” for more information.
intrinsic value – Something with value just for existing. We might say happiness is “good for its own sake” to reflect that it is good without merely being useful to help us attain some other goal. If something is intrinsically good, then it is something we should try to promote. For example, if human life is intrinsically good, then all things equal, saving lives would plausibly be (a) rational, (b) a good thing to do, and (c) the right thing to do.
introspection – An examination of our first-person experiences. For example, we can reflect about what it’s like to feel pain or what it’s like to see the color green.
intuition – A form of justification that is difficult to fully articulate. A belief is strongly intuitive when rejecting it seems absurd (i.e. rejecting it leads to counterintuitive implications), and a belief is weakly intuitive when accepting it doesn’t seem to conflict with any of our strongly intuitive beliefs. For example, we intuitively know that “1+1=2,” even if we can’t explain how we know it; and it’s counterintuitive to think it’s always morally wrong to give to charity. Some philosophers think we can know if a proposition is “self-evident” from intuition.
intuition pump – A thought experiment designed to make a certain belief seem more intuitive. For example, Hilary Putnam’s Twin Earth thought experiment asks us to imagine that there’s another world exactly like the Earth except water is replaced by another chemical that seems to be exactly like water except it’s not made of H2O. He argues that it’s intuitive to think that the chemical is not water despite the fact that all our experiences of it could be identical.
intuitionism – See “mathematical intuitionism,” “epistemic intuitionism,” “metaethical intuitionism,” and “Ross’s intuitionism.”
invalid – See “invalid argument” or “invalid logical system.”
invalid argument – An argument form that can have true premises and a false conclusion at the same time. An example of an invalid argument is the following—“Socrates is either a man or a mortal. Socrates is a man. Therefore, Socrates is not a mortal.” “Invalid arguments” are the opposite of “valid arguments.” See “logical form” for more information.
invalid logical system – A logical system that has one or more invalid rule of inference. If a logical system is invalid, then it’s possible for true premises to be used with the rules of inference to prove a false conclusion. “Invalid logical systems” are the opposite of “valid logical systems.” See “rules of inference” for more information.
inverse error – A synonym for “denying the antecedent.”
ipso facto – Latin for “by the fact itself.” It refers to something that is a direct consequence of something else. It means something similar to the phrase “in and of itself.” For example, people who lack driver’s licenses ipso facto can’t legally drive.
irreducible – Something is irreducible if it can’t be fully understood in terms of something else, or if it’s greater than the sum of its parts. If something is irreducible, we can’t discover that it is “really just” something else. We found out that water could be reduced to H2O, so water was reducible to facts of chemistry. However, some philosophers argue that minds are irreducible to facts of biology, and that morality is irreducible to social constructs. See “emergentism” for more information.
is/ought gap – The difference between what is the case and what ought to be the case. The is/ought gap is discussed by those who believe that morality is a totally different domain from other parts of reality, and/or that we can’t know moral facts from non-moral facts.
jargon – Technical terminology as used by specialists or experts. Jargon terminology is not defined in terms of common usage—how people generally use the words in everyday life. Instead, they are defined in ways that are convenient for specialists. For example, logicians, philosophers, and other specialists define “valid argument” in terms of an argument form that can’t possibly have true premises and a false conclusion at the same time, but most people use the term “valid argument” as a synonym for “good argument.” See “stipulative definition” for more information. “Jargon” can be contrasted with “ordinary language.”
judgment – (1) A belief or an attitude towards something. For example, “moral judgment” generally refers to a moral belief (e.g. that stealing is wrong) or to an attitude towards a state of affairs (e.g. disliking stealing). Philosophers argue about whether moral judgments are actually beliefs or attitudes (or both). (2) The capacity to make decisions. “Good judgment” is the ability of some people to make reasonable or virtuous decisions. (3) A decision. “He made a good judgment” means that the decision someone made was reasonable or virtuous.
justice – An ethical value concerned with fairness, equality, and rights. Theories of justice are meant to determine how we should structure society, how wealth should be distributed, and what each person deserves.
Justice as Fairness – John Rawls’s theory of justice that states that people should have the maximal set of rights including a right to certain goods, and that economic and social inequality is only justified if it benefits those who are least-well-off in the society. See “original position,” “veil of ignorance,” “primary social goods,” and the “difference principle” for more information.
justification – (1) Evidence or reasons to believe something. Observation is one of the strongest forms of justification; but self-evidence, intuition, and appeals to authority could also be legitimate forms of justification. For example, people can justify their belief that they can feel pain by having actual pain experiences. (2) The supporting premises of an argument.
justified belief – Some philosophers believe that justified beliefs are those that are given a sufficiently good justification, but it is possible that justified beliefs are defensible beliefs that one has no sufficient reason to reject. For example, a typical uncontroversial example of a justified belief is the belief that “1+1=2” but few to no people know how to properly justify this belief using argumentation.
Kant’s Categorical Imperative – Immanuel Kant’s moral theory. The first formulation of his Categorical Imperative states that people should only act when the subjective motivation for the act can be rationally universalized for all people. According to Kant, we should only act based on a subjective principle that we can will as a universal law of nature—everyone would act on the same principle. This guarantees that moral acts are not hypocritical. For example, we shouldn’t go around burning people’s houses whenever (and just because) they make us angry because we couldn’t rationally will that anyone else will be motivated to act in that way. See “categorical imperative” and “maxim” for more information.
know how – The ability to do things well, such as playing musical instruments, fighting, building ships, or healing the sick. “Know how” is often contrasted with “theoretical knowledge.”
knowledge – Classically defined as “justified true belief,” but many argue that it must be “justified in the right way” or that there might be a fourth factor. An eyewitness who sees a murderer commit the act knows who the murderer is because the belief is justified through observation and the belief is true. However, consider a situation where Sally believes that cows are on the hillside because she mistakes cardboard cutouts of cows for the real thing, and some real cows are on the hillside hiding behind some trees. The belief is justified and true, but some philosophers argue that Sally doesn’t actually know that cows are on the hillside.
laissez-faire – French for “allow to act.” It generally refers to free market capitalism with little to no government regulation of the market (other than to prevent theft and enforce contracts).
law of excluded middle – The logical principle that states that, for every proposition, either it or its negation is true. This implies that every proposition with the form “a or not-a” is a tautology. The “law of excluded middle” is similar to the “principle of bivalence” and is closely related to the “law of non-contradiction.”
law of identity – The logical principle that states that every proposition or object is identical to itself (i.e. a=a).
law of nature – (1) A constant predictable element of nature. For example, the law of gravity states that objects will fall when dropped near the surface of the Earth. (2) A synonym for “natural law.”
law of non-contradiction – The logical principle that states that contradictions are impossible. It’s impossible for a statement to be true and false at the same time (i.e. propositions with the form “a and not-a” are always false).
lemma – A proven statement used to prove other statements.
letter – (1) A symbol used from an alphabet in symbolic logic. See “predicate letter” and “propositional letter” for more information. (2) A symbol used in an alphabet, such as “A, B, [and] C.” (3) A message written on a piece of paper for the purposes of communication over a distance.
lex talionis – Latin for “law of retaliation.” It’s often used to refer to the view that a punishment fits the crime if it causes the same injury as the crime, but it can also be used to refer to retributive justifications for punishment in general.
lexical definition – A dictionary definition, or the meaning of a term in “common usage.” Dictionary definitions are often vague or ambiguous because words tend to be used in many different ways by people. “Lexical definitions” can be contrasted with “stipulative definitions.”
liberalism – (1) A presumption that freedom is preferable—that liberty is generally a good, and we shouldn’t restrict people’s freedom unless we have an overriding reason to do so. Liberalism does not require a specific conception of freedom. For example, not all liberals agree that freedom requires a person to be in control of her own desires. (2) The political and ethical positions of liberals. For example, that the government can help solve social problems, and that it is sometimes just to redistribute wealth from the rich to the poor.
libertarian free will – Free will as described by incompatibilists—as being incompatible with determinism. Libertarian free will requires causation that resembles that of Aristotle’s prime mover. People need to be able to cause their actions without being caused to make those actions.
libertarianism – See “metaphysical libertarianism” or “political libertarianism.”
Leibniz’s law – The view that there can’t be two or more different entities that have the exact same properties. For example, two seemingly identical marbles are both made of different atoms and exist at different places. Two entities that have all the same properties would both have to exist at the same place at the same time. Imagine that we find out that Clark Kent has all the same properties as Superman. For example, Clark Kent was at precisely the same place as Superman at exactly the same time. That seems like a good reason to think that Clark Kent is Superman because Clark Kent and Superman can’t have all the same properties and be two different people.
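In standard second-order notation (where “∀” reads “for all” and “F” ranges over properties), one common way of writing the principle, often called the identity of indiscernibles, is: ∀x∀y[∀F(Fx ↔ Fy) → x = y]. In words: if x and y share every property, then x and y are one and the same thing.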
life-affirmation – To value life no matter what it consists of. Both suffering and death could be considered to be part of life, but a life-affirming attitude would require us to value life as a whole despite these considerations. Life could be considered to be valuable despite death and suffering, or death and suffering could also be considered to have value. Life-affirming morality primarily focuses on goodness and things with value; and badness is primarily understood as things lacking value rather than as having a negative value. According to Friedrich Nietzsche, “master morality” is a type of life-affirming morality. A similar concept to “life-affirmation” is that of “amor fati.”
life-denying – To see the whole of life as primarily having negative value. The negative value associated with pain, suffering, or death are seen as being more important than the positive value associated with pleasure, happiness, or life. Life-denying morality primarily focuses on evil or negative value, and goodness is primarily understood as being not evil or not harmful to people. According to Friedrich Nietzsche, “slave morality” is a type of life-denying morality. The opposite of being “life-affirming.”
literary theory – A systematic attempt to understand and interpret literature in a reasonable way.
loaded question – (1) A fallacy committed when a question implies a question-begging presumption. For example, consider the following question—“Why do liberals want to destroy families?” This question presupposes that liberals want to destroy families, but that is a controversial accusation to make against liberals, and liberals are unlikely to agree with it. Loaded questions are a version of the “begging the question” fallacy. (2) A question that implies a presumption, but is not necessarily fallacious. For example, a police officer might ask a potential shoplifter, “Why did you steal the clothes?” The shoplifter might have already admitted to stealing the clothes. In that case this question would not be fallacious. However, if the police officer does not know that the person stole the clothes, then the question could be fallacious.
loaded words – (1) A fallacy committed when words are used to imply a question-begging presumption or evoke an emotional response. For example, the words ‘weed’ or ‘job-creator’ are often used as loaded words. The word ‘weed’ could be used merely to refer to certain plants that grow quickly and disrupt the equilibrium of a habitat, but it is more often used to imply that a plant is a nuisance and should be destroyed. It would be fallacious to presume that all plants we don’t like ought to be destroyed because such plants could be good for the environment in various ways. The term ‘job-creator’ could be used to merely refer to someone who creates jobs, but it is more often used to refer to wealthy people with the presumption that wealthy people inherently create jobs by their mere existence. It would be fallacious to presume that all wealthy people create jobs merely by existing because it’s a contentious issue. (2) Words used to imply presumptions or evoke an emotional response that are not necessarily fallacious. For example, some political leaders might truthfully be said to be tyrants. The term ‘tyrant’ is used to refer to a political leader, but it is used to imply that there is something wrong about how a political leader behaves—that the political leader abuses her power. However, it could be fallacious to call a political leader a ‘tyrant’ in order to presume she abuses her power when it’s a contentious issue.
locutionary act – A speech act with a surface meaning based on the semantics or language the act is expressed in (as opposed to the intended meaning). For example, a person can sarcastically say, “There is no corruption in the government.” The surface meaning is the literal meaning, but the statement is intended to mean the opposite (that there is corruption in the government).
logic – (1) The study of reliable and consistent reasoning. Logic is divided into “formal logic” and “informal logic.” Logic focuses on argument form, validity, consistency, argument identification, argument analysis, and informal fallacies. Logic tends to exclude controversial issues related to the nature of argumentation, justification, rationality, and knowledge. (2) The underlying form and rules to various types of communication. For example, R.M. Hare argues that there is a noncognitive logic involving imperatives. (3) In ordinary language, “logic” often refers to vague concepts involving “good ways of thinking” or “the reasoning someone uses.”
logical argument – (1) Rational persuasion. (2) A logically valid argument.
logical connective – The words used to connect propositions (or symbols that represent propositions) in formal logic. Various logical connectives are the following: “not” (¬), “and” (∧), “or” (∨), “implies” (→), and “if and only if” (↔). Logical connectives are the only words contained within propositional logic once the content is removed. For example, “Socrates is a man and he is mortal” can be translated into propositional logic as “A ∧ B.” In this case “A” stands for “Socrates is a man” and “B” stands for “Socrates is mortal.”
logical constant – Symbols used in formal logic that always mean the same thing. Logical connectives and quantifiers are examples of logical constants. For example, “∧” is a logical connective that means “and.” See “logical connective” and “quantifier” for more information. “Logical constants” can be contrasted with “predicate constants.”
logical construction – A concept that refers to something other than particular actual objects. Consider the statement “the average car bought by the average American lasts for five years.” This statement refers to “average cars” and no such car actually exists, and it refers to “average American” and no such person actually exists. Both of these concepts are logical constructions.
logical contingence – Propositions that are not determined to be true or false from the rules of formal logic alone. For example, it’s logically contingent that the laws of nature exist. Logical contingent propositions are neither tautologies nor contradictions. See “logical modality” for more information.
logical equivalence – Two statements are logically equivalent when they must have the same truth value in virtue of their logical form; they are true in exactly the same circumstances. For example, “no dogs are lizards” is logically equivalent to “no lizards are dogs.”
logical form – The logical form of an argument consists in the truth claims devoid of content. “The sky is blue or red” has the same logical form as “the act of murder is right or wrong.” In both cases we have the form, “a or b.” (“a” and “b” are propositions.) In this case the truth claim is that one proposition is true and/or another proposition is true.
logical impossibility – The logical status of contradictions. Logically impossible statements can’t be true because of the rules of logic (i.e. because they form a contradiction). For example, it’s logically impossible for a person to exist and not exist at the same time. See “logical modality” for more information.
logical modality – The status of a proposition or series of propositions concerning the rules of formal logic—logically contingent propositions could be true or false, logically necessary propositions have to be true (are tautologies), and logically impossible propositions have to be false (because they form a contradiction). For example, it is logically contingent that the Earth exists.
logical necessity – The logical status of tautologies. Logically necessary statements must be true because of the rules of logic. For example, it’s logically necessary that the laws of nature either exist or they don’t exist. See “logical modality” for more information.
logical operator – A synonym for “logical connective.”
logical positivism – A philosophical movement away from speculation and metaphysics, and towards descriptive and conceptual philosophy. Logical positivists accept “verificationism.”
logical possibility – (1) A proposition that’s either logically contingent or logically necessary. We might say that “it’s logically possible that the Earth exists” or we might say that “it’s logically possible that the Earth either exists or it doesn’t.” (2) Sometimes “logical possibility” is a synonym for “logical modality.”
logical structure – A synonym for “logical form.”
logical system – A system with axioms and rules of inference that can be applied to statements in order to determine if propositions are consistent, tautological, or contradictory. Additionally, logical systems are used to determine if arguments are logically valid. See “formal logic,” “axioms,” and “rules of inference” for more information.
logical truth – See “tautology.”
logically valid – See “valid.”
logicism – The view that mathematics is reducible to logic. If logicism is true, we could derive all true mathematical statements from true statements of logic.
logos – Greek for “word” or “language.” It is often used to refer to logical argumentation.
main connective – The logical connective that’s inside the fewest parentheses when a statement is put into a formal language. For example, consider the statement “all dogs are mammals or reptiles, and all dogs are animals.” This statement has the propositional form “(A or B) and C.” In this case “and” is the main connective. See “formal logic,” “logical connective,” and “grouping” for more information.
major premise – The premise of a categorical syllogism containing the “major term” (the second term found in the conclusion). For example, consider the following categorical syllogism—“All dogs are mammals. All mammals are animals. Therefore, all dogs are animals.” In this case the major premise is “all mammals are animals.”
major term – The second term in the conclusion of a categorical syllogism. If the conclusion is “all dogs are mammals,” then the major term is “mammals.”
mandatory – A synonym for “obligatory.”
master morality – A life-affirming type of moral system primarily focused on goodness, which is primarily understood as superiority, excellence, greatness, strength, and power. Good or superior things are contrasted with “bad things,” which are seen to be inferior, mediocre, and weak. “Master morality” is often contrasted with “slave morality.”
master table – A truth table that defines all logical connectives used by a logical system by stating every combination of truth values, and the truth value of propositions that use the logical connectives. For example, the logical connective “a ∧ b” means “a and b,” so it’s true if and only if both a and b are true. (“Hypatia is a mammal and a person” is true because she is both a mammal and a person. See “logical connective” for more information.) An example of a master table for propositional logic is the following:
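    a   b   |   ¬a   a ∧ b   a ∨ b   a → b   a ↔ b
    T   T   |   F      T       T       T       T
    T   F   |   F      F       T       F       F
    F   T   |   T      F       T       T       F
    F   F   |   T      F       F       F       T

(T = true, F = false.)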
material cause – The stuff a thing is made out of. For example, the material cause of a stone statue is the stone it is made out of.
material conditional – A proposition that states that one thing is true if something else is true. It has the logical form “If a, then b.” A material conditional can also be expressed as “b if a.” There are two common symbols used for the material conditional in formal logic: “⊃” and “→.” An example of a statement using one of these symbols is “A → B.”
material equivalence – A proposition that states that one thing is true if and only if something else is true. Either both propositions are true or both are false. The logical form of a material equivalence is “a if and only if b.” Material equivalence can also be expressed as “a-and-b, or not-a-and-not-b” or “if a, then b; and if b, then a.” There are two common symbols used in formal logic for the material equivalence: “≡” and “↔.” An example of a statement using one of these symbols is “A ↔ B.”
material implication – A synonym for “material conditional.”
materialism – The view that ultimately only matter and energy exists—that there is only one kind of stuff, and everything is causally connected to particles and energy. “Materialism” is often taken as a synonym for “physicalism.”
mathematical anti-realism – The view that there are no mathematical facts. For example, what we take to be “true mathematical statements” could be based on a social construction or convention.
mathematical intuitionism – The view that mathematics is a construct of our mind and mathematicians are creating the same types of thoughts in each others’ minds.
mathematical platonism – The view that there is at least one mathematical fact and that there are abstract mathematical entities. For example, numbers can be abstract entities.
mathematical realism – The view that there is at least one mathematical fact that is not dependent on a social construction or convention. Many mathematical realists believe that numbers are real (exist as abstract entities) and that it is impossible for the universe to violate mathematical truths.
matters of fact – Empirical statements concerning the physical world. They can be known to be true or false from observation. For example, “all dogs are mammals” is a matter of fact. David Hume believed the only propositions that could be justified were “matters of fact” and “relations of ideas.”
maxim – A subjective motivational justification. For example, Lilith, who punches an enemy who makes her angry, might simultaneously assume the action is justified by assuming that anger can justify acts of violence towards others. Immanuel Kant used this concept of a maxim for his moral theory (Kant’s Categorical Imperative)—he believed that people should act on a maxim, and that our maxim must be one that we can rationally will everyone else to act on. In that case Lilith shouldn’t punch people based on her anger because she probably can’t rationally will that everyone else do the same.
maximally complete – See “syntactic completeness.”
maximin rule – When deciding on what system to use, the maximin rule requires that we choose the system that has the least-bad possible outcome. It could be described as a risk-averse rule because some people might want to take a chance at being more wealthy, even if they also have a chance of being more poor.
maximize expected utility – (1) The view that states that a person ought to make decisions based on whatever will probably lead to the greatest utility (the most valued or desired state). See “utility theory” and “stochastic dominance” for more information. (2) To make a decision that will probably lead to the most preferable outcome considering all possible outcomes of all possible decisions.
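As an illustration of how the maximin rule and expected utility maximization can recommend different choices, the Python sketch below uses made-up payoff numbers (they are not drawn from any particular theory): the maximin rule favors the option with the best worst case, while maximizing expected utility favors the option with the highest probability-weighted payoff.

    # Hypothetical payoffs for two options under two equally likely outcomes.
    options = {
        "safe bet":  {"good outcome": 50,  "bad outcome": 40},
        "risky bet": {"good outcome": 200, "bad outcome": 0},
    }
    probabilities = {"good outcome": 0.5, "bad outcome": 0.5}

    # Maximin: pick the option whose worst-case payoff is highest.
    maximin_choice = max(options, key=lambda o: min(options[o].values()))

    # Expected utility: pick the option with the highest probability-weighted payoff.
    expected_utility_choice = max(
        options,
        key=lambda o: sum(p * options[o][state] for state, p in probabilities.items()),
    )

    print(maximin_choice)           # "safe bet"  (worst case 40 beats worst case 0)
    print(expected_utility_choice)  # "risky bet" (expected 100 beats expected 45)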
meaning – (1) The value, importance, or worth of something. For example, the meaning of life could be to make people happier. (2) The definition or semantics of terms, sentences, or symbols. For example, the meaning of water is “the stuff we drink to hydrate our bodies made of H2O.”
meaning of life – What we should do with our life and what “really matters.” If something really matters, then we might have reason to promote it. For example, happiness seems to really matter. If happiness is worthy of being a meaning of life, then we should try to make people happy. Some philosophers believe that the meaning of life is related to what has “intrinsic value.”
means of production – Machines and natural resources used for production of goods. For example, oil and oil refineries.
means to an end – The method, tools, or process used to accomplish a goal. Sometimes people are said to be inappropriately treated as “means to ends” rather than valued or respected (an “end in themselves”). “Means to an end” is often contrasted with “end in itself.”
meme – An idea or practice that has certain qualities that cause it to be spread among several people. For example, religions are said to be memes. Memes are thought to undergo something like natural selection and the most successful memes could be said to survive for being the fittest. The fittest memes tend to have qualities that arouse people’s interest to spread the idea or practice to others, but they need not be beneficial to people.
metalogic – The study of logical systems. Logic is concerned with using logical systems to determine validity, and metalogic is concerned with determining the properties of entire logical systems. For example, a logical system can be “expressively complete.”
mental – See “psychological.”
mentalism – (1) In epistemology, mentalism is the view that justifications for beliefs must be some mental state of the person who has the belief. For example, justifications could take the form of propositions that are understood by a person. We could justify that the Sun will probably rise tomorrow by knowing that “the Sun has risen every day of human history; and if the Sun has risen every day of human history, then the Sun will probably rise tomorrow.” (2) In philosophy of mind, mentalism is the view that the mind is capable of interacting with the body and can cause the body to move in various ways. For example, a person who decides to raise her arm could raise her arm as a result.
mereology – The philosophical study of parts and wholes. Mereology concerns what the parts are of various things and how various parts and wholes relate. One mereological question is whether there are atoms (smallest indivisible parts) of all objects, or whether all objects are ultimately gunky (can be split into smaller pieces indefinitely). Another mereological question is whether or not an object is the same object if we replace all of its parts with functionally equivalent parts, such as if we replaced all the parts of a pirate ship with new but nearly identical parts.
meronomy – A type of hierarchy dealing with part-whole relationships. For example, protons are parts of molecules.
metaethical constructivism – The view that moral right and wrong are determined by what ideally rational agents would agree with (if they deliberated in an ideal fashion). This can be based on a “social contract theory”—we should accept the moral rules that would be provided by a social contract if it’s what rational people would endorse in ideal conditions.
metaethical intuitionism – The view that moral facts are not identical to nonmoral states and that we can know about moral facts through intuition. Moral intuitionists typically think that observation is insufficient to attain moral knowledge, so the intuition involved is a nonempirical form of intuition. Some philosophers object to moral intuitionism because they don’t think intuition is a reliable form of justification. See “intuition” for more information.
metaethics – Philosophical inquiry involving ethical concepts, the potential moral reality behind ethical concepts, how we can know anything about ethics (i.e. moral epistemology), and moral psychology. Metaethical questions include: “Is anything good?” and “What does ‘good’ mean?”
metalanguage – Language or symbols used to discuss another language. For example, when we use English to describe the symbols and rules of a formal logical system, English serves as the metalanguage and the formal system is the object language. See “formal logic” for more information.
metalinguistic variable – A synonym for “metavariable.”
metaphilosophy – Systematic examination and speculation concerning the nature of philosophy, and what philosophy ought to be. For example, Pierre Hadot argues that the term ‘philosophy’ ought to refer to a way of life involving an attempt to become more wise and virtuous rather than as expertise related to argumentation regarding various topics traditionally debated by philosophers.
metaphysical contingence – What might or might not exist concerning reality itself assuming that the laws of nature could have been different. Metaphysical contingence can be said to refer to “what is true in some possible worlds, but not others.” For example, the existence of water is plausibly metaphysically contingent—water might not exist if the laws of physics were different. See “metaphysical modality” for more information.
metaphysical impossibility – What can’t exist concerning reality itself assuming that the laws of nature could have been different. Metaphysically impossible statements refer to “what is not true in any possible world.” For example, it would be plausible that it’s metaphysically impossible for a person to exist and not exist at the same time. See “metaphysical modality” for more information.
metaphysical libertarianism – The view that we have free will and that free will is incompatible with determinism. Libertarianism requires that free will to be something like Aristotle’s notion of a first cause or prime mover. The free decisions people make can cause things to happen, but nothing can cause our decisions.
metaphysical modality – A range of modal categories concerning reality as it exists assuming that the laws of nature could have been different. The range includes metaphysical contingence, possibility, necessity, and impossibility. Metaphysical modality can be described as the status of a statement or series of statements considering all possible worlds—a statement is metaphysically contingent if it’s true in some possible worlds and false in others, metaphysically possible if it is true in some possible worlds, metaphysically necessary if it is true in all possible worlds, and metaphysically impossible if it’s false in all possible worlds. For example, some philosophers argue that “water is H2O” is a metaphysically necessary statement. Assuming they are right, if we found a world with something exactly like water (tastes the same, boils at the same temperature, and nourishes the body) but it is made of some other chemical, then it would not really be water.
metaphysical naturalism – (1) The view that only natural stuff exists, which is a type of “physicalism.” Natural stuff is often taken to be stuff found in the physical world and natural facts are often assumed to be nonmoral and non-psychological. Naturalists reject the existence of non-natural facts (perhaps mathematical facts) as well as supernatural facts (perhaps facts related to gods or ghosts). (2) The view that the only stuff that exists is stuff described by science. Not all philosophers agree that the reality described by science is merely physical reality.
metaphysical necessity – What must be true or exist concerning reality itself assuming that the laws of nature could have been different. Metaphysically necessary statements refer to “what is true in every possible world.” For example, it is plausible that tautologies are metaphysically necessary and are true in every possible world. See “metaphysical modality” for more information.
metaphysical possibility – (1) The status of a statement being metaphysically possible (non-impossible) as opposed to a range of modal categories. This status of possibility refers to what could be contingently true or necessarily true about reality assuming that the laws of nature could have been different. A statement is metaphysically possible if it is “true in at least one possible world.” For example, it is metaphysically possible that H2O exists because there is at least one possible world where it exists—the one we exist in. (2) Sometimes “metaphysical possibility” is used as a synonym for “metaphysical modality.”
metaphysics – Philosophical study of reality. For example, some people think that reality as it’s described by physicists is ultimately the only real part of the universe.
metavariables – A symbol or variable that represents something within another language. For example, a logical system could have various either/or statements. “A or B” and “A and B, or C” are two different either/or statements within a logical system. We could then use metavariables to talk about all either/or statements that could be stated within the logical system. For example, “a or b” would represent all either/or statements of our logical language assuming that the lower-case letters are metavariables.
methodological naturalism – See “epistemic naturalism.”
middle term – The term of a categorical syllogism that doesn’t appear in the conclusion, but it appears in both premises. For example, consider the categorical syllogism, “All dogs are mammals; all mammals are animals; therefore, all dogs are animals.” In this case “mammals” is the middle term because it’s not in the conclusion, but it appears in both premises.
mind – The part of a being that has thoughts, qualia, semantics, and intentionality. The mind might not be an object in and of itself, but merely refer to the psychological activity within a being. The mind is often contrasted with the body, but some philosophers argue that the mind could be part of certain living bodies. For example, some philosophers believe that psychological activity could be identical to certain kinds of brain activity.
mind-body dualism – See “dualism.”
mind-body problem – The difficulty of knowing how the body and mind interact. Psychological states seem quite different from physical states, so philosophers speculate about how they both relate. Some philosophers argue that the mind can’t cause the body to do anything at all. Philosophers often think that the mind-body problem is a good reason to reject substance dualism insofar as it seems to imply that the mind and body can’t interact (insofar as they would then be totally different kinds of stuff). One solution to the mind-body problem is “emergentism.”
mind dependent – Something that can only exist if a mind exists (or if psychological phenomena exist). For example, money wouldn’t exist if no psychological phenomena existed.
minor premise – The premise of a categorical syllogism that contains the minor term (the first term found in the conclusion). For example, consider the following categorical syllogism—“All dogs are mammals. All mammals are animals. Therefore, all dogs are animals.” In this case the minor premise is “all dogs are mammals.”
minor term – The first term of the conclusion of a categorical syllogism. For example, if the conclusion is “all dogs are mammals,” then the minor term is “dogs.”
missing conclusion – A synonym for “unstated conclusion.”
missing premise – A synonym for “unstated premise.”
modal antirealism – The view that there are no modal facts—facts concerning necessity and possibility. For example, a modal antirealist would say that it’s not a fact that it’s possible for a person to jump over a small rock.
modal logic – Logic that uses modal quantifiers or quantifiers of some other non-classical type. For example, deontic quantifiers are sometimes used in modal logic. See “modal quantifier” for more information.
modal realism – The view that there are modal facts—facts concerning necessity and possibility. For example, it seems like a fact that it’s possible for a person to jump over a small rock; and it seems like a fact that it’s necessary that contradictions don’t exist. See “modality,” “concretism,” and “abstractism” for more information.
modal quantifier – Modal quantifiers allow us to state when a proposition is possible or necessary. The two main symbols are “□” for necessary and “◊” for possible. For example, “□p” would mean that p is necessary. (“p” is a proposition). “□p” could refer to the proposition, “Necessarily, dogs are mammals.”
modality – Concerning quantification (modal quantifiers), such as necessity and possibility. See “metaphysical modality,” “physical modality,” or “logical modality” for more information.
mode – (1) A nonessential property of a substance. For example, “spherical.” See “substance” for more information. (2) A form of something. For example, a stone statue can have the form of a human being. (3) A way of doing something. For example, traveling by car is a mode of transportation.
modus ponens – Latin for “the way that affirms by affirming.” It is used to refer to the following valid logical form—“If a, then b; a; therefore, b.” An argument with this form is “If dogs are mammals, then dogs are animals. Dogs are mammals. Therefore, dogs are animals.”
modus tollens – Latin for “the way that denies by denying.” It is used to refer to the valid logical form—“If a, then b; not-b; therefore, not-a.” An argument with this form is “If dogs are lizards, then dogs are reptiles. Dogs are not reptiles. Therefore, dogs are not lizards.”
monad – (1) Literally means a “unit.” (2) According to the Pythagoreans, the “Monad” is the divine or the first thing to exist. (3) According to Platonists, the “Monad” is another name for “the Good.” (4) According to Gottfried Wilhelm Leibniz, monads are elementary particles (the building blocks of physical reality) that have no material existence of their own, and move according to an internal principle rather than from a physical interaction or external forces.
monadic predicate – A predicate that only applies to one thing. For example, “x is mortal” could be stated as “Mx.” (“M” stands for “is mortal,” and “x” stands for anything.)
monadic predicate logic – A system of predicate logic that can express monadic predicates, but can’t express polyadic predicates. See “monadic predicate” and “predicate logic” for more information.
monarchy – A political system defined by the supreme rule of a king or queen.
monism – (1) The metaphysical position that reality ultimately derives into one kind of thing. For example, materialists think that physical reality is the ultimate reality; and some idealists think that the mind is the ultimate reality. “Monism” is often contrasted with “dualism.” (2) A view that only one thing is ultimately relevant to a subject or issue.
monopoly – Exclusive power or control over something. For example, the government has a monopoly over violence and no one is generally allowed to use violence without government approval.
monotheism – The view that one god exists. “Monotheism” can be contrasted with “polytheism.”
Monte Carlo fallacy – A synonym for “gambler’s fallacy.”
moral absolutism – (1) The view that morally right and wrong acts do not depend on context. Something is always right or always wrong no matter what situation people are in. (2) In ordinary language, “moral absolutism” often refers to something similar to “moral realism” (as opposed to “moral relativism.”)
moral anti-realism – The rejection of moral realism. The belief that intrinsic values don’t exist, and that moral facts don’t exist. Some moral anti-realists think that there are moral truths, but such truths would not be based on facts about the world. Instead, they could be based on a social contract or convention.
moral atomism – See “moral generalism.”
moral constructivism – The view that moral truths consist in psychological facts, agreements, or some kind of an ideal based upon one or both of them. See “constructivism” or “meta-ethical constructivism” for more information.
moral epistemology – The systematic study of moral knowledge, rationality, and justification. For example, some philosophers argue that we can know if an action is right or wrong by considering intuitive or axiomatic moral principles.
moral externalism – See “motivational externalism.”
moral generalism – The view that there are abstract moral criteria (rules, duties, or values) that can be applied in every relevant situation to determine what we ought to do. Moral generalists often believe that analogies can be used to discover what makes an action right or wrong. For example, kicking and punching are both analogous insofar as we could use either to try to hurt people, and they both tend to be wrong insofar as hurting people is bad. “Moral generalism” is often contrasted with “moral particularism.”
moral holism – A synonym for “moral particularism.”
moral internalism – A synonym for “motivational internalism.”
moral intuitionism – See “meta-ethical intuitionism” or “Ross’s intuitionism.”
moral naturalism – The moral realist meta-ethical view that there are moral facts that are either identical to or emergent from nonmoral facts of some kind. For example, actions could be wrong insofar as they cause states of affairs with greater suffering and less happiness than the alternatives. Some philosophers reject moral naturalism based on the fact that we can easily question any proposed identity relation. Actions that we consider wrong are not necessarily those that cause more suffering and less happiness than the alternatives, and some people don’t think that’s all it means to say that an action is wrong.
moral objectivism – (1) The view that there are moral facts that are mind-independent. Moral objectivism excludes views of moral facts that depend on subjective states or conventions. This form of moral objectivism requires a rejection of “moral subjectivism” and “constructivism.” (2) A synonym for “moral realism.” (3) The view that there are true moral statements that are not true merely due to a convention or subjective state.
moral particularism – The view that there are no abstract moral criteria (rules, duties, or values) that can be applied in every relevant situation to determine what we ought to do. Instead, what we ought to do depends on the circumstance we are in without being determined by such things. Moral particularists sometimes agree that rules of thumb and analogies can be useful, but they don’t think we can discover rational criteria that determine what we ought to do in every situation. For example, kicking and punching are both analogous insofar as we could do either to try to hurt people, but the particularist will argue that it could be morally right to try to hurt people in some situations. Ross’s intuitionism is a plausible example of moral particularism. “Moral particularism” is often contrasted with “moral generalism.”
moral psychology – The philosophical study concerning the intersection between ethics and psychology, and primarily concerned with moral motivation. For example, some philosophers have argued that sympathy or empathy is needed to be consistently motivated to do the right thing.
moral rationalization – Arguments used in an attempt to justify, excuse, or downplay the importance of immoral behavior. Moral rationalizations may superficially appear to be genuinely good arguments, but they fail on close examination. For example, many people deny that they are responsible for the harm they cause when they were one person out of many who were needed to cause harm, such as certain corporate employees. They are likely to say they are like a “cog in a machine” or “just doing my job.” See “rationalization” for more information.
moral realism – The belief that moral facts exist, and that true moral propositions are true because of moral facts—not merely true because of a social contract, convention, popular opinion, or agreement. Many moral realists believe that intrinsic values exist. A moral realist could say, “Murder is wrong because human life has intrinsic value, not merely because you believe that it’s wrong.” Some philosophers argue that moral realism requires a rejection of “constructivism” and “subjectivism,” but that is a contentious issue.
moral responsibility – See “responsibility.”
moral relativism – See “cultural relativism.”
moral sense theory – See “moral sentimentalism.”
moral sentimentalism – A philosophical position that takes reasoning to be less important for moral judgment than our emotions, empathy, or sympathy. Moral sentimentalists tend to think that morality somehow concerns our emotions rather than facts.
moral theories – A synonym for “normative theories of ethics.”
moral worth – (1) The degree an action is morally praiseworthy or blameworthy. For example, a morally responsible person who commits murder has done something “morally blameworthy.” (2) According to Immanuel Kant, an action has moral worth (or perhaps moral relevance) when it is caused by a rational motivation that is guided by ethical principles.
morality – The field concerning values, right and wrong actions, virtue, and what we ought to do.
morally right – (1) Behavior that’s consistent with moral requirements. For example, it could be considered to be morally right to refuse to attack people who make us angry. What is “morally right” is often contrasted with what’s “morally wrong.” (2) Preferable moral behavior. For example, to give to charity.
morally wrong – Immoral. Behavior that’s inconsistent with moral requirements. For example, it is morally wrong to kill people just because they make you angry. What’s “morally wrong” is often contrasted with what’s “morally right.”
motivational externalism – The view that moral judgments are not intrinsically motivating. A person could think something is wrong but still be motivated to do that thing, even when in a relevant situation. For example, a sociopath might believe that harming other people is wrong but have no motivation against harming others. The opposite of “motivational externalism” is “motivational internalism.”
motivational internalism – The view that moral judgments are intrinsically motivating. A person can’t think something is right without having at least some motivation for doing that thing (when in a relevant situation). For example, we are likely to doubt the sincerity of a person who says that stealing is wrong but absolutely loves stealing and feels no motivation to refuse to steal. The opposite of “motivational internalism” is “motivational externalism.”
multiple realizability – When more than one state constitutes or brings about another state. For example, psychological states seem like they are multiply realizable—two different brain states can correlate with the same psychological state for different people. Perhaps both a sophisticated machine and a human brain could have the same psychological states.
Münchhausen Trilemma – A philosophical problem that presents three possible types of reasoning we could use to justify beliefs: (a) circular reasoning (beliefs must justify one another), (b) regressive reasoning (beliefs must all be justified by other arguments on and on forever) or (c) axiomatic reasoning (some beliefs are self-evident). It is often thought that knowledge consists of justified true beliefs that must be justified by an argument or axiom. The problem is that all three of the possible ways to justify beliefs that constitute knowledge seem to have problems. Circular arguments are fallacious, regressive reasoning can never be completed by people (who are finite beings), and what we think of as axioms can always be questioned and are often proven to be false at some point.
NAND – A synonym for the “Sheffer stroke.”
natural deduction – A method used to prove deductive argument forms to be valid. Natural deduction uses rules of inference and rules of equivalence. For example, consider the argument form “A and (B and C). Therefore, A.” (“A,” “B,” and “C” are three specific propositions.) The rule of implication known as “simplification” says we can take a premise with the form “a and b” to conclude “a.” (“a” and “b” stand for any two propositions.) We can use this rule to use “A and (B and C)” as a premise to conclude “A.” Therefore, that argument is logically valid.
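As a further illustration (added here, and not part of the original glossary), the argument form discussed in the entry above can be written out as a short natural deduction proof, where each line states a proposition along with the rule and earlier line numbers that justify it:

    1. A and (B and C)    Premise
    2. A                  Simplification, from line 1

Because line 2 follows from line 1 by an accepted rule of inference, the argument form “A and (B and C); therefore, A” is shown to be valid.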
natural language – Language as it is spoken. Natural language includes both specialized language used by experts and ordinary language. “Natural language” can be contrasted with “formal languages.”
natural law – (1) A theory of ethics that states that moral standards are determined by facts of nature. Consider the following two examples: One, the fact that human beings need food to live could determine that it’s wrong to prevent people from eating food. Two, the fact that people have a natural desire to care for one another could be a good reason for them to do so. (2) A theory of ethics that states that there are objective moral standards. See “moral objectivism.” (3) A theory of law that states that laws should be created because of moral considerations. For example, murder should be illegal because it’s immoral.
natural theology – The systematic study of gods using secular philosophical argumentation. For example, the argument for God’s existence that states that the universe must have a first cause is part of natural theology. “Natural theology” is often contrasted with “revealed theology.”
naturalism – See “epistemic naturalism” or “metaphysical naturalism.”
naturalistic fallacy – (1) A fallacious form of argument that assumes that the fact that something is the case means it ought to be the case. For example, to argue that people should be selfish because they are selfish. (2) A fallacious form of argument that concludes that goodness is identical with some natural property or state of affairs just because goodness is always accompanied by the natural property or state of affairs. For example, to argue that pleasure and goodness are identical just based on the belief that pleasure always accompanies goodness. Some philosophers—such as moral identity theorists—argue that this type of argument isn’t necessarily fallacious.
necessary condition – Something that must be true for something else to be true is a necessary condition. For example, a necessary condition of being a dog is being a mammal. “Necessary conditions” can be contrasted with “sufficient conditions.”
necessary truth – Statements that have to be true no matter what states of affairs there are or could be. For example, “1+1=2” is a necessary truth. See “physical necessity,” “metaphysical necessity,” and “logical necessity” for more information.
necessity – The property of being unable to be any other way. See “physical necessity,” “metaphysical necessity,” and “logical necessity” for more information.
negation – The denial of a proposition—a statement that something is not the case. The logical form of a negation is “not-a.” For example, “not all people are scientists” is the negation of “all people are scientists.”
negative argument – See “objection.”
negative categorical proposition – A categorical proposition that has the form “not all a are b” or “some a are not b.” For example, “some mammals are not dogs.”
negative conclusion – A categorical proposition used as a conclusion with the form “no a are b” or “some a are not b.” For example, “no dogs are reptiles.”
negative liberty – Freedom from constraints. To be in chains or imprisoned would be to lack negative liberty. Sometimes negative liberty is related to specific types of freedom. For example, we have the negative freedom to live insofar as others are not allowed to kill us. “Negative liberty” is often contrasted with “positive liberty.”
negative premise – A categorical proposition used as a premise with the form “no a are b” or “some a are not b.” For example, “some animals are not mammals.”
negative rights – Rights to be left alone. For example, freedom of speech is a negative right that means that no one can stop you from saying things (within the bounds of reason). Negative rights can be contrasted with “positive rights.”
neutral monism – The view that reality is ultimately neither mental nor physical although there could be mental and physical properties.
naive realism – The view that we perceive reality as it exists. See “realism” and “thing in itself” for more information.
nihilism – (1) The view that intrinsic values don’t exist or that moral facts don’t exist. See “moral anti-realism.” (2) A position that denies the existence of something. For example, an epistemic nihilist would deny that there are epistemic facts—that there are facts related to being reasonable, to having justifications, or to having knowledge (other than simply what is true by convention). (3) A synonym for “error theory.”
no true Scotsman – A fallacy committed when someone stacks the deck by defining terms in a convenient way in order to win an argument. It’s often used to try to win an argument by definition. For example, a person could say that all religious people are irrational, and we might then mention a religious person who is not irrational (perhaps Marsha). Someone could then claim that Marsha isn’t really religious because religious people are irrational by definition.
noëtic structure – Everything a person believes and the relationship between all of her beliefs. Also, noëtic structure involves how confident a person is that various statements could be true and the strength in which each belief influences other beliefs. For example, finding out that there is no external reality would have a dramatic effect on our noëtic structure insofar as we are very confident that an external reality exists and many of our beliefs depend on that belief. Perhaps hurting “other human beings” would no longer be immoral insofar as they don’t really exist anyway. See “worldview” for more information.
nominalism – (1) The view that there are no universals and only particulars exist. Names for various kinds of entities exist in name only, out of convenience, and our understanding of those kinds is based on generalization or abstraction. See “universal” for more information. (2) The view that Platonic Forms don’t exist.
non causa pro causa – Latin for “non-cause for cause” and also known as the “false cause” fallacy. This is a fallacy that is committed by arguments that conclude that a cause exists when the premises don’t sufficiently justify the conclusion. See “cum hoc ergo propter hoc,” “post hoc ergo propter hoc,” and “hasty generalization” for more information.
non-compound proposition – A sentence that can’t be broken into two or more propositions. For example, “Socrates is a man.” “Non-compound propositions” can be contrasted to “compound propositions.”
non-compound sentence – See “non-compound proposition.”
non-discursive concept – According to Immanuel Kant, it’s a concept known from “pure intuition” (known a priori without depending on experience). For example, space and time. According to Kant, we couldn’t even have experiences of the world without already interpreting our experiences in terms of space and time. “Non-discursive concepts” can be contrasted with “discursive concepts.”
non-discursive reasoning – See “non-inferential reasoning.”
non-inferential reasoning – Intuitive or contemplative reasoning that does not involve argumentation (conclusions derived from premises). Non-inferential reasoning can require contemplation in order to discover what beliefs are self-evident. For example, Aristotle believes that we can know the axioms of logic through non-inferential reasoning. “Non-inferential reasoning” is often contrasted with “inferential reasoning.”
non sequitur – Latin for “it does not follow.” (1) A statement that is made that’s not related to the preceding conversation. (2) A logically invalid argument, i.e. the conclusion doesn’t follow from the premises.
noncognitivism – (1) The view that some domain lacks true and false judgments. The rejection of cognitivism. For example, epistemological non-cognitivism is the view that judgments concerning rationality, justification, and knowledge are neither true nor false. For example, the judgment that we know that there are laws of nature might merely express our approval of such a belief. (2) Metaethical non-cognitivism is the anti-realist view that states that moral judgments are neither true nor false. For example, emotivists believe that moral judgments are expressions of our emotions. Saying, “stealing is wrong,” might be expressing one’s frustration concerning stealing without saying it is literally true.
nonfactual truth – Statements that are true or false, but are not meant to refer to reality or facts. For example, it is true that unicorns are mammals and that Sherlock Holmes is a detective who lives at 221B Baker Street, but it’s only true within a fictional domain—it’s not true about factual reality. It could be argued that “all bachelors are unmarried” by definition, and such a truth could also be nonfactual. Moreover, the existence of money could also be nonfactual insofar as it depends on our attitudes and customs rather than on facts that directly relate to reality.
nonmoral – Something that is neither morally right nor morally wrong. For example, mathematics is nonmoral, and a person who scratches an itch is acting nonmorally. “Nonmoral” can be contrasted with “amoral.”
nonrational evidence – Evidence that is not related to inductive or deductive reasoning. For example, intuitive evidence or self-evidence. See “non-inferential reasoning” for more information.
nonrational persuasion – Fallacious and manipulative forms of persuasion. Nonrational persuasion does not always take the form of an argument, and it often appeals to our biases. For example, the news could continually have stories about how our enemies harm innocent people to give us the impression that our enemies are evil. This is similar to the “one-sidedness fallacy,” but no actual argument needs to be presented. People are likely to jump to conclusions on their own.
NOR – A synonym for the “Pierce stroke.”
norm – A principle, imperative, standard, or prescription concerning preferable or required behavior.
normative – A category that is primarily concerned with standards, ideals, or guiding principles. Normative constraints are often thought to be action-guiding or motivational. “Normative” is often equated with “prescriptive.”
normative theories of ethics – Moral theories that tell us how we can determine the difference between right and wrong actions, determine what we ought to do, or determine what we ought to be. Normative theories of ethics are also concerned with ideals, values, and virtues. Normative theories of ethics are central to “applied ethics.”
normalization – Values and behavior are normalized when they become stable within a group of people, generally by excluding the alternatives. Normalization is likely to occur when the interests of various people converge and the values and behavior in question are mutually beneficial for those people. However, normalization can harm people (especially a minority) who do not benefit along with the others. For example, a minority could be used as a servant class because it benefits the majority, but this would put the minority at a disadvantage insofar as it limits their opportunities.
noumenal world – The world as it really exists in and of itself. Our understanding of reality is often thought to be corrupted by flawed interpretation and perception. The “noumenal world” can be contrasted with the “phenomenal world.”
noumenon – An object or reality that exists separately from experience and can’t be known through the senses. “Plato’s Forms” are a possible example of noumena.
nous – (1) Greek for “common sense, understanding, or intellect.” It refers to our capacity to reason. (2) According to Neoplatonists, “Nous” is the mind or intellect of “the Good.”
O-type proposition – A proposition with the form “some a are not-b.” For example, “some cats are not female.”
objective morality – See “moral objectivism.”
obligation – A requirement of rationality, ethics, or some other normative domain. A plausible example is that we are obligated not to kill other people just because they make us angry. See “duty” for more information.
obligatory – Beliefs that are rationally required, actions that are morally required, or a requirement of some other normative domain. “Obligatory” requirements are often contrasted with the “supererogatory” and “permissible” categories.
objection – An argument that opposes a belief or another argument. They’re meant to give us a reason to disagree with the belief or argument. For example, we could object to the belief that it’s okay to kill others who make us angry by saying, “You don’t want others to kill you just because you make them angry, so you shouldn’t kill them just because they make you angry either.” “Objections” are often contrasted with “positive arguments.”
objective certainty – A synonym for “epistemic certainty.”
objective ought – What a person should do based on few (or no) constraints on the person’s knowledge. What we objectively ought to do is often thought to be based on the actual effects our behavior has. For example, utilitarians often say that we ought to do whatever maximizes happiness, even if we have no idea what that is. A person might try to help others by sharing food and accidentally give others food poisoning, and utilitarians might say that the person objectively ought not to have done so, even though the person might have done what was likely to help others from her point of view. “Objective ought” can be contrasted with “subjective ought.”
objective reason – A synonym for “agent-neutral reason.”
objective right and wrong – What is right or wrong considering few (or no) constraints of a person’s knowledge. What is considered to be objectively right or wrong is often thought to be based on the actual effects our behavior has. For example, if you win the lottery, then there’s a sense that buying a lottery ticket was the “objectively right” thing to do, even though you had no reason to expect to win. “Objective right and wrong” can be contrasted with “subjective right and wrong.”
objectivity – See “ontological objectivity” or “epistemic objectivity.”
obverse – A categorical proposition is the obverse of another categorical proposition when it has the opposite quality (affirmative rather than negative, or negative rather than affirmative) and a negated second term. There are four different forms of obversion: (a) The obverse of “all a are b” is “no a are non-b.” (b) The obverse of “no a are b” is “all a are non-b.” (c) The obverse of “some a are b” is “some a are not non-b.” (d) The obverse of “some a are not b” is “some a are non-b.” It is always valid to infer the obverse of a categorical proposition because the two propositions mean the same thing.
obversion – To infer the obverse of a categorical proposition. See “obverse” for more information.
Occam’s razor – The view that we shouldn’t accept an otherwise equally good explanation if it is more complicated, often stated as “we shouldn’t multiply entities beyond necessity.” Occam’s razor could be taken to be a reason to believe an explanation that’s simpler than the alternatives, but it is not an overriding reason to believe an explanation. For example, sometimes ghosts might be an explanation for why objects move around in a house, but Occam’s razor might be a good reason for us to reject the existence of ghosts anyway.
oligarchy – A political system where the rulers are wealthy people.
omnibenevolent – All-good.
omnipotent – All-powerful.
omnipresent – Existing everywhere.
omniscient – All-knowing.
offensive – A synonym for “suberogation.”
omissible – Not obligatory, but permissible. For example, jumping up and down is generally considered to be permissible and non-obligatory. “Omissible” is not a synonym of “permissible” because all obligatory actions are also taken to be permissible. “Omissible” can be contrasted with “obligatory” and “permissible.”
The One – A Neoplatonist term for “the Good.”
one-sidedness – (1) A fallacy committed by an argument that presents reasons to believe something while ignoring or marginalizing the reasons against believing it. For example, a person selling a vacuum cleaner could tell us how it can pick metal objects off the floor, but omit mentioning that it tends to break after being used a few times. “One-sidedness” is also known as “selective evidence” and highly related to “cherry picking” and “quoting out of context.” (2) To be incapable or unwilling to see things from more than one reasonable point of view.
ontological naturalism – See “metaphysical naturalism.”
ontological objectivity – Refers to what exists independently of anyone’s mental states. Minds themselves can exist in this sense, but not what exists only as part of a mind (e.g. thoughts and feelings). In this sense rocks are objective, but pain is not. “Ontological objectivity” can be contrasted with “ontological subjectivity.”
ontological randomness – When something happens that could not possibly be reliably predicted because it could have happened otherwise. If anything ontologically random happens, then determinism is false—there are events that occur that are not sufficiently caused to happen due to the laws of nature and state of affairs. Ontological randomness can be contrasted with “determined” events and the acts of “free will.” It is generally thought that acts of free will are not random (and perhaps they’re not determined either). Imagine that you time travel to the past without changing anything, and all people make the same decisions, but a different person won the lottery as a result. That would indicate that there are elements of randomness that affect reality. “Ontological randomness” can be contrasted with “epistemic randomness.”
ontological subjectivity – Mental existence—anything that exists as part of our mind, such as thoughts and feelings. “Ontological subjectivity” can be contrasted with “ontological objectivity.”
ontology – The study of “being” as such—what is the case or the ultimate part of reality. It’s sometimes used to be a synonym for “metaphysics.”
operands – The input involved with an operation or predicate. For example, “Gxy” is a statement of predicate logic with two operands—two things being predicated. “G” can stand for “jumps over.” In that case “Gxy” means “x jumps over y,” and “x” and “y” are each an operand. See “predicate logic” and “operation” for more information.
operation – Something with variables, input, and output. For example, addition is an operation with two numbers as input and another number as the output. You can input 1 and 3 and the output is 4. (1+3=4.) Statements of predicate logic are also said to involve an operation insofar as predicates are taken to be operations. For example, “Fj” is a statement of predicate logic that could also be taken to be an operation. In this case “F” could stand for “is intelligent” and “j” can stand for “Jennifer.” In that case the input is “j” and the output is “Jennifer is intelligent.” See “predicate logic” and “operands” for more information.
ordinary language – Language as it is used by people in everyday life. Words in ordinary language are generally defined in terms of “common usage” (i.e. how people tend to use the word). “Ordinary language” can be contrasted with “formal language” and “jargon.”
original position – A situation within John Rawls’s theory of justice that sets ideal conditions for deliberation concerning the production of a social contract. Rawls argues that rational principles for justice are decided within the original position under a “veil of ignorance.” He argues that a result of the people’s risk-aversion would be the adoption of the “maximin rule.” (Not to be confused with a maximum rule.)
ostensible meaning – The surface meaning of a speech act without any involvement of “mind reading” or psychological understanding. For example, someone might say, “I love chocolate” in a sarcastic tone. The ostensible meaning is stated, but the intended meaning is the opposite.
ought – Equivalent with “should.” What ought to exist is what should exist, what ought to be done is what should be done. It’s better to do what ought to be done. Many philosophers argue that if something has intrinsic value, then we ought to promote that value. For example, we ought to give to charity when it will help people who would otherwise suffer. What ought to be the case is often contrasted with what is the case—what state of affairs actually exists. See the “is/ought gap” for more information.
ousia – Greek for “being,” “substance,” or “essence.”
outer sense – Sense perception used to experience the external world, such as through the five senses (touch, taste, sound, smell, and sight). See “perception” for more information. “Outer sense” can be contrasted with “inner sense.”
overconfidence effect – The cognitive bias defined by the tendency of people to systematically have a false sense of certainty. For example, we systematically think our answers on tests are more likely true than they really are. We might think every answer we give on a test has a 90% chance of being correct when we actually only got 50% of the correct answers.
overdetermination – When there’s more than one sufficient cause for a state of affairs. For example, one grenade explosion would be sufficient to kill a person, and two simultaneous grenade explosions could overdetermine someone’s death. Overdetermination is a potential problem with some interactionist theories of the mind-body interaction—If physical reality can sufficiently determine how the body will move, then the idea of the mind also causing the body to move would be an example of overdetermination.
overman – A superior kind of human being. Friedrich Nietzsche argues that we should try to become or create an “overman”—a person who will create new values and be life-affirming to the point of desiring an “eternal return.”
overprecision – A fallacy committed by an argument that requires precise information for the premises in order to reach the conclusion, and it uses misleadingly precise premises in order to do so. For example, a person was told that a frozen mammoth was five thousand years old five years ago, so she might insist that the frozen mammoth is now “5,005 years old.”
pantheism – The view that god is the universe.
panpsychism – The view that all physical things or particles have a psychological element.
paradigm – A comprehensive understanding of a domain or a comprehensive worldview. There could theoretically be two paradigms that proponents claim to be “more justified” than the other because each paradigm could have different principles that determine what counts as good justification. Paradigms are thought to influence how we interpret our experiences and how we will respond to our observations.
paradox – (1) An apparent contradiction that challenges our assumptions. (2) A statement or group of statements that leads to a contradiction or defies logic. A paradox could contain a statement that can’t be true or false because they both lead to a contradiction. Consider the following sentence: “This sentence is false.” If it’s false, then it’s true. If it’s true, then it’s false. There’s a contradiction either way.
parsimony – Metaphysical simplicity, or having few entities in a metaphysical system. See “Occam’s razor” for more information.
particular – Actual concrete objects and things in the world. For example, a rock, a dog, and a person.
partners in crime – A synonym for “partners in guilt.”
partners in guilt – A defense of a theory or belief against an objection that points out that the alternatives face the exact same objection. Sometimes one theory or hypothesis can’t be rejected on some basis because the alternative theories or beliefs have exactly the same flaws. For example, Einstein’s theory of physics faces certain anomalies, such as dark energy; but all alternative theories of physics we know about have even more anomalies.
partonomy – A synonym for “meronomy.”
per se – A Latin phrase meaning “in itself” or “without qualification.” People generally use the phrase “per se” to refer to what something is not. (e.g. “The President is not a communist per se, but he does want to increase taxes.”)
perception – Experiences caused by the five senses—sight, sound, touch, taste, and smell. Perception causes unified experiences that we interpret as giving us information about the world.
perdurance theory – See “perdurantism.”
perdurantism – The view of persistence and identity that states that a persisting thing only partly exists at any given moment, and its entire existence must be understood in terms of its existence at every single moment that it exists. Perdurantism states that each persisting thing has distinct temporal parts throughout its existence in addition to having spatial parts. See “temporal parts” for more information. “Perdurantism” is often contrasted with “endurantism.”
perdure – For a single thing to only partly exist at any given moment in time, and for its full existence to require a description of it at every single moment in time that it exists. How a thing can persist and be the same thing according to “perdurantism.” See “perdurantism” for more information.
perfect duty – An obligation that requires certain behavior with no room for personal choice. For example, we have a perfect duty not to kill people just because they make us angry. “Perfect duties” are contrasted with “imperfect duties.”
permissible – Beliefs that are compatible with epistemic normative requirements, or actions that are compatible with moral requirements. They are allowed or optional, but not required. “Permissible” actions and beliefs are often contrasted with “obligatory” and “impermissible” ones.
perspectivism – The view that there are multiple ways to reasonably interpret our experiences (based on one’s perspective), but some perspectives can be more justified than others.
perlocutionary act – A speech act with an intended consequence or function. For example, the perlocutionary act of asking for salt at a dinner table is to get someone to pass some salt.
permitted – See “permissible.”
person – A rational being similar in key ways to being a human being. For example, Spock from Star Trek would be a person, even though he is not a human being. Some philosophers argue that dolphins and great apes are also persons.
persuasion – Attempts to convince people that something is true.
petitio principii – Latin for “assuming the initial point.” Refers to the “begging the question fallacy.” (See “begging the question.”)
phenomenon – An observation of an object or state of affairs. For example, seeing a light turn on at a neighbor’s house.
phenomenal world – The experience we have of the world, or the way we understand the world based on our experiences.
phenomenalism – The view that physical objects are reducible to actual and possible experiences or sensations—talk about external objects can be understood as talk about what we do or would perceive.
phenomenology – The philosophical study of our mental activity and first person experiences. Phenomenology can help us know what it’s like to be a person or have certain experiences.
philodoxer – “A lover of opinion.” Philodoxers love their own opinion more than the truth. They are contrasted with “philosophers” who love the truth more than their own opinion. Philodoxers are more close-minded than philosophers.
philosopher – (1) “A lover of wisdom.” Used as a contrast to “sophists” who claim they are wise and “philodoxers” who love their own opinion more than the truth. (2) A lover of learning. Someone who spends a great deal of time to learn and correct her beliefs. (3) A professional who is highly competent regarding philosophy, and spends a lot of time teaching philosophy or creating philosophical works.
philosophy – (1) Literally means “love of wisdom.” The quest to attain knowledge and improve ourselves. It generally refers to the domains of study that systematically attempt to attain greater understanding through reasonable argument, other than those domains that have been designated to mathematicians or scientists. Arguments and theories concerning the proper domain of philosophy are known as “meta-philosophy.” (2) In ordinary language, ‘philosophy’ refers to opinions regarding what’s important in life or how one should conduct oneself. For example, a person might say, “A penny saved is a penny earned—that’s my philosophy.”
philosophical logic – Logical domains with a strong connection to philosophical issues, such as modal logic, epistemic logic, temporal logic, and deontic logic. “Philosophical logic” can be contrasted with “philosophy of logic.”
philosophy of logic – A philosophical domain concerned with issues of logic. For example, questions involving the role of logic, the nature of logic, and the nature of critical thinking. “Philosophy of logic” can be contrasted with “philosophical logic.”
phronesis – Greek for “wisdom.” Aristotle uses it to refer to “practical wisdom.”
physical – Objects that are causally related to reality as it’s described by physicists as consisting of particles and energy. For example, tables, chairs, animal bodies, and rocks.
physical anti-realism – The view that physical reality (i.e. the natural world) does not really exist (or is less real than some ultimate reality), but that our experiences of the world could still be useful to us. See “idealism” for more information. “Physical anti-realism” can be contrasted with “physical realism.”
physical contingence – The status of propositions that describe a physical state of affairs that is compatible with the laws of nature. For example, consider the physically contingent proposition—“Water can boil.” This statement describes something that’s physically contingent because it describes a situation that we know to be compatible with the laws of nature. It sometimes happens, but it does not always happen. See “physical modality” for more information.
physical impossibility – The status of propositions that describe a physical situation or entity that is incompatible with the laws of nature. For example, consider the plausibly physically impossible statement—“Human beings can jump to the moon.” This is a plausible example of a physically impossible statement because we have reason to believe that the laws of nature and physical abilities of human beings are incompatible with a human being jumping to the moon. See “physical modality” for more information.
physical modality – The status of propositions that describe physical situations or entities given the laws of nature—physically contingent statements describe situations or entities that are compatible with the laws of nature but not required by them, physically necessary statements describe situations or entities that must exist because of the laws of nature, and physically impossible statements describe situations or entities that can’t exist because of the laws of nature. For example, scientists say it’s physically impossible to go faster than the speed of light while in a material form.
physical necessity – The status of propositions that describe a physical situation or entity that must happen because of the laws of nature. For example, consider the plausibly physically necessary statement—“Objects will fall when dropped while ten feet from the surface of the Earth.” This is a plausible example of a physically necessary statement because it describes something that seems fully determined to happen given the laws of nature. See “physical modality” for more information.
physical possibility – (1) The status of a proposition that could be physically contingent or physically necessary. For example, it’s physically possible for a human to jump over a small rock or for light to move at 299,792,458 meters per second. (2) Sometimes “physical possibility” is a synonym for “physical modality.”
physical realism – The view that physical reality (i.e. the natural world) exists. Physical realists deny that there is a reality that is more real than physical reality, that physical reality exists in the mind of God, etc. “Physical realism” can be contrasted with “physical anti-realism.”
physicalism – The view that nothing exists other than physical reality, but not necessarily restricted to the reality as described by physicists. Some physicalists think that chemistry, biology, and psychology describes reality as well, even though physicists don’t study these things. “Physicalism” is often taken to be a synonym for “materialism.”
Pierce stroke – A symbol used in formal logic to mean “neither this-nor-that” or “not-a and not-b.” (“a” and “b” are any two propositions.) The symbol used is “↓.” For example, “all dogs are lizards ↓ all dogs are fish” means that “it’s not the case that all dogs are lizards, and it’s not the case that all dogs are fish.” See “formal logic” and “logical connective” for more information.
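As a rough illustration (not part of the original glossary), a short Python truth table can confirm that the Pierce stroke behaves as described—“a ↓ b” agrees with “not-a and not-b” in every case—and, for comparison with the “NAND” entry, that the Sheffer stroke agrees with “not-a or not-b”:

    from itertools import product

    for a, b in product([True, False], repeat=2):
        nor = not (a or b)      # Pierce stroke: a ↓ b ("neither a nor b")
        nand = not (a and b)    # Sheffer stroke (NAND): "not both a and b"
        print(a, b, nor == ((not a) and (not b)), nand == ((not a) or (not b)))
        # Both comparisons print True on every row (De Morgan's laws).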
Platonic Forms – A non-natural, eternal, unchanging part of reality. Plato viewed this part of reality as consisting of “ideals.” We could find out the ideal right, ideal justice, ideal good, and so on. These ideals are the part of reality we refer to when we make moral assertions. Some philosophers accept that “abstract entities” of some sort exist (perhaps for numbers) without wanting to accept all of the traditional views regarding Platonic Forms. (These philosophers can be called “platonists” with a lower-case “p.”)
platonism – The view that Platonic forms or abstract entities exist. See “Platonic Forms” for more information.
plausible – A statement is plausible if it is likely true or highly intuitive given the current evidence.
pluralism – (1) The view that some topic or issue requires multiple irreducible things. (2) The metaphysical view that reality can’t be ultimately reduced to just one thing. Perhaps mind, matter, and abstract entities are all separate irreducible parts of reality.
political libertarianism – The view that we should have limited government, very limited or no government welfare, very little to no government regulation of the economy, and free market capitalism. Libertarians sometimes say there are ultimately only two moral principles: (a) the principle of non-injury and (b) the right to property. We could then know what is right or wrong in every situation using these two principles.
politics – The domain concerned with laws, power over the public sphere, and governments.
polyadic predicates – A predicate that applies to two or more things. For example, “John is taller than Jen” could be expressed as “Tab.” (In this case “T” stands for “is taller than,” “a” stands for “John,” and “b” stands for “Jen.”)
polytheism – The view that more than one god exists.
positive argument – A series of statements meant to support a conclusion rather than oppose a belief or argument. For example, “We should care about others because they can be happy or suffer” is a positive argument. “Positive arguments” are often contrasted with “objections” (i.e. “negative arguments”).
positive categorical proposition – A categorical proposition with the form “all a are b” or “some a are b.” For example, “some mortals are men.”
positive conclusion – A synonym for “affirmative conclusion.”
positive liberty – The power and resources necessary to have the freedom to do certain things. For example, we have the positive liberty to live if we have the necessary food and medical care. Positive freedom could require internal traits, such as critical thinking skills and absence of addiction. “Positive liberty” is often contrasted with “negative liberty.”
positive premise – A synonym for “affirmative premise.”
positive rights – Rights to various goods or services. For example, some philosophers argue that the right to free education is an example of a positive right. Positive rights are often contrasted with “negative rights.”
possibility – (1) A modal domain involving what is contingent, possible, necessary, and impossible. See “metaphysical modality,” “physical modality,” or “logical modality” for more information. (2) The property of not being impossible. For example, it is physically possible for a human being to jump a foot off the ground, but it’s physically impossible for a human being to jump to the moon.
possible world – A concept that contrasts what actually exist to what could exist assuming that there’s a sense that the laws of physics could have been different (or not exist at all). The concept of possible worlds is used in order to help us understand the difference between metaphysically contingent, metaphysically possible, metaphysically necessary, and metaphysically impossible propositions. We can say metaphysically contingent statements are true in some possible worlds and not others, metaphysically possible statements are true in at least one possible world, metaphysically necessary statements are true in all possible worlds, and metaphysically impossible statements are never true in a possible world. For example, many philosophers argue that the laws of logic exist in every possible world, and they would be metaphysically necessary as a result. See “metaphysical modality” for more information.
post hoc ergo propter hoc – Latin for “after this, therefore because of this.” A logical fallacy that is committed when an argument concludes that something causes something else just because the first thing happened before the second thing (or always happens before the other thing). For example, we shouldn’t conclude that breathing causes people to die just because people always breathe before they die. Also related is the “cum hoc ergo propter hoc” fallacy.
post-hoc justification – A justification given for a belief that we already have. Although we often have a hard time explaining why our beliefs are justified, even if we know they clearly are, post-hoc justifications generally do not explain why we actually have a belief. As a result, they often exist to persuade or even manipulate others into sharing our belief. Post-hoc justifications are often motivated by bias rather than a genuine interest in the truth, and they are often rationalizations rather than genuinely good arguments. For example, people have been shown to be generally repulsed by consensual incest and they have an intuition that consensual incest is wrong, but most of the arguments they give against consensual incest are rationalizations. See “rationalization” for more information.
postmodernism – (1) A philosophical domain that is often characterized by the attempt to transcend labels, skepticism towards philosophical argumentation, and caution concerning the potentially hazardous effects philosophy can have on everyday life. (2) In ordinary language, ‘postmodernism’ refers to a perspective associated with views that “everyone’s beliefs are equal,” that “effective philosophical reasoning is impossible,” and that “morality is relative.”
poststructuralism – The view that deconstruction is an important way to understand literary works, and that we should be skeptical of the idea that we can fully understand the meaning behind a literary work. Poststructuralists often claim that the “signifier” and “signified” are dependent on a culture or convention, and that understanding language requires reference to other parts of the language. For example, we can define words in terms of other words, so we might have to know hundreds of words of a language before we can use it to communicate well. Poststructuralists sometimes agree that meaning can be best understood in terms of what words don’t refer to and how they differ from other words within a language.
postulate – An axiom or belief assumed for the sake of argument. For example, many arguments about the world require us to assume that an external world exists.
practical – Issues that concern real-life consequences rather than abstractions that can’t make a difference to our lives. Practical philosophy often concerns how we should live and what decisions we should make. Ethics is the most practical philosophical domain.
practical rationality – (1) Proper thinking involving means-end reasoning or ethical reasoning. “Practical rationality” determines how we ought to do “practical reasoning.” For example, a person who jumps up and down to fall asleep is likely being irrational in this sense. However, a person who lays down in a bed and closes her eyes in order to go to sleep is likely being rational. (2) According to some philosophers, such as Immanuel Kant, practical rationality covers ethical reasoning in addition to means-end reasoning. A moral action is more rational than the alternatives, and immoral actions could be said to be “irrational.”
practical reason – (1) Means-end reasoning. Reasoning that we use in order to know how to effectively accomplish our goals. For example, eating food to alleviate hunger would seem to be an appropriate use of practical reasoning. (2) Ethical reasoning. Reasoning that determines what actions we should do, all things considered. For example, it could be appropriate to decide to give to a charity, but not to decide to murder people.
practical wisdom – Knowledge about how to achieve goals and live a good life. Aristotle contrasted “practical wisdom” with “theoretical wisdom.”
pragmatic – Things that concern what is useful or practical.
pragmatic theory of justification – The view that good justifications are those that are useful to us. What we should consider to be a “justified belief” is based on what helps us make predictions or live a better life.
pragmatic theory of truth – The view that what is true is what is useful to a person. What we should consider to be “true beliefs” is based on what helps us make predictions or helps us live better lives in some other way.
pragmatism – See “pragmatic theory of truth” or “pragmatic theory of justification.”
praiseworthy – Actions done by morally responsible people that are better than what we would reasonably expect, or that achieve more good than is morally required. See “responsibility” and “supererogatory” for more information. “Praiseworthy” actions can be contrasted with “blameworthy” ones.
predestination – (1) The view that a deity determines everything that happens, usually thought to be “for the best.” Predestination in this sense is thought to logically imply that determinism is true, and it has inspired debates over “free will” for that reason. See “divine providence” for more information. (2) In ordinary language, predestination is often used as a synonym for “fate” or “destiny.”
predetermination – See “predestination.”
predicate calculus – A synonym for “predicate logic.”
predicate constants – Constants in predicate logic are specific things that are predicated. The lower case letters “a, b, [and] c” are commonly used. For example, consider the statement, “George Washington is an animal.” In this case we can write this statement in predicate logic as “Ag” where “A” means “is an animal” and “g” stands for “George Washington.” In this case “g” is a constant because it refers to something specific that’s being predicated. Sometimes variables are used instead of constants. For example, “Ax” means “x is an animal” and “x” can be anything. See “predicate logic,” “predicate variables,” and “predicate letters” for more information. “Predicate constants” can be contrasted with “logical constants.”
predicate letters – Capital letters used in predicate logic as symbols for predicates; the letters generally used are F, G, and H. For example, “F” can stand for “is tall.” In that case “Fx” is a statement that means “x is tall.” See “predicate logic,” “relation letters,” “predicate constants,” and “predicate variables” for more information.
predicate logic – Formal logical systems that include quantification. For example, predicate logic allows us to validly infer that “there is at least one thing that is both a dog and a mammal because all dogs are mammals, and there is at least one thing that is a dog.” This argument could not be validly inferred using propositional logic. See “quantifier” for more information. “Predicate logic” can be contrasted with “propositional logic.”
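As a rough sketch (not part of the original glossary; the predicate names “D” for “is a dog” and “M” for “is a mammal” are only illustrative), the quantified inference mentioned in the entry above can be checked by brute force over small finite domains in Python. The check looks for a way of assigning the two predicates that makes the premises true and the conclusion false; finding none over these small domains illustrates the inference, though it is not a full proof of validity for every domain:

    from itertools import product

    def no_counterexample(domain_size):
        # Try every assignment of "is a dog" (D) and "is a mammal" (M) over the domain
        # and look for a case where the premises hold but the conclusion fails.
        domain = range(domain_size)
        for D in product([True, False], repeat=domain_size):
            for M in product([True, False], repeat=domain_size):
                all_dogs_are_mammals = all(M[x] for x in domain if D[x])
                something_is_a_dog = any(D[x] for x in domain)
                something_is_a_dog_and_mammal = any(D[x] and M[x] for x in domain)
                if all_dogs_are_mammals and something_is_a_dog and not something_is_a_dog_and_mammal:
                    return False
        return True

    print(all(no_counterexample(n) for n in range(1, 4)))  # True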
predicate term – A synonym for “major term.”
predicate variables – Variables in predicate logic are things that are predicated without anything in particular being mentioned. The lower-case letters “x, y, [and] z” are usually used. For example, consider the statement “it is an animal.” In this case “it” is something, but nothing in particular. It could be Lassie the dog, Socrates, or something else. We could write “it is an animal” in predicate logic with a variable as “Fx.” In this case “F” means “is an animal” and “x” is the variable. See “predicate logic,” “predicate letters” and “predicate constants” for more information. “Predicate variables” can be contrasted with “propositional variables.”
preference – To have a greater desire for something or to value something more than something else. For example, many people have a preference to live rather than to die.
preferable – (1) What a person would rather have than something else, or what a person desires or values more than something else. For example, many people agree that it’s preferable to experience pleasure than pain. (2) A better or more valuable option. For example, many people believe that pleasure has a positive value and would think it’s preferable for people to experience more rather than less pleasure.
premise – A statement used in an argument to give a reason to believe a conclusion. For example, consider the following argument—“We generally shouldn’t hurt people because it’s bad for people to suffer.” In this case the premise is “it’s bad for people to suffer.” Keep in mind that many arguments have more than one premise. “Premises” are often contrasted with “conclusions.”
premise indicator – A term used to help people identify that a premise is being stated. For example, “because” or “considering that.” See “premise” for more information.
prescriptive – What is advisable or preferable. For example, “you shouldn’t steal from people” is a prescriptive statement. “Prescriptive” is often equated with “normative” and it can be contrasted with “descriptive.”
prescriptivism – The view that moral judgments are not true or false (and don’t refer to facts or real-world properties). Instead, they refer to “prescriptions” (imperatives or commands). For example, the judgment that “stealing is wrong” might actually mean “don’t steal!” Prescriptivism is an “anti-realist noncognitivist meta-ethical” theory.
prima facie – Latin for “at first face” or “at first sight.” ‘Prima facie’ refers to something that counts as a consideration in favor of something but can be overridden. For example, prima facie evidence is a reason to believe something is true, but there can be better reasons to believe it’s false. The fact that Copernicus’s theory of the Sun being the center of the solar system was simpler than the alternative of the Earth being at the center was prima facie evidence that his theory better described the world.
primary social goods – Goods that everyone values. John Rawls suggests that liberty, opportunity, income, wealth, and sources of self-respect should be included in this category.
primary qualities – The physical qualities an object has, such as extension, shape, size, and motion. John Locke thought that primary qualities could be known to be the qualities an object actually has rather than the subjective way we experience the object. John McDowell argued that primary qualities can be described in ways other than the way we perceive them. For example, we can describe the shape of an object using mathematics. “Primary qualities” are contrasted with “secondary qualities.”
prime mover – Aristotle’s understanding of a “first cause” or the god that makes motion possible. It’s something that can cause things to happen without being caused to do so. Many people think God is a prime mover that created the universe, and the universe couldn’t exist unless it was created in this way. However, Aristotle actually thought that the universe always existed.
primitive concept – A concept that can’t be properly defined or understood in terms of other concepts. For example, G. E. Moore argued that the concept of “goodness” is primitive and that it could not be defined in terms of other concepts. Primitive concepts might need to be understood in terms of examples. “Primitive concepts” can be contrasted with “definable concepts.”
primitives – The building blocks of thought or reality. Ontological primitives are the building blocks of reality, such as subatomic particles. Concepts that must be presupposed for a theory or logical system are primitives. For example, the law of non-contradiction could be a primitive.
principal attributes – The essence or defining characteristics of substances. According to René Descartes, extension is the principal attribute for matter and thought is the principal attribute for mind.
principle – (1) A law, rule, or brute fact. For example, the law of non-contradiction is a plausible example. (2) A guiding rule or value. What people call “moral principles.” For example, the principle that states that we generally shouldn’t harm others.
principle of charity – See “charity.”
principle of bivalence – The logical principle that states that every proposition has exactly one truth-value: it is either true or false. Some philosophers reject the principle of bivalence and argue that there can be more than two truth-values; perhaps some propositions are indeterminate or have degrees of truth. The principle of bivalence is similar to “the law of excluded middle,” but that law does not by itself guarantee that every proposition is true or false.
principle of parsimony – The principle that states that simplicity is a feature that can count in favor of a theory. We should prefer a simpler theory if it is otherwise just as good at explaining the relevant phenomena as another theory. Copernicus’s theory that the Sun is the center of the solar system was simpler than the alternative and could make predictions just as well, so we had a reason to prefer his theory to the best alternative that was available at the time. See “Occam’s razor.”
principle of sufficient reason – The principle that states that everything that exists has a sufficient explanation as to why it exists rather than something else. For example, we might think that everything that happens has a causal explanation as to why it happens rather than something else. Many philosophers reject this principle and think it’s possible that there are “brute facts” that have no explanation.
principle of utility – The moral principle that states that we ought to seek the greatest good for the greatest number. The principle of utility is generally understood as equating “goodness” with “pleasure” and “harm” with “pain.” Therefore, the greatest good for the greatest number is meant to be the most pleasure for the greatest number, and the least pain for the greatest number. We can try to determine how much pleasure and pain is caused by various choices we can make to determine which choices are best—an action will be right insofar as it causes more happiness than the alternatives. For example, killing people would be taken to be generally wrong because it generally causes people more suffering than alternative courses of action. See “utilitarianism” for more information.
probabilism – The view that the degrees of confidence we have for various beliefs ought to be based on probability calculus. Probabilism states that we often lack certainty, but we should still try to believe whatever is likely true. For example, we ought to be confident that we won’t roll a six when we roll a six-sided die, but we should not be confident that we will roll a two. See “psychological certainty” and “probability calculus” for more information.
probability calculus – Mathematical rules that determine the odds of various propositions being true. For example, the probability of a tautology being true is 100%, and the probability of a contradiction being true is 0%. Also, the odds of two uncertain propositions both being true are no higher (and generally lower) than the odds of either one being true on its own.
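A minimal sketch in Python of a few such rules; the conjunction rule below assumes the propositions are independent, and the probability values are made up for illustration:

```python
# A minimal sketch of basic probability rules (illustrative numbers only).

def p_not(p_a):
    """Probability that proposition A is false."""
    return 1.0 - p_a

def p_and_independent(p_a, p_b):
    """Probability that two *independent* propositions are both true."""
    return p_a * p_b

p_tautology = 1.0      # e.g. "it will rain or it will not rain"
p_contradiction = 0.0  # e.g. "it will rain and it will not rain"

p_a = 0.6  # hypothetical probability that proposition A is true
p_b = 0.5  # hypothetical probability that proposition B is true

# The conjunction of two uncertain propositions is never more probable
# than either proposition taken alone.
assert p_and_independent(p_a, p_b) <= min(p_a, p_b)
print(p_not(p_a), p_and_independent(p_a, p_b))  # 0.4 0.3
```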
probability distribution – A list of possible outcomes and the odds of each outcome of occurring, which is often related to what decision we should make. For example, the odds of a day being sunny could be 40% and the odds of that day being rainy could be 60%.
problem of evil – The question of how a divinity could exist given that evil exists. It is sometimes thought that a divinity exists that’s all-powerful, all-knowing, and all-good, but such a divinity would presumably prevent at least some of the evil that actually exists. For example, it is plausible that such a divinity would not have made lead such a convenient yet poisonous metal, one whose toxicity took us thousands of years to discover.
problem of induction – The fact that induction appears to be difficult or impossible to sufficiently justify with argumentation. It is argued that induction can’t be sufficiently justified by the fact that it was reliable in the past because that would require a circular argument (the assumption that induction is reliable); and it also seems implausible to think that induction can be justified as being self-evident. See “induction” for more information.
process metaphysics – A synonym for “process theology.”
process theism – The belief that God evolves as part of a process into a more perfect being.
process theology – The philosophical systematic attempt to understand or speculate about “process theism.”
projectionism – (1) The view that moral judgments are based on our emotions, but we experience those emotions as being objective facts or properties of external reality. For example, seeing a small child be tortured could be experienced as observing an immoral act, but the projectionist would argue that the observation actually just reflects the fact that the observer has a negative emotional reaction directed towards the action. (2) The view that we take something to be an objective fact or property of external reality, but it is not actually an objective fact or property. Instead, we are projecting our own attitudes or emotions onto things. For example, some people believe that we project colors onto the world (that aren’t really there) when we talk about red cars and green grass.
proof – (1) An argument that supports a belief. It’s often taken to refer to a sufficient reason to agree with a belief. See “positive argument” for more information. (2) The evidence for a belief. It’s often taken to refer to sufficient evidence for a belief. See “justification” for more information.
proof by absurdity – A synonym for “indirect proof.”
property – An attribute, element, or aspect of something. Examples of properties include green, soft, valid, and good.
property dualism – The view that things have up to two different kinds of basic properties: physical and mental. For example, a single event could be described in terms of physical properties (such as a certain brain state) and psychological properties (such as experiencing pain). Property dualists are not substance dualists because they don’t think the mind and body are made of different kinds of stuff.
proposition – A truth claim or the conceptual meaning behind an assertion. The statement, “Socrates is a man and he is mortal” contains two propositions: (a) Socrates is a man and (b) Socrates is mortal. Propositions are not statements because there can be multiple statements that refer to the same proposition. For example, “Socrates is a man and he is mortal” and “Socrates is mortal and he is a man” are two different statements that refer to the same proposition.
proposition type – Different logical forms categorical propositions can take. There are four proposition types: A, I, O, and E. Each of these refers to a different logical form: (A) all a are b, (I) some a are b, (O) some a are not b, and (E) no a are b.
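Where it helps, the four types can also be written in predicate-logic notation. A minimal sketch, assuming the categories a and b are rendered as the predicates A and B (this translation is the standard modern one rather than something stated in the entry itself):

```latex
% One standard rendering of the four proposition types in predicate logic.
\begin{align*}
\textbf{A (all $a$ are $b$):}      &\quad \forall x\,(Ax \rightarrow Bx) \\
\textbf{I (some $a$ are $b$):}     &\quad \exists x\,(Ax \land Bx) \\
\textbf{O (some $a$ are not $b$):} &\quad \exists x\,(Ax \land \neg Bx) \\
\textbf{E (no $a$ are $b$):}       &\quad \forall x\,(Ax \rightarrow \neg Bx)
\end{align*}
```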
propositional calculus – A synonym for “propositional logic.”
propositional connective – A synonym for “logical connective.”
propositional logic – A formal logical system that reduces arguments to propositions and logical connectives. For example, the statement “All dogs are mammals or all dogs are reptiles” becomes translated as “A or B.” (The logical connective is “or.”) Propositional logic lacks “quantification.” “Propositional logic” can be contrasted with “predicate logic.”
propositional letter – Symbols used to stand for specific propositions in propositional logic. Upper-case letters are often used. For example, “A” can stand for “George Washington was the first President of the United States.” “Propositional letters” can be contrasted with “propositional variables.” See “propositional logic” for more information.
propositional variable – Symbols used in propositional logic to stand for any possible proposition rather than a specific one. Lower-case letters or Greek letters are usually used; for example, “a” could stand for any possible proposition. (Capital letters, by contrast, tend to stand for specific propositions; for example, “A” could stand for “Socrates is mortal.”) “Propositional variables” can be contrasted with “predicate variables” and “propositional letters.” See “propositional logic” for more information.
prove – To give proof for a conclusion. It is often used to refer to sufficient evidence to believe something, but sometimes it only refers to the requirement to support our beliefs within a debate. See “proof” for more information.
providence – See “divine providence.”
psychological – The domain that includes thoughts, feelings, experience, the first-person perspective, semantics, intentionality, and qualia. It is often said that psychological activity occurs in a mind. “Psychological” reality is often contrasted with “physical reality.”
psychological certainty – The feeling of some degree of confidence about a belief. To be psychologically certain that something is true is to feel highly confident that it’s true. For example, a person might feel absolutely confident that trees really exist and later find out that our entire world takes place within a dream.
psychological egoism – The view that people can only act in their perceived self-interest. For example, a person could not give money to the poor unless she believed it would benefit her somehow. (Perhaps it could improve her reputation.)
public reason – John Rawls’s concept of reason as it should exist to justify laws and public policy. Ideally, everyone should be able to rationally accept the laws and policies no matter what their worldview is, so laws and public policies should be justified in secular ways that don’t require acceptance of controversial beliefs. Public reason does not require controversial religious beliefs or a comprehensive worldview precisely so it can help assure us that every reasonable person would find the laws and public policies to be justified—even if they have differing worldviews.
pure intuition – An a priori cognition. According to Immanuel Kant, a pure intuition is the way we know about “non-discursive concepts,” such as space and time.
pyrrhonism – The philosophy of an ancient group of philosophers known as “the skeptics.” They didn’t know if we could know anything and thought that we should suspend judgment as a result—to neither believe nor disbelieve anything. They argued that we should even suspend judgment concerning the belief that “we can’t know anything.”
qualia – The “what it’s like” or qualitative description of subjective experience. For example, the taste of chocolate, feel of pain, the appearance of green, and so on.
QED – See “quod erat demonstrandum.”
quantificational logic – A synonym for “predicate logic.”
quantifier – (1) A symbol used in logic to designate a quantity or modality. Quantifiers are used to make it clear if a proposition concerns all of something, not all of something, something that actually exists, or something that doesn’t actually exist. The two main quantifiers in logic are “∀,” which stands for “all,” and “∃,” which stands for “exists.” For example, “∃x(Fx and Gx)” means that there is an x that is an F and a G, such as something that’s both a dog and a mammal. See “modal quantifiers” and “deontic quantifiers” for more information. (2) A word used to designate quantity, such as “all” and “some.” For example, “all people are rational animals” uses the “all” quantifier.
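A minimal sketch of how quantified claims can be checked over a small, finite domain; the domain and the extensions of the hypothetical predicates “is a dog” (F) and “is a mammal” (G) are made up for illustration:

```python
# Checking quantified claims over a small, finite domain (illustrative only).

domain = ["Lassie", "Socrates", "a rock"]

dogs = {"Lassie"}                 # hypothetical extension of "is a dog" (F)
mammals = {"Lassie", "Socrates"}  # hypothetical extension of "is a mammal" (G)

# ∃x(Fx ∧ Gx): there is something that is a dog and a mammal.
exists_dog_mammal = any(x in dogs and x in mammals for x in domain)

# ∀x(Fx → Gx): everything that is a dog is a mammal.
all_dogs_mammals = all((x not in dogs) or (x in mammals) for x in domain)

print(exists_dog_mammal, all_dogs_mammals)  # True True
```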
quasi-realism – (1) A view that there are no moral facts that tries to make sense out of our moral language and behavior. For example, some quasi-realists are emotivists, but they argue that moral judgments can in a sense be true or false. For example, quasi-realist emotivists could agree that saying, “Stealing is wrong” does not merely express a dislike of stealing insofar as people use it to make assertions. Quasi-realism is meant to be more sensitive to our common sense and intuitions than the alternatives. Quasi-realism can be compatible with multiple anti-realist meta-ethical theories. Also see “fast track quasi-realism” and “slow track quasi-realism.” (2) An anti-realist position (that rejects that some type of facts exist) that attempts to be more sensitive to our common sense beliefs and intuitions by explaining why our language about something seems factual, but is not actually factual.
quater – A chemical that is functionally equivalent to water in that it quenches thirst, boils at the same temperature, and so on, but is not H2O. Quater was part of a thought experiment and was used to argue that it’s intuitive to think water is more than a chemical with certain functions—the chemical composition of water is part of what it is. See “Twin Earth” for more information.
questionable analogy – See “weak analogy.”
quod erat demonstrandum – Latin for “that which was to be demonstrated.” It roughly means “therefore” and is used to refer to a conclusion of a proof. It’s often abbreviated as “QED.”
quoting out of context – When someone uses a quotation to support a belief when the quote put into the proper context wouldn’t support that belief after all. For example, Elizabeth could say, “We know that UFOs exist, but we don’t know that aliens visit the Earth from other planets” and Tony could quote her as saying, “We know UFOs exist” to give others the impression that Elizabeth believes that aliens visit the Earth. See “one-sidedness” for more information.
randomness – See “epistemic randomness” or “ontological randomness.”
rational persuasion – An attempt to persuade others that something is true through well-supported valid argumentation.
rationalism – (1) The view that there are non-tautological forms of knowledge or justification other than observational evidence. For example, the laws of logic are a plausible example of something we can justify without observational evidence that’s not tautological. (2) The view that we should try to reason well and form beliefs based on the best evidence available.
rationality – (1) At minimum, the ability to draw logical conclusions from valid arguments and avoid contradictory beliefs. Rationality could also refer to using effective methods to accomplish our goals or even to effective reasoning in general. (2) The field concerned with what we ought to believe. We ought to believe conclusions that are well justified and disbelieve conclusions that are well refuted. For example, the belief that at least five people exist is well justified, so we ought to believe it (and it would be rational to believe it). We could say that we ought not disbelieve that at least five people exist (and it would be irrational to disbelieve it). (3) “Rationality” is often equated with “reasonableness.”
rationalization – Nonrational arguments given to believe something without a genuine concern for what’s true. Rationalizations are meant to superficially appear to be genuinely good arguments, but they fail on close examination. For example, a person who believes that the Earth is flat, when told that we have pictures of the Earth from space showing that it’s round, could rationalize that the pictures are probably fake. A great deal of philosophical writing could be closer to rationalization than to genuinely good argumentation, but rationalization plagues everyday thought and can be difficult to avoid. See “moral rationalization” and “post-hoc justification” for more information.
realism – The view that a given domain is factual (part of the real world), and not merely a social construction or convention. For example, “moral realism” is the view that there is at least one moral fact that is not determined by something like a social contract. “Realism” is often contrasted with “anti-realism.”
reasonable pluralism – Disagreement among people who have reasonable yet incompatible beliefs. A plausible example is a disagreement between a person who believes that intelligent life exists on another planet and another person who doesn’t think life exists on another planet. John Rawls coined this phrase because he believed that society should fully embrace cultural diversity involving various worldviews and religious beliefs insofar as such religious beliefs and worldviews can be reasonably believed—the evidence we have for many of our beliefs is inconclusive, but it can be reasonable to have the beliefs until they are falsified (or some other standards of reason are violated).
reasonableness – To hold beliefs that are sufficiently justified and reject beliefs that are insufficiently justified. The ability to reason well and behave in accordance to reasonable beliefs. “Reasonableness” is often equated with “rationality.”
reasoning – The thought process that leads to an inference. For example, a person who knows that all dogs are mammals and that Lassie is a dog can come to the realization that Lassie is a mammal. Reasoning that’s made explicit along with its conclusion is an “argument.” One potential difference between reasoning and arguments is that reasoning does not necessarily include the inference, but arguments must include a conclusion. Everything we say about reasoning or arguments tends to correspond to both. For example, fallacious arguments correspond to fallacious reasoning of the same type, logically valid arguments have corresponding logically valid reasoning, etc. Moreover, inductive and deductive types of arguments correspond to inductive and deductive types of reasoning.
rebuttal – See “objection.”
red herring – A fallacious kind of argument that is meant to distract people from arguments and questions made by the opposing side. These kinds of arguments are meant to derail the conversation or change the subject. For example, a politician might be asked if we should end our wars, and she might reply, “What’s really important right now is that we improve the economy and create jobs. We should do that by lowering taxes.”
redistribution of wealth – To take wealth away from some people and give it to others. It is sometimes thought that it is morally justified to tax the wealthy to provide certain services for the poor. For example, many people insist that Robin Hood is a hero because he risks his well-being to take from the rich to give to the poor (who would otherwise suffer from an unjust system).
redistributionism – The view that we should have “redistribution of wealth” (perhaps to take from the wealthy to help the poor).
reducibility – To be able to express everything from one logical system in another. For example, a propositional logical system with the logical connectives for “and” and “not” can state everything said by other logical connectives. Therefore, a system of propositional logic that has logical connectives for “not,” “and,” “and/or,” “implies,” and “if and only if” can be reduced to a system that only has connectives for “and” and “not.” See “expressibility” and “logical connectives” for more information.
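A brute-force sketch of this kind of reducibility, checking over every truth assignment that “and/or” and “implies” can be rewritten using only “and” and “not” (illustrative only):

```python
# A brute-force check that "or" and "implies" can be expressed using only
# "and" and "not".
from itertools import product

for a, b in product([True, False], repeat=2):
    # a or b  ==  not(not a and not b)   (De Morgan's law)
    assert (a or b) == (not (not a and not b))
    # a implies b  ==  not(a and not b)
    assert ((not a) or b) == (not (a and not b))

print("Both equivalences hold for every truth assignment.")
```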
reductio ad absurdum – Latin for “reduction to the absurd.” Also known as the “argument from absurdity.” It’s a form of argument that justifies why an argument or claim should be rejected insofar as it would have absurd consequences. For example, consider the following argument—“Stars exist; the Sun is a star; therefore stars don’t exist.” This argument leads to an absurd consequence in the form of a logical contradiction (i.e. that stars both exist and don’t exist).
reduction – To conclude or speculate that the parts of something are identical to the whole. For example, water is H2O, and diamonds are carbon molecules with a certain configuration. We often say that something is reducible to something else if it’s “nothing but” that other thing. For example, water is nothing but H2O.
reductionism – (1) Relating to identity theories or identity relations. For example, scientists think that water is identical with H2O. (2) The view that something is nothing but the sum of its parts. Some philosophers think that particles and energy (the reality described by physics) are the only real parts of the universe, and that the universe is actually nothing but physical reality as described by physicists.
redundancy – (1) When something is redundant or not needed. For example, secular ethics attempts to explain right and wrong without appealing to controversial religious entities—those entities would then be redundant to the explanation. This could be seen as an epistemic virtue of practical importance. It could be said that we should often hedge our bets by not having moral beliefs that depend on controversial entities that might not exist. Sometimes redundancy might help give us a reason to reject entities. (2) To have backup plans for our beliefs and arguments. For example, a conclusion could be redundant if we have several reasons to think it’s true. To refute one argument in favor of the conclusion would then be insufficient to give us a reason to stop believing the conclusion is true.
reference – (1) The objects that terms refer to. The terms “morning star” and “evening star” have different meanings—the “morning star” is the last star you can see in the morning and the “evening star” is the first star you can see at night. However, they both have the same reference (i.e. Venus). Gottlob Frege contrasted “reference” with “sense.” (2) A source of information used for citations. (3) Someone who can vouch for your qualifications.
reference fixing – An initial moment when someone names a thing (or type of thing). Reference fixing could involve a person pointing to an object that others perceive or by describing an object in order to make it clear what exactly is being given a name. For example, germs could have been described as whatever microscopic living organisms were causing people to get sick in certain ways. Reference fixing is often part of a “causal theory of reference.”
reference borrowing – Continuing a historical tradition of using a term to refer to something. Merely using a term for the same thing as someone else is to engage in reference borrowing. Reference borrowing is often part of a “causal theory of reference.”
reflective equilibrium – An ideal state when our beliefs and intuitions are consistent after deliberation and debate. Reflective equilibrium requires that we form beliefs based on our experiences and intuitions, and that we reject certain intuitive beliefs when they are incompatible with other more important intuitive beliefs or observations until we reach perfect coherence—when our beliefs and observations are all logically compatible and no longer contain contradictions. For example, some utilitarians could believe that we should sometimes kill innocent people to use their organs to save other lives because such a counterintuitive position is plausibly implied by utilitarian principles. A related concept to “reflective equilibrium” is “coherentism.”
refutation – An argument that opposes another argument or belief. It often refers to arguments that provide us with sufficient reason to reject a belief or argument. See “objection.”
refute – To disprove a belief or oppose an argument using another argument. It often refers to giving a sufficient reason to reject a belief or argument. For example, we can refute the belief that all crows are black by finding an albino crow.
regress – A solution that has the same problem it’s supposed to solve. For example, we might assume that everything needs to be created, and conclude that God created the universe; but our assumption will lead us to think that something needed to create God as well. Another example is to say that we only know something when we can justify it using an argument, but the argument will require that the premises of the argument also be known, and therefore we will need arguments for those premises as well. That can lead to an “infinite regress.”
reification – (1) Inappropriately treating something as an object, such as treating human beings as means to an end. For example, paying factory workers as little as possible and having them work in unsafe conditions just to make more profit. (2) To inappropriately think of abstract entities or concepts as concretely existing entities. For example, “courage” should not be thought of as a person.
relation letters – Predicate letters used in predicate logic that involve two or more things that are predicated. Capital letters are used to represent predicates, and “F, G, [and] H” are most commonly used. For example, “F” can stand for “attacks.” In that case “Fxy” is a statement that means “x attacks y.” See “predicate logic,” “predicate letters,” “predicate constants,” and “predicate variables” for more information.
relational predicate logic – A system of predicate logic that can express both monadic and polyadic predicates. See “monadic predicate” and “polyadic predicate” for more information.
relations of ideas – Statements that can be justified by (or true in virtue of) the definitions of words. For example, “all bachelors are unmarried” is a relation of ideas, and we can justify the fact that it’s true by appealing to the definitions of words. Relations of ideas are said to be tautological and non-substantive. David Hume thought the only statements that could be justified are “relations of ideas” and “matters of fact.”
relativism – See “epistemic relativism” or “cultural relativism.”
relevance – To be appropriately related. What is said in a philosophical discussion or debate should be relevant in that it should appropriately relate to the primary topic of conversation. Certain arguments and certain beliefs are related to the topic of conversation, and are worth talking about in order to understand the topic or to know what we should believe regarding the topic. Extreme forms of irrelevance are off-topic. Additionally, objections must be properly related to the arguments and beliefs they oppose, and giving objections that are somewhat irrelevant could change the subject or be a fallacious “red herring.”
relevance logic – A logical system that requires more than the simple truth table for conditional statements. According to classical logic, “If all dogs are mammals, then gold is a metal” is true. However, this doesn’t seem to be true using ordinary language and relevance logic attempts to explain why. According to relevance logic, both parts of the conditional must be related in the right way.
reliabilism – The view that better justifications for beliefs are more reliable than other ones, and that justified beliefs are justified because they were formed by a reliable belief-forming process. Reliabilism generally stresses that justified beliefs are more likely true than the alternatives precisely because justifications assure us that our beliefs are more likely true than they would be otherwise.
religious humanism – The view that human interests are of primary importance rather than those of gods or supernatural beings, but while still endorsing a religion. Religious humanism states that the main importance of religion is in serving humans rather than in serving supernatural beings.
res cogitans – Latin for “thinking thing.”
res extensa – Latin for “extended thing.”
responsibility – (1) Being in control of one’s moral decisions. A person who is morally responsible can be legitimately praised or blamed for her moral actions. Moral responsibility requires a certain level of sanity, competence, and perhaps free will. It is plausible that small children and nonhuman animals lack responsibility because they might lack the competence required. Additionally, there are excuses that can temporarily invalidate a person’s moral responsibility, such as when people are harmed by accident or when a person is coerced into harming others. (2) To be morally required to act a certain way. For example, parents are responsible for caring for their children.
retribution – The justification for punishment that considers a criminal to deserve to be harmed. For example, we could say that a murderer deserves to die in order to justify using the death penalty against the murderer. Retribution is sometimes criticized as a form of vengeance.
retributive justice – A principle of justice that states that punishment as some form of harm is the appropriate response to crime insofar as criminals deserve to be harmed. Retributive justice often states that the punishment should be proportionate to (or the same as) the crime itself. For example, murder would be appropriately punished with the death penalty.
revealed theology – The systematic study of gods using information attained by revelation—direct communication with one or more gods or supernatural beings. “Revealed theology” is often contrasted with “natural theology.”
revisionary – Definitions or concepts that depart from the usual or intuitive associations we have with certain terms. For example, people who say that knowledge requires beliefs we can justify well using argumentation might contradict the ordinary understanding of knowledge in that we seem to know that “1+1=2” and yet we might not know how to justify it well using argumentation.
rhetoric – (1) Persuasion using ordinary language. In this sense both rational argumentation and fallacious argumentation could be considered to be forms of rhetoric. Rhetoric is the specialization of public speaking, persuasion used by lawyers, and oratory used by politicians. (2) Argumentation used for the purpose of persuasion. This type of rhetoric can involve technical terminology used by specialists. This type of rhetoric is compatible with both public speaking and essays written by philosophers. (3) Nonrational persuasion through language. Fallacious arguments, propaganda, and various forms of manipulation could be considered to be rhetoric in this sense. This type of rhetoric is thought to be a source of power for sophists, pseudoscience advocates, snake oil salesmen, and cult leaders.
rhetorical arguments – Arguments used for persuasion. Rhetorical arguments are thought to be very important in politics and in the court room. See “rhetoric” for more information.
right – (1) Correct or appropriate as opposed to “wrong.” (2) “Morally right” as opposed to “morally wrong.” (3) To be on the other end of an obligation. For example, to have the right to life means that other people are obligated not to kill you without an overriding reason to do so.
rigid designator – Something that refers to the same thing in all possible worlds and never refers to anything else. For example, some philosophers argue that water refers to H2O in every possible world.
Ross’s intuitionism – William David Ross’s ethical theory that requires us to accept meta-ethical intuitionism. He argues that there are intrinsic values and prima facie duties, but such values and duties can conflict. Additionally, we can’t rationally determine what we should do using moral theories in all circumstances precisely because values and duties can conflict.
rule utilitarianism – A form of consequentialism that states that we should rely on simplified rules in order to maximize goodness (positive value) and minimize harm (negative value). Rule utilitarians often equate “goodness” with “happiness” and “harm” with “suffering.” Rule utilitarianism is sometimes inspired by skepticism regarding our ability to know how to maximize goodness given our situation. Many people who are willing to harm others “for a greater good” don’t get the expected results they hoped for. “Rule utilitarianism” is contrasted with “act utilitarianism.”
rules of equivalence – A synonym for “rules of replacement.”
rules of implication – Rules that state when we can have a valid inference. The rules state that certain argument forms are valid, such as “modus ponens,” which states that the propositions “if a, then b” and “a” can be used to validly infer “b” (“a” and “b” stand for any two propositions). See “valid,” “rules of inference,” and “rules of replacement” for more information.
rules of inference – Rules that state when we can have a valid inference. The rules state that certain argument forms are valid, such as “modus tollens,” which states that the propositions “if a, then b” and “not-b” can be used to validly infer “not-a” (“a” and “b” stand for any two propositions.) The rules also state when two statements are logically equivalent, such as “a” and “not-not-a.” See “valid,” “rules of implication,” and “rules of equivalence” for more information.
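A minimal sketch of what it means for a rule like modus tollens to be valid: no assignment of truth-values makes the premises true while the conclusion is false. The code is illustrative and not part of any standard library:

```python
# A brute-force check that modus tollens is a valid argument form:
# "if a then b" and "not-b" can never both be true while "not-a" is false.
from itertools import product

def implies(a, b):
    return (not a) or b

valid = True
for a, b in product([True, False], repeat=2):
    premises_true = implies(a, b) and (not b)
    conclusion = not a
    if premises_true and not conclusion:
        valid = False

print("Modus tollens is valid:", valid)  # True
```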
rules of replacement – Rules that tell us when two propositions mean the same thing. We can replace a proposition in an argument with any equivalent proposition. For example, we know that “all dogs are animals and all cats are animals” means the same thing as “all cats are animals and all dogs are animals” because of the rule known as commutation—“a and b” and “b and a” both mean the same thing. (“a” and “b” stand for any two propositions.)
salva veritate – Latin for “with unharmed truth.” It refers to potential changes to statements that will not alter the truth of the statement. For example, we could generally talk about “trilaterals” instead of “triangles” without changing the truth of our claims about them because they both refer to the same kind of mathematical object.
satisfiability – The ability to interpret a set of statements of formal logic in a way that would make them true. Statements that can all be simultaneously interpreted as true in this way are satisfiable. Consider the statement “a → b.” In this case “a” and “b” stand for any propositions and “→” stands for “implies.” We can interpret “a” as being “all bats are mammals” and “b” as being “all bats are animals.” In that case we can interpret the whole statement as saying “if all bats are mammals, then all bats are animals,” which is a true statement. Therefore, “a → b” is satisfiable. See “formal logic” and “interpretation” for more information.
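A brute-force sketch of a satisfiability check, using made-up formulas; real satisfiability solvers are far more sophisticated than this:

```python
# A brute-force satisfiability check for simple propositional formulas.
from itertools import product

def satisfiable(formula, num_vars):
    """Return a satisfying truth assignment for `formula`, or None."""
    for assignment in product([True, False], repeat=num_vars):
        if formula(*assignment):
            return assignment
    return None

# "a → b" is satisfiable: e.g. a = "all bats are mammals" (True) and
# b = "all bats are animals" (True) make the whole statement true.
print(satisfiable(lambda a, b: (not a) or b, 2))  # (True, True)

# "a and not-a" is unsatisfiable.
print(satisfiable(lambda a: a and (not a), 1))    # None
```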
saving the phenomena – Explanations that are consistent with our experiences or account for our experiences. For example, someone who claims that beliefs and desires don’t exist should tell us why they seem to exist as part of our experiences. Explanations that fail to save the phenomena are likely to be counterintuitive and inconsistent with our experiences.
schema – A synonym for “scheme of abbreviation.”
scheme of abbreviation – A guide used to explain what various symbols refer to for a set of symbolic logical propositions, which can be used to translate a proposition of symbolic logic into natural language. For example, consider the logical proposition, “A ∨ B.” A scheme of abbreviation for this proposition is “A: The President of the USA is a man; B: The President of the USA is a woman.” “∨” is used to mean “and/or.” We can then use this scheme of abbreviation to state the following proposition in natural language—“The President of the USA is a man or a woman.”
scientific anti-realism – The view that we shouldn’t believe in unobservable scientific entities, even if they are part of an important theory. For example, we do not observe electrons, but we know what effect they seem to have on things when we accept certain theories and models. Scientific anti-realists would claim that we don’t know if electrons really exist or not. One type of scientific anti-realism is “instrumentalism.”
scientific method – The way science makes discoveries, which involves hypotheses, observations, experiments, and mathematical models. It is often thought to follow the “hypothetico-deductive method.”
scientific realism – The view that unobservable scientific entities are likely real as long as their existence is properly supported by the effects they have. For example, germs were originally unobservable, but scientists hypothesized that various diseases were caused by them. “Scientific realism” can be contrasted with “scientific anti-realism.”
scientism – The view that science is the best source of knowledge for everything. For example, anyone who agrees with scientism would likely think that morality is either nonfactual or that science is the best way for us to attain moral knowledge. The word ‘scientism’ is generally used as a pejorative to refer to inappropriate views that science can be best used to attain knowledge when there are better non-scientific methods to attain knowledge. The non-pejorative term that is often used in philosophy rather than ‘scientism’ is ‘epistemic naturalism.’
secondary qualities – Certain qualities an object has, such as color, smell, and taste. John Locke argued that secondary qualities only exist because we perceive them and they are not actually part of the object itself. Many people think this implies that secondary qualities are illusions, but John McDowell argues that they exist as part of our experience and are not illusory because illusions are deceptive, but there is nothing deceptive about secondary qualities. According to McDowell, experiences of secondary qualities don’t trick us into having false beliefs about reality. “Secondary qualities” can be contrasted with “primary qualities.”
secular – Without any religious requirement or assumptions. For example, the argument that “God dislikes homosexuality, so homosexuality is immoral” would require us to accept religious assumptions. Secular arguments are meant to be persuasive to people of every religion and to those who lack a religion.
secular humanism – A view that reason and ethics are of great importance, and that we should reject all supernatural beliefs. Secular humanism also states that human beings are of supreme moral importance, so ethical systems should primarily concern human welfare. “Secular humanism” can be contrasted with “religious humanism.”
secularism – (1) The separation of church and state. To remove religious domination or requirements from politics. (2) To separate religious requirements or domination from any institution or practice.
secundum quid – Latin for “according to the particular case.” It’s generally used to refer to the “hasty generalization” fallacy.
selective evidence – See “one-sidedness.”
selective perception – A cognitive bias defined by people’s tendency to interpret their experiences in a way consistent with (and perhaps as confirming) their beliefs and expectations. For example, liberals who experience conservatives giving bad arguments could take that experience as confirmation that conservatives don’t argue well in general. This bias is related to “theory-laden observation” and the “confirmation bias.”
self-contradiction – A statement is a self-contradiction when it can’t be true because of the logical form. For example, “Socrates is a man and he is not a man.” This statement can’t be true because it is impossible to be something and not that thing. It has the logical form, “a and not-a.” (“a” is any proposition.)
self-evidence – A form of justification based on noninferential reasoning or intuition. It is often thought that self-evidence is based on the meaning of concepts. Mathematical beliefs, such as “1+1=2,” are plausible examples. If something is self-evident, then justification has come to an end. Self-evidence can help assure us that we can justify beliefs without leading to an infinite regress or vicious circularity. According to Robert Audi, understanding that a proposition is true because it’s self-evident can require background knowledge and maturity, it could take time to realize that a proposition is self-evident, and beliefs that are justified through self-evidence could be fallible. This opposes the common understanding that self-evident propositions can be known by anyone, are known immediately, and are infallible.
self-defeating – (1) The property of a belief (or theory) that gives us a reason to reject itself (the belief or theory). For example, the verification principle states that statements are meaningless unless they can be verified, but it seems impossible to verify the verification principle itself. That would suggest that the verification principle is self-defeating because it’s meaningless. (2) The property of an action that undermines itself. A self-defeating prophecy makes it impossible for itself to come true. For example, a person who hears a prophecy stating that she will die driving a car could decide never to drive a car again (and will therefore avoid dying that way). In that case the prophecy failed to predict the future after all.
self-serving bias – The bias defined by people’s tendency to attribute their successes to positive characteristics they possess, and their tendency to attribute their failures to external factors that are outside of their control. For example, a person could think she does well on a test because of knowing the material, but that same person could say she failed another test because the test was too hard. This could be related to the “illusory superiority” bias.
semantic completeness – A logical system is semantically complete if and only if it can prove everything it is supposed to be able to prove. For example, propositional logic is semantically complete if and only if it can be used to determine whether any possible argument is valid.
semantic externalism – The view that the meaning of terms can be partially based on things external to our minds. For example, a semantic externalist would likely agree that no chemicals other than H2O could be water, even if they are functionally equivalent and cause the same experiences—quench thirst, boils when hot, etc.
semantic internalism – The view that the meaning of terms can be entirely based on things in our minds. For example, a semantic internalist would likely agree that we could find out that some chemical other than H2O is also water if it is functionally equivalent and causes the same experiences—it quenches thirst, boils when hot, etc.
semantics – The meaning of words or propositions. Some people are said to be “debating semantics rather than substance” when they argue about what terms mean rather than what is true or false regardless of how we define terms. “Semantics” is often contrasted with “syntax.”
sensation – (1) An experience caused by one or more of the five senses (sight, sound, touch, taste, and smell). (2) To experience qualia or a feeling.
sense – (1) What Gottlob Frege called “sinn” to refer to the meaning or description of a word. For example, “the morning star” and “the evening star” both have different senses, but refer to the same thing (i.e. the reference is the planet Venus). The sense of “the morning star” is “the last star we can see in the morning,” and the sense of “the evening star” is “the first star we can see at night.” Gottlob Frege contrasted “sense” with “reference.” (2) The ability to understand. For example, we might talk about someone’s good sense. (3) To perceive. For example, we might say that we sense people in the room when we can see them. (4) An ability of perception; such as sight, sound, touch, taste, and smell. These are said to be “the five senses.”
sense data – The experiences caused by sense perception (the five senses). Sense data can be understood without interpretation and exist exactly as they are experienced. The visual sense data of a green apple includes a visual experience consisting of a blotch of green.
sense perception – See “perception.”
sensible intuition – According to Immanuel Kant, sensible intuition refers to the concepts required for experience, such as space and time. Without those concepts it would be impossible to experience the phenomenal world. For example, visual experience would just involve blotches of color that would be impossible to interpret as being information about an external world.
sentential – The property of being related to sentences or propositions. For example, sentential logic is a synonym for “propositional logic.”
sentential logic – A synonym for “propositional logic.”
sentimentalism – See “moral sentimentalism.”
set – (1) A group of things that all share some characteristic. For example, the set of cats includes every single cat that exists. (2) In Egyptian mythology, Set is the god of deserts, storms, and foreigners. Set has the head of an animal similar to a jackal, and he is known as “Sēth” in Ancient Greek.
Sheffer stroke – A symbol used in propositional logic to mean “not both” or “it’s not the case that a and b are both true.” (“a” and “b” are any two propositions.) The symbol used is generally “|” or “↑.” For example, “Dogs are mammals ↑ dogs are lizards” means that “it is not the case that dogs are both mammals and lizards.”
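A small sketch showing the Sheffer stroke as a truth function and verifying that “not” and “and” can be defined from it alone (illustrative only):

```python
# The Sheffer stroke as a truth function, plus a check that "not" and "and"
# can be recovered from it.
from itertools import product

def nand(a, b):
    """a ↑ b: true except when a and b are both true."""
    return not (a and b)

for a, b in product([True, False], repeat=2):
    assert (not a) == nand(a, a)                       # not-a == a ↑ a
    assert (a and b) == nand(nand(a, b), nand(a, b))   # a and b == (a ↑ b) ↑ (a ↑ b)

print("Negation and conjunction are definable from the Sheffer stroke.")
```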
Ship of Theseus – A ship used as part of a thought experiment. Imagine a ship is slowly restored and all the parts are eventually replaced. This encourages us to ask the question—Is it the same ship?
signifier – A sign that conveys meaning. For example the word ‘red’ is a signifier for a color. Signifiers are contrasted with the “signified.”
signified – The entity, state of affairs, meaning, or concepts referred to by a sign. For example, the word ‘Socrates’ refers to an actual person (the signified). The “signified” is often contrasted with “signifiers.”
simpliciter – Latin for “simply” or “naturally.” It’s used to describe when something is considered without qualification. For example, torturing a helpless nonhuman animal is morally wrong simpliciter.
simplicity – (1) See “Occam’s razor.” (2) To lack complexity.
simplification – A rule of inference that states that we can use “a and b” as premises to validly conclude “a.” (“a” and “b” stand for any two propositions.) For example, “Socrates is a man and Socrates is mortal; therefore, Socrates is a man.”
sinn – German for “sense.”
skepticism – (1) Disbelief. Skepticism of morality could be the belief that morality doesn’t really exist. (2) A state of uncertainty. A skeptic about gods might not believe or disbelieve in gods. (3) An attitude defined by a healthy level of doubt. (4) See “Pyrrhonism.”
slave morality – A type of morality that is life-denying and views the world primarily in terms of evil. Slave morality tends to define “goodness” in terms of not being evil (i.e. not harming others).
slippery slope – (1) An argument that requires us to believe that incremental causal changes will likely happen given that we make certain decisions. For example, having violence on television might desensitize people to violence and lead to even greater violence on television in the future by an ever-increasing demand for more thrilling forms of entertainment. (2) An informal fallacy committed by arguments that require us to believe that some decision will likely lead to incremental changes for the worse without sufficient evidence for us to accept that the changes are likely to actually happen. For example, some people argue that we shouldn’t legalize same-sex marriage because that would likely lead to marriages between brothers and sisters, and eventually it would lead to marriages between humans and nonhuman animals.
slow track quasi-realism – An attempt to make sense out of moral language (such language involving moral facts, moral arguments, and moral truth) without endorsing moral realism by explaining how various particular moral statements can be coherent without moral realism. Even so, slow track quasi-realism does not require us to make the assertion that all moral language is perfectly consistent. “Slow track quasi-realism” is often contrasted with “fast track quasi-realism.” See “quasi-realism” for more information.
social construct – Something that exists because of collective attitudes and agreement. For example, money is a social construct that would not exist without people having certain collective attitudes.
social construction – The ability of collective attitudes and actions to create something. For example, our collective attitudes and actions create money, language, and the Presidency of the USA. These things would stop existing if we no longer believed in them.
social contract – The implicit mutual agreement or common acceptance of laws or ethical principles. A social contract does not have to be consciously agreed-upon. It’s an explanation for the legitimacy of laws or ethical principles insofar as we prefer to have them given the choice (and we could rationally agree to them).
social contract theory – (1) A theory that explains the legitimacy of laws or ethical principles in terms of a “social contract.” (2) The view that ethics originates from a social contract. Perhaps what’s morally right and wrong is based on the social contract (what people agree is right or wrong within their society).
social convention – See “convention.”
social darwinism – The view that people should fight for survival through competition, such as through free-market capitalism. It is thought that those who do well in society (e.g. by making lots of money) are superior and deserve to do well, but that those who don’t do well deserve not to do well. Social darwinists believe that helping failing groups (e.g. poor people) will help keep those groups around when it would be better to let the groups die out.
social progress – For a culture to be improved through changes in political institutions, economic systems, education, technology, or some other cultural factor. Technological improvement is perhaps the least controversial form of social progress. Also see “sociocultural evolution” for more information.
social reality – Reality that exists because of the collective attitudes and actions of many people. For example, money, language, and the Presidency of the USA only exist because of the attitudes and actions of people. These things would stop existing if our attitudes and actions were changed in certain ways. See “institutional fact” for more information.
socialism – (1) An economy where the means of production (factories and natural resources) are publicly owned rather than privately owned. See “communism” for more information. (2) An economy that resembles communism to some degree, or that has more social programs than usual. This is also often called a “mixed system” (that has elements of both communism and capitalism).
sociocultural evolution – The view that people continue to find ways to adapt to their environment using technology, political systems, laws, improved education, and other cultural factors. See “social progress” for more information.
socratic dialectic – See “dialectic.”
soft atheism – To not believe in gods without believing gods don’t exist. Soft atheism is a form of indecision—to neither believe in gods nor to believe they don’t exist. “Soft atheism” is contrasted with “hard atheism.”
soft determinism – The view that the universe is deterministic and that people have free will. Soft determinists are compatibilists, but not all compatibilists are soft determinists.
solipsism – The view that one’s mind is the only thing that exists. All other people and external objects are merely illusions or part of a dream.
sophism – A fallacious argument, generally used to manipulate or deceive others. Generally refers to “informal fallacies.”
sophist – (1) “Wise person.” (2) A rhetoric teacher from ancient Greece. Some of those teachers traveled to other countries, and questioned the taboos and cultural beliefs of the Greeks because those taboos and cultural beliefs were not shared by everyone in other countries. (3) Someone who is willing to use fallacious reasoning to manipulate the beliefs of other people. This sense of “sophist” is often contrasted with “philosopher.”
sophistry – Using nonrational argumentation that generally contains errors (flaws of reasoning). Sophistry generally refers to the manipulative use of “informal fallacies.”
sound argument – An argument that’s valid and has true premises. For example, consider the following sound argument—“If all dogs are mammals, then all dogs are animals. All dogs are mammals. Therefore, all dogs are animals.”
sound logical system – A logical system is sound if every statement it can prove using the axioms and rules of inference is a tautology (a logical truth). See “tautology” for more information.
soundness – See “sound argument” or “sound logical system.”
spatial parts – Physical parts of an object, such as molecules, hairs, or teeth. “Spatial parts” can be contrasted with “temporal parts.”
speech act – A communicative action using language. For example, commanding, requesting, or asking.
speciesism – Prejudice or bias against other species—humans would be speciesists for being prejudiced or biased against nonhuman animals. According to Peter Singer, speciesism refers to the bias of those who think that their own species is superior without any characteristic being the reason for that superiority (e.g. higher intelligence). If Singer is correct, then it could be morally right for one species to generally be treated better than another, and it could also be morally right for some members of a species to be treated worse due to lacking certain positive characteristics.
spillover – A synonym for “externalities.”
spooky – Something is spooky if it is mysterious, supernatural, or other-worldly. We have a view of the world full of atoms and energy, and anything that isn’t explained by physical science is going to be regarded with a skeptical attitude by many philosophers.
spurious accuracy – A synonym for “overprecision.”
square of opposition – A visual aid used in logic to derive various logical relations between categories and quantifiers. For example, the square shows that if we can know that not all x are y, then we know that there is at least one x that is not a y. If we know that not all animals are dogs, then we know that there is at least one animal that is not a dog.
state-by-state dominance – A synonym for “statewise dominance.”
state of affairs – A situation or state of reality. For example, the state of affairs of dropping an object while standing on the Earth will lead to a state of affairs consisting of the object falling to the ground.
statement – Classically defined as a sentence that’s true or false. However, some philosophers argue that a statement could have some other truth value, such as indeterminate (i.e. neither true nor false). For example, “this sentence is false” might be indeterminate.
statement letter – See “propositional letter.”
statement variable – See “propositional variable.”
statewise dominance – The property of a decision that can be said to be “superior” to another based on the decision-maker’s preferences and the fact that the outcomes of the decision are more likely to be preferable. In every possible state, the outcome of a statewise dominant decision is at least as preferable as the outcome of the alternative, and in at least one state it is strictly more preferable. See “stochastic dominance” for more information.
stipulative definition – A definition used to clarify what is meant by a term in a specific context. Stipulative definitions are often given to avoid the ambiguity or vagueness words have in common usage that would make communication more difficult. “Stipulative definitions” can be contrasted with “lexical definitions.”
stochastic – Regarding probability calculus. Stochastic systems have predictable and unpredictable elements that can be taken to be part of a probability distribution. See “probability calculus” and “probability distribution” for more information.
stochastic dominance – The property of a decision that can be said to be “superior” based on the decision-maker’s preferences and the odds of leading to a preferable outcome. For example, a decision to eat food rather than starve to death has stochastic dominance assuming that the decision-maker would prefer to live and avoid pain.
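A sketch of one standard formalization (first-order stochastic dominance), using made-up outcomes and probabilities; the entry itself does not commit to this particular formalization:

```python
# A sketch of a first-order stochastic dominance check: one option dominates
# another if, for every value threshold, it is at least as likely to deliver
# an outcome that good or better.

def prob_at_least(distribution, threshold):
    """Probability of getting an outcome valued at least `threshold`."""
    return sum(p for value, p in distribution if value >= threshold)

# Outcomes are (value-to-the-decision-maker, probability) pairs (hypothetical).
eat = [(10, 0.9), (5, 0.1)]     # hypothetical: eating food
starve = [(0, 0.8), (5, 0.2)]   # hypothetical: starving

values = sorted({v for v, _ in eat + starve})
dominates = all(prob_at_least(eat, v) >= prob_at_least(starve, v) for v in values)
print("Eating stochastically dominates starving:", dominates)  # True
```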
stoicism – The philosophy of the Stoics. They thought virtue was the only good, that the virtuous are happy, that it’s virtuous to embrace reality rather than condemn it, that the universe is entirely physical, and that a pantheistic deity assures us that everything that happens is part of a divine plan.
straw man – A fallacious form of argument committed when we misrepresent another person’s arguments or beliefs in order to convince people that the arguments or beliefs are less reasonable than they really are. For example, Andrea might claim that “stealing is generally wrong,” and Charles might then reply, “No. Andrea wants us to believe that stealing is always wrong, but sometimes stealing might be necessary for survival.” Straw man arguments are not “charitable” to another person’s arguments and beliefs—they fail to present them as being as rationally defensible as they really are.
strong A.I. (artificial intelligence) – A computer that has a mind of its own that is similar to the mind of a person.
strong argument – An inductive argument that is unlikely to have true premises and a false conclusion at the same time. Such an argument is thought to be a good reason to believe the conclusion to be true as long as we assume the premises are true. For example, “Half the people who had skin cancer over the last one-hundred years were given Drug X and their cancer was cured, and no one was cured who wasn’t given Drug X. Therefore, Drug X is likely a cure for skin cancer.” “Strong arguments” are sometimes contrasted with “cogent arguments.”
strong atheism – A synonym for “hard atheism.”
strong conclusion – An ambitious conclusion. Strong conclusions require more evidence than weak conclusions. For example, consider the following argument—“If objects fall whenever we drop them, then the best explanation is that invisible fairies move objects in a downward direction. Objects fall whenever we drop them. Therefore, the best explanation is that invisible fairies move objects in a downward direction.” In this case the conclusion is too strong and we should present better evidence for it or not argue for it at all.
structuralism – (1) In philosophy of mathematics, structuralism refers to the view that the meaning of mathematical objects is exhausted by the place each object has within a mathematical system. For example, the number one can be defined as the natural number after zero. Structuralism is a form of “mathematical realism.” (2) In literary theory, structuralism refers to an attempt to introduce rational criteria for literary analysis. Structuralism also refers to the view that there is a formal meta-language that can help us understand all languages. (3) In philosophy of science, structuralism refers to the view that we should translate theories of physics into formal systems.
suberogatory – Actions or beliefs that are inferior to alternatives (or somewhat harmful), but are permissible. Suberogatory beliefs are compatible with rational requirements or normative epistemic constraints; and suberogatory actions are inferior to alternatives (or somewhat bad), but are compatible with moral requirements. For example, being rude is not generally serious enough to be “morally wrong,” but it is suberogatory. “Suberogatory” can be contrasted with “supererogatory.”
subject term – A synonym for “minor term.”
subjective certainty – A synonym for “psychological certainty.”
subjective idealism – The view that there is no material substance and that external reality exists only in our mind(s). For example, George Berkeley argued that only minds exist and that all of our experiences of the external world are caused by God. We all live in a shared dream world with predictable laws of nature.
subjective ought – What we ought to do with consideration of the knowledge of the person who will make a moral decision. What we subjectively ought to do is based on what is reasonable for us to do given our limited understanding of what will happen. For example, some utilitarians say we ought to do whatever we have reason to think will likely maximize happiness. We might say that a person who gives food to a charity is doing what she ought to do as long as it was very likely to help people and very unlikely to harm them, even if many of the people who eat the food have an unexpected allergic reaction. “Subjective ought” can be contrasted with “objective ought.”
subjective reason – See “agent-relative reason.”
subjective right and wrong – What is right or wrong with consideration of the knowledge of the person who will make a moral decision. Subjective right and wrong involves proper and improper moral reasoning. For example, we might say that a person ought not buy a lottery ticket because that person has no reason to expect to win, even if she really does buy one and win. We might say, “You won, but you had no reason to think you’d win. You just got lucky.” “Subjective right and wrong” can be contrasted with “objective right and wrong.”
subjectivism – A view that moral judgments refer to subjective states. For example, “stealing is wrong” would be true if the person who says it hates stealing, but it would be false if the person who states it likes stealing. Subjectivism has been criticized for being counterintuitive insofar as people who disagree about what’s morally right or wrong do not think they are merely stating their subjective states. When we give arguments for a moral position, we often think other people should agree with us because morality is “not just a matter of taste.” If subjectivism is true, then moral disagreement would be impossible, and moral justification would plausibly be irrational.
subjectivity – See “ontological subjectivity” and “epistemic subjectivity.”
subsentential – Something relating to parts of sentences, such as predicate logic.
subsentential logic – A synonym for “predicate logic.”
substance – (1) A type of foundational domain of reality. For example, materialists think matter is the only substance, and dualists think that both matter and mind are substances. (2) The most basic kinds of stuff that don’t require anything else to exist. For example, according to Rene Descartes, there are two different substances: matter and mind.
substance dualism – See “dualism.”
substantive – Non-tautological. For example, saying that rocks fall because of gravity is a substantive claim about reality. See “tautology” for more information.
sufficient condition – A condition that assures us that something else will happen or exist. For example, hitting a fly with a hammer is sufficient to kill the fly. Sufficient conditions can be contrasted with “necessary conditions.”
sufficient reason – (1) A justification that is good enough to make a conclusion reasonably believed. (2) A state of affairs that has a sufficient cause or explanation for existing and being the way it is.
sui generis – Latin for “of its own kind” or “unique in its characteristics.” A separate category that is different from all other categories. For example, some philosophers think minds are sui generis and can’t be reduced to brain activity. This is related to the concept of being “irreducible” because anything unique in this sense can’t be fully understood in terms of its parts.
summum bonum – Latin for “the supreme or highest good.”
super man – See “overman.”
supererogatory – Actions that are good or praiseworthy, but are not morally required. They are “above the call of duty.” “Supererogatory” actions can be contrasted with “obligatory” and “suberogatory” actions.
supervenience – Something supervenes on something else if underlying conditions perfectly correlate with it. A state of affairs (A) supervenes on another state of affairs (B) if any change in (A) requires a change in (B). The mind seems to supervene on the brain, and morality seems to supervene on physical and psychological facts. Any change in the mind seems to require a change in the brain, and any change in the moral status of a situation (what one ought to do) seems to require different circumstance involving physical and psychological facts.
supporting argument – A synonym for “positive argument.”
suppressed conclusion – A synonym for “unstated conclusion.”
suppressed premise – A synonym for “unstated premise.”
syllogism – (1) An argument consisting of two premises and a conclusion. (2) A synonym for “deductive argument.”
symbolic logic – A formal logical system devoid of content. Symbols are used to replace content and logical connectives. For example, “if all men are mortal, then Socrates is mortal” could be written as “A → B.” In this case “A” stands for “all men are mortal,” “B” stands for “Socrates is mortal,” and “→” stands for “implies.” See “formal logic” and “logical connectives” for more information.
syntactic completeness – A logical system is syntactically complete if and only if adding any statement that is unprovable in the system as a new axiom would produce a contradiction.
syntactical variable – A synonym for “metavariable.”
syntax – The arrangement of words or symbols. Syntax can involve rules and symbol manipulation. For example, “Like chocolate do I” would be an improper way to say “I like chocolate” in the English language. Logical form has syntax, but lacks semantics. “Syntax” is often contrasted with “semantics.”
synthesis – (1) A combination of things that become something new. For example, the synthesis of copper and tin creates bronze. (2) In dialectic, it refers to a new thesis (hypothesis or mode of being) proposed to avoid the pitfalls of the old thesis. It is considered to be a “synthesis” as long as the new thesis has similarities to the old thesis because it’s then a new and improved theory based on both the old thesis and its antithesis.
synthetic – Statements that cannot be true by definition. Instead, they can be true because of how they relate to something other than their meaning, such as how they relate to the world. For example, “humans are mammals” is synthetic and can be justified through empirical science. “Synthetic” is the opposite of “analytic.”
synthetic a priori – Statements that are not true by definition or entirely justified by observation. David Hume seemed to overlook this category when he divided all knowledge into matters of fact and relations of ideas. Immanuel Kant coined this term and thought that “space and time exists” is an example of a synthetic a priori statement insofar as we can know that human beings require concepts of space and time in order to observe anything. However, Kant did not think we could know if space and time refers to anything other than as something that’s part of our experience.
tabula rasa – Latin for “blank slate.” Refers to the hypothesis that people were born not knowing anything.
tacit knowledge – Knowledge that is not stated or consciously understood, but is unconsciously understood, intuitively held, or implied by one’s other knowledge. Tacit knowledge is often attained or held without the awareness of the person who has it. “Tacit knowledge” can be contrasted with “explicit knowledge.”
tautology – (1) A statement with a logical form that guarantees that it is true. The statement “Socrates was a man or he wasn’t a man” is true no matter what because it has the logical form “a or not-a.” (“a” is any proposition.) (2) A rule of replacement that has two forms: (a) “a” and “a and a” both mean the same thing. (b) “a” and “a and/or a” both mean the same thing. (“a” stands for any propositions) For example, “Socrates is a man” means the same thing as “Socrates is either a man or a man.”
technê – Greek for “know-how, craft, or skill.”
teleology – A system or view that’s goal-oriented. Aristotelian teleology is the view that things in nature have a purpose and that they are good if they achieve their purpose well. Utilitarianism is also considered by many to be “teleological” because it posits that maximizing happiness is the appropriate goal for people.
temporal interpretation of modality – The view that “necessity” and “possibility” are based on time. For something to be necessary is for it to be true of all times, and for something to be possible is for it to be true at some time. It’s necessary that dogs are mammals because dogs are mammals at all times, and it’s possible for it to rain because there is at least one time that it rains. See “modality” and “truth conditions” for more information. The “temporal interpretation of modality” can be contrasted with the “worlds interpretation of modality.”
temporal parts – Time-dependent parts of a persisting thing often thought of as time-slices. The view that persisting things have temporal parts is based on the assumption that a persisting thing only exists in part at any given time-slice. We can talk about the temporal parts of a person in terms of the person yesterday, the person today, and the person tomorrow; and the person is thought to only exist in her entirety given every moment of her existence. We can talk about a person in any given time slice (such as August 3, 10:30 am), but that is not the entirety of the person. One reason some philosophers believe in temporal parts is because it can explain how an object can have two conflicting properties, such as how a single apple can be both green (while growing) and red (when ripe). If it has temporal parts, then we can say it is green in an earlier time-slice, and red in a later time-slice.
temporal modality – What makes a proposition true or false based on whether it is being applied to the past, present, or future. For example, dinosaurs existed in the past, but they do not presently exist.
term – See “terminology.”
terminology – (1) A word or phrase used to refer to something. For example, “critical thinking” is a term consisting of two words used to refer to a single concept. (2) A collection of words or phrases used to refer to something. For example, the technical concepts philosophers discuss involves a lot of terminology.
testability – The property of a hypothesis or theory that makes it possible to produce experiments that can reliably provide counterevidence against the theory. It can be said that something is testable in this sense if (a) there are certain events that could occur that would be incompatible with the theory and (b) the incompatible events could be produced in repeatable experiments.
testimonial evidence – The experience of a person used as evidence for something. Testimonial evidence is often fallacious, but it can count for something and be used when we use inductive reasoning. For example, if hundreds of people all find that a drug effectively works for them and no one found that the drug was ineffective, then that would be evidence that the drug is really effective. See “anecdotal evidence.”
theism – The view that one or more personal gods exist.
theocracy – A government dominated or ruled by a theistic religious group. The rulers of theocracies claim to know the mind and will of one or more god to legitimize their power.
theological noncognitivism – The view that judgments concerning gods are neither true nor false. Theological noncognitivists might think that there is no meaningful concept of “gods.” In that case statements about gods would be nonsense. For example, some philosophers have argued that it’s impossible to prove gods exist and that only theories we can prove to be true are meaningful. (See “verificationism” for more information.)
theology – The systematic study of gods. Also see “natural theology” and “revealed theology.”
theorem – A statement that we can know is true because of the axioms and rules of inference of a logical system. For example, consider a logical system with “a or not-a” as an axiom and the rule of inference that “a implies a or b.” The following is a proof of that system—“A or not-A. Therefore, A or not-A, or B.” In this case “A or not-A, or B” is a theorem. See “derivation,” “axioms,” “rules of inference,” and “logical system” for more information.
theoretical knowledge – See “theoretical wisdom.”
theoretical wisdom – The attainment of knowledge concerning the nature of reality and logic. Aristotle contrasts “theoretical wisdom” with “practical wisdom.”
theoretical virtues – The positive characteristics that help justify hypotheses or theories, such as simplicity and comprehensiveness. Theories that have greater theoretical virtues than alternatives are “more justified” than the alternatives.
theory – A comprehensive explanation for various phenomena. A hypothesis is not necessarily different from a “theory,” but the term ‘theory’ tends to be used to refer to hypotheses that have been systematically defended and tested without facing strong counter-evidence. In science, theories are taken as being our best explanations that should be believed and relied upon for practical everyday life. However, philosophers are often unable to say when a philosophical theory is the most justified and generally don’t insist that a philosophical theory should be accepted by everyone.
theory-laden observation – Observations are theory-laden when they are influenced by assumptions or interpretation. For example, visual experience is a collection of color blotches, but we interpret the experience as a world extended in space and time.
thesis – (1) An argumentative essay. For example, a doctoral thesis. (2) The conclusions made within an argumentative essay. For example, Henry David Thoreau concluded that people should stop paying their taxes in certain situations in his essay “Civil Disobedience.” (3) The claim or solution made within a dialectic. For example, capitalism could be considered to be a thesis used to live our lives with greater freedom, according to a Hegelian dialectic. See “dialectic” for more information.
thick concepts – Concepts that are fleshed out and involve a detailed understanding, such as deception and the veil of ignorance. These concepts are not as indispensable to us as general concepts that are less fleshed out, such as belief and desire. “Thick concepts” can be contrasted with “thin concepts.”
thin concepts – Concepts that are very general and perhaps even indispensable, such as right, wrong, belief, and desire. These concepts relate to our experiences and can be explained in further detail by competing interpretations or theories. “Thin concepts” can be contrasted with “thick concepts.”
thing in itself – Reality or a part of reality as it actually exists apart from flawed interpretation or perception.
third-person point of view – What it’s like to understand or experience the behavior and thoughts of other people as an observer. For example, to view another person eating breakfast is done from the third-person point of view. The “third-person point of view” is often contrasted with the “first-person point of view.”
token – (1) A particular concrete instance or manifestation. For example, some philosophers argued that every token mental event (such as a particular pain) is identical to a token physical event (such as something happening in the brain), but pain in general is not necessarily caused by any generalized type of physical event. For example, the same experience of pain might also be identical to some mechanical activity of a sentient robot. “Tokens” are often contrasted with “types.” (2) A symbolic object or action. For example, coins can be given out to be used as money at a carnival. (3) To have members of a different group just to give the impression of inclusiveness. For example, to have a black man play the part of an expendable character in a horror movie. The black man is often the first of the main characters to die in a horror movie.
totalitarianism – A political system where people have little to no freedom, and the government micromanages many details of the lives of citizens.
thought experiment – An imaginary situation used to clarify a hypothesis or support a particular belief. For example, someone might say, “Hurting people is never wrong.” Another person might reply with a thought experiment—“Imagine someone decided to beat you up just because you made her angry. Don’t you think that would be wrong?”
trans-world identity – For something to exist in multiple worlds. Some philosophers talk about there being multiple possible worlds. For example, they might say that it was possible for George Washington to become the King of the United States because there’s a possible world where he became the king. In this case we could say that George Washington has trans-world identity because he exists in multiple possible worlds. “Trans-world identity” can be contrasted to “world-bound individuals.” See “modality” and “possible world” for more information.
transcendence – (1) The property of being beyond or outside. (2) Being beyond and independent of the physical universe. Some people believe God is transcendent. “Transcendence” of this type can be contrasted with “immanence.”
transcendental apperception – According to Immanuel Kant, this is what makes experience possible. It is the unity and identity of the mind—our ability to have a single point of view or field of experience. Without a transcendental apperception, our experiences would be free-floating, lack continuity, and lack unification. “Transcendental apperception” can be contrasted with “empirical apperception.”
transcendental argument – An argument concerning what is required (or might be required) for a state of affairs. Transcendental arguments are often arguments made involving the necessary condition for the possibility of something else. For example, visual experience seems to require the assumption that an external world exists that can be seen (or we would only see blotches of colors).
transcendentalism – A literary, political, and philosophical field that was based on the Unitarian church. Transcendentalists often criticized conformity, criticized slavery, and encouraged solitude in the wild outdoors.
translation – (1) The restatement of a statement in natural language to a statement of formal logic. For example, “either George Washington was a dog or a mammal” can be translated into propositional logic as “P ∨ Q,” where “P” stands for “George Washington was a dog” and “Q” stands for “George Washington was a mammal” and “∨” is a logical connective meaning “and/or.” See “logical form” for more information. (2) In ordinary language, translation refers to a restatement of a sentence (or set of sentences). For example, the sentence “Il pleut” can be translated from French to English as “It’s raining.”
transparency – (1) The property of an epistemic state that guarantees that we know when the epistemic state exists. Weakly transparent epistemic states guarantee that we know we have them, and strongly transparent epistemic states guarantee that we know when we have them and when we don’t. For example, pain is a plausible example of a strongly transparent epistemic state. (2) The property of being in the open without pretense. (3) The property of being see-through. For example, glass windows are usually see-through.
transposition – A rule of replacement that states that “if a, then b” means the same thing as “if not-b, then not-a.” (“a” and “b” stand for any two propositions.) For example, “if Socrates is a dog, then Socrates is a mammal” means the same thing as “if Socrates is not a mammal, then Socrates is not a dog.”
Trigger’s Broom – A broom used in a thought experiment in which all the parts of the broom have been replaced. This encourages us to ask the question, “Is it still the same broom?”
true – The property that some propositions have that makes them based on reality. According to Aristotle, a statement is true if it corresponds with reality. For example, “Socrates was a man” is true. However, there might be other uses of the word ‘true,’ such as, “it is true that the pawn can move two spaces forward when it is first moved in a game of Chess.” Many such “truths” are based on agreements or human constructions and are not factual in the usual sense of the word. See “correspondence theory of truth” and “deflationary theory of truth” for more information. “True” is the opposite of “false.”
truth conditions – The conditions that make a statement true. For example, the truth condition of “the cat is on the mat” is a cat on a mat. The statement is true if and only if a cat is on a mat. Sometimes truth conditions are controversial, such as when we say it’s “necessary that people are rational animals.” It could be true if and only if people are rational animals in all times, or in all possible worlds, or perhaps given some other condition.
truth-functional – Complex propositions using logical connectives that can be determined to be true or false based on the truth-values of the simple propositions involved. For example, “dogs are mammals, and cats aren’t reptiles” is a complex proposition that contains two simple propositions: (a) “Dogs are mammals” and (b) “cats aren’t reptiles.” In this case we can determine the truth by knowing the truth of the simple propositions. Both propositions are true and form a single conjunction, so the complex proposition is also true.
truth preservation – The property of reasoning that can’t have true premises and false conclusions. See “valid” for more information.
truth table – A visual display used in formal logic that has columns and rows that are used to determine which propositions are true or false, what arguments are valid, what propositions are equivalent, etc. For example, the truth table for “p and/or q” is used to show that it’s true unless both p and q are false at the same time. (“George Washington is either a person and/or a dog” because at least one of those options is true.) The table has one row for each combination of truth values: “p and/or q” is true when both are true, true when only p is true, true when only q is true, and false when both are false.
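As an illustrative aside (not part of the original glossary entry), the rows of such a table can be generated mechanically; the Python sketch below simply enumerates every combination of truth values:

```python
from itertools import product

# Enumerate every combination of truth values for p and q and
# evaluate the inclusive disjunction "p and/or q" for each row.
print("p     | q     | p and/or q")
for p, q in product([True, False], repeat=2):
    print(f"{str(p):5} | {str(q):5} | {p or q}")
```

The only row that prints False in the last column is the one where p and q are both false, which is exactly the point made in the entry above.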
truth tree – A visual-oriented method used in formal logic to determine if a set of propositions are contradictory or consistent; or if an argument is valid or invalid; etc. For example, consider the argument form “if P, then Q; if Q, then R; therefore, if P, then R.” A truth tree proves this argument form to be valid: we assume the premises together with the negation of the conclusion, and every branch of the tree closes with a contradiction.
truth-values – Values involving the accuracy or factual nature of a proposition, statement, or belief. For example, true and false. There could be others, but those are the only two non-controversial truth-values.
tu quoque – Latin for “you too.” Often refers to a type of fallacious argument that implies that someone’s argument should be dismissed because the person who made the argument is a hypocrite. For example, a smoker might argue that cigarettes are unhealthy and someone else might reply, “But you smoke, so smoking is obviously not unhealthy!” See “ad hominem” for more information.
Turing machine – A machine that follows rules that cause it to make certain motions based on symbols. Turing machines are capable of using finite formal systems of logic and mathematics. Turing machines were originally hypothetical devices, but computers could be considered to be a type of Turing machine.
Turing test – A test used to examine the ability of a machine to speak natural language within a conversation. Machines that speak natural language within conversations in exactly the same way as real human beings pass the Turing test. Any machine that passes the Turing test could be said to adequately simulate human behavior in regards to its ability to simulate a conversation in natural language. There could be tests similar to the Turing test that test a machine’s ability to simulate other types of human behavior.
Twin Earth – A hypothetical world or planet exactly like ours in almost every respect. For example, Hilary Putnam invented this concept to introduce a world with a substance exactly like water that has a different chemical composition. See “quater” and “possible world” for more information.
type – A kind of thing or a general category. For example, some philosophers argued that every mental event type (such as pain) is identical to a physical event type (such as brainwaves). This would imply that the same experience of pain could never be identical to some mechanical activity of a sentient robot—it would therefore be impossible to have a robot with the same thoughts and feelings as a human being. “Types” are sometimes contrasted with “tokens.”
Übermensch – German for “overman.”
underdetermination – The status of having insufficient evidence to know what we should believe. For example, taking a pill and being cured of an illness shortly afterward could be evidence that the pill cured the illness, but it’s also possible that the person would be cured of the illness regardless of taking the pill. Fallacies related to underdetermination include “hasty generalization,” “cum hoc ergo propter hoc” and “anecdotal evidence.”
undistributed middle – A fallacious categorical syllogism committed when the middle term is neither distributed in the major premise nor the minor premise. For example, “All dogs are mammals. All animals are mammals. Therefore, no dogs are animals.” See “distribution” and “middle term” for more information.
universal – (1) What’s true for everyone or everything relevant. For example, it’s a universal fact of humans that they are all mammals. (2) A concept can be said to be “a universal” when it refers to a type of thing (e.g. goodness). There were realists who thought universals existed apart from our generalizations, conceptualists who thought universals existed only in the mind, and nominalists who thought that universals were merely convenient “names” we give to describe various particular objects. The opposite of a “universal” is a “particular.”
universal quantifier – A term or symbol used to say something about an entire class. For example, “all” and “every” are universal quantifiers used in ordinary language. “All horses are mammals” means that if a horse exists, then it is a mammal. This statement does not say that any horses actually exist. The universal quantifier in symbolic logic is “∀.” See “quantifier” for more information.
Universal Reason – The mental or intelligent element of the universe conceived as a deity by the Stoics. The Stoics saw the entire universe as a god—matter is the body and Universal Reason is the mind. They believed that Universal Reason has a divine plan and determines that everything that happens in the universe happens for a good reason.
universal truth – A truth that always applies, such as the truth of the law of non-contradiction.
universalizability – Something applicable to everyone. Immanuel Kant thinks that moral principles must be universalizable in that everyone ought only act for a reason that one could will for someone else who is in the same situation. Universalizability seems necessary to avoid hypocrisy. For example, it would be wrong for us to attack another person just because she makes us angry because it would be wrong for other people to attack us just because we make them angry.
univocal – Something that completely lacks ambiguity and only has one possible meaning. “1+1=2” is a plausible example. “Univocal” is often contrasted with “equivocal.”
unstated assumption – An assumption of an argument that is not explicitly stated, but is implied or required for the argument to be rationally persuasive. The assumption could be a premise or conclusion. For example, consider the argument “the death penalty is immoral because it kills people.” This argument requires a hidden assumption—perhaps that “it’s always immoral to kill people.”
unstated conclusion – A conclusion of an argument that is not explicitly stated, but is implied. For example, consider the argument “the death penalty kills people and it’s immoral to kill people.” This argument implies the unstated conclusion—that the death penalty is immoral.
unstated premise – A premise of an argument that is not explicitly stated, but is implied or required for the argument to be rationally persuasive. For example, consider the argument “All humans are mortal because they’re mammals.” This argument requires an unstated premise—that “all mammals are mortal.”
useful fictions – Nonfactual concepts used for thought experiments or philosophical theories. For example, social contracts, the veil of ignorance, quater, grue, or the impartial spectator. These fictions can illuminate various philosophical issues or intuitions. Sometimes they present simplified situations to isolate important distinctions. For example, the concept of quater is of a chemical functionally equivalent to water that’s just like it in every single functional way, but it’s not H2O. Quater illuminates the intuition that the word ‘water’ does not merely refer to a chemical with certain functions because the chemical composition is also important.
usefulness – The importance of something for attaining a goal. See “instrumental value.”
utilitarianism – A moral theory that states that happiness or pleasure is the only thing with positive intrinsic value, and suffering or pain is the only thing with intrinsic disvalue. Right actions are determined by what action produces the greatest good compared to the alternatives, and all actions are wrong to the degree that they fail to produce the greatest good.
utility – (1) Relating to causing happiness. For example, the “principle of utility.” (2) The degree something causes pleasure or happiness, and reduces pain or suffering. An action that has the most utility causes more happiness (or pleasure) than the alternatives. See the “greatest happiness principle” for more information. (3) The property of being useful or of leading to a preferable state of affairs. See “utility function” for more information.
utility function – How much an agent values the outcome of various decisions she has to choose from based on incomplete information concerning how the world is or will be. For example, a person might have to choose whether or not to wear sunscreen depending on how long she expects to spend in the sun while spending time at the beach. Perhaps if she spends at least three hours at the beach, then odds are that she will get a sunburn unless she wears sunscreen. However, wearing sunscreen could be a waste of time and resources if she only spends one hour at the beach.
utility theory – A view concerning how we should make decisions based on how much we value the outcomes of various decisions we could make. Decisions that are likely to lead to a desirable outcome could be considered to be “rational” and those that are unlikely to do so could be said to be “irrational.” For example, trying to eat food by going to sleep would generally be irrational because the odds of it leading to a desired outcome are low.
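As a rough sketch of how such a calculation might look for the sunscreen example above (all utilities and probabilities here are invented for illustration):

```python
# Expected-utility comparison for the sunscreen example.
# Every number below is a made-up placeholder, not real data.
p_long_stay = 0.6  # assumed probability of spending three or more hours in the sun

utilities = {
    "wear sunscreen": {"long stay": 8, "short stay": 6},   # minor hassle either way
    "skip sunscreen": {"long stay": -5, "short stay": 7},  # sunburn if the stay is long
}

for decision, u in utilities.items():
    expected = p_long_stay * u["long stay"] + (1 - p_long_stay) * u["short stay"]
    print(f"{decision}: expected utility = {expected:.1f}")
```

With these made-up numbers, wearing sunscreen has the higher expected utility, so utility theory would count it as the rational choice.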
vagueness – Words and phrases are vague when it’s not clear where the boundaries are. For example, it’s not clear how many hairs can be on a person’s head for the person to be considered to be “bald.” Vagueness often makes it hard for us to know where to “draw the line.” “Vagueness” is often contrasted with “ambiguity.”
valid argument – An argument is valid when it has a logical form that assures us that true premises guarantee the truth of the conclusion. It is impossible for a valid argument to have true premises and a false conclusion at the same time. For example, consider the following valid argument—“If Socrates is a dog, then Socrates is a mammal. Socrates is a dog. Therefore, Socrates is a mammal.” The argument has the valid argument form “If A, then B; A; therefore, B.” “Valid” is the opposite of “invalid.” See “logical form” for more information.
valid formula – A statement that is true under all interpretations. For example, “A or not-A” is true no matter what proposition “A” stands for. If “A” stands for “nothing exists,” then the statement is “either nothing exists or it’s not the case that nothing exists.” In propositional logic, “valid formula” is a synonym for “tautology.”
valid logical system – A logical system that has valid rules of inference. If a logical system is valid, then it’s impossible for true premises to be used with the rules of inference to prove a false conclusion. See “rules of inference” for more information.
validity – (1) See “valid argument,” “valid formula,” or “valid logical system.” (2) Sometimes strong inductive arguments are said to be “inductively valid.” See “strong argument” for more information.
value – What we describe as good or bad. Positive value is also known as “goodness.” See “intrinsic value,” “extrinsic value,” “instrumental value,” and “inherent value.”
variable – (1) See “propositional variable” or “predicate variable.” (2) A symbol used to represent something else, or a symbol used to represent a range of possible things. For example, “x + 3 = y” has two variables that can represent a range of possible things (“x” and “y.”)
veil of ignorance – According to John Rawls, the best way to know which principles of justice are the most justified would require people to be in a position with full scientific knowledge but without knowing who they will be in society (rich, poor, women, men, etc.) We are to imagine that they would be risk-averse and would not grant people of any particular group unfair advantages.
Venn Diagram – A visual representation of a categorical syllogism that is generally used as a tool to determine if it’s logically valid or invalid. For example, consider the argument form, “All A are B. All B are C. Therefore, All A are C.” A Venn diagram proves this argument form to be valid because once the premises are diagrammed, A is shaded in everywhere other than where A overlaps with C, which is a representation of the conclusion.
verificationism – (1) The “verification theory of meaning.” The view that statements are meaningless unless it is possible to verify that they are true. For example, if we can’t prove that creationism is true, then it would be considered to be meaningless. (2) The “verification theory of justification.” The view that statements are unjustified unless we can somehow verify that they are true. For example, the statement “induction is reliable” would be said to be unjustified because it’s plausible that we can’t verify that it’s true. See the “problem of induction” for more information.
vicious circularity – An objectionable type of circular reasoning or argumentation. For example, “Socrates is a man because Socrates is a man” is a viciously circular argument. However, “coherentism” is the view that circular types of justification are not vicious as long as enough observations and/or beliefs are involved. Perhaps the more observations and beliefs are involved, the less vicious circular reasoning is. For example, an argument of the form “a because b, b because c, and c because a” might be less vicious than an argument with the form “a because b, and b because a.”
vicious regress – An objectionable infinite regress. Consider the view that beliefs are unjustified unless we justify them with an argument (or argument-like reasoning). In that case we will have to justify all our beliefs with arguments that also have premises that must also be justified via argumentation. This view requires that justified beliefs be justified via infinite arguments. This regress could be considered to be “vicious” insofar as the solution of a problem has the same problem it’s supposed to solve. However, “infinitism” is the view that this infinite regress is not vicious.
virtue – A positive characteristic, which is generally discussed in ethics. Courage, moderation, and wisdom are the three most commonly discussed virtues. Some people are also said to be “virtuous” for being “good people.” Virtues can describe traits that make something better. For example, we could talk about “theoretical virtues” that make some theories more justified than others, such as comprehensiveness. Another translation of “virtue” from ancient Greek philosophy is “excellence.”
virtue epistemology – Epistemic theories that (a) focus on normative considerations, such as values, norms, and/or virtues (i.e. positive characteristics) that are appropriately associated with being reasonable; and (b) judge people as the primary bearer of epistemic values (e.g. virtues, rationality, reasonableness, etc.). Virtue epistemologists often talk about “intellectual virtues”—positive characteristics of people that help them reason properly, such as being appropriately open-minded and skeptical. See “virtue responsibilism” and “virtue reliabilism” for more information.
virtue ethics – Ethical theories that primarily focus on virtues, vices, and the type of person we are rather than “right and wrong.” Courage, moderation, and wisdom are virtues that many virtue ethicists discuss.
virtue reliabilism – A type of “virtue epistemology” that views “intellectual virtues” in terms of faculties; such as perception, intuition, and memory.
virtue responsibilism – A type of “virtue epistemology” that views “intellectual virtues” in terms of traits, such as open-mindedness, skepticism, humility, and conscientiousness.
virtuous – Having virtues or exhibiting virtues close to an ideal. See “virtue” for more information.
vice – Negative traits, such as cowardice, addiction, and foolishness. Vices can describe a person’s character traits or objectionable traits that make something else worse. For example, we talk about “vicious circularity” and “vicious regresses.”
vicious – (1) Having vices or exhibiting virtues to a very low degree. See “vice” for more information. (2) Being evil, aggressive, or violent. (3) Severely unpleasant.
will to power – Friedrich Nietzsche’s speculative metaphysics and psychological views that he believes to be compatible with natural science, but using better metaphors. Will to power is an alternative to free will and an alternative to laws of nature. One interpretation of will to power is that instead of claiming that free will can cause our actions in an indeterministic way, we do whatever our internal driving force dictates; and instead of claiming that laws of nature force objects into motion, the internal driving force of each object dictates how it moves. Will to power relates to psychology in that the unifying driving force of people and nonhuman animals is to attain greater power (i.e. personal freedom, self-control, health, strength, and domination over others).
wisdom – The ability to have good judgment. Wisdom is sometimes used to refer to a person’s level of virtue and/or theoretical knowledge. See “practical wisdom” and “theoretical wisdom” for more information.
weak analogy – A fallacy that’s committed when an argument concludes that something is true based on an analogy. For example, swords and smoking can both kill people, but we can’t use that similarity to conclude that cigarette smoke is made of metal (just like swords). That would be a weak analogy. Not all arguments using analogies are fallacious. See “argument from analogy” for more information.
weak argument – Inductive arguments that have conclusions that are too strong given the evidence. The conclusion is not sufficiently supported by the evidence. For example, “three people took a drug last week and didn’t get sick, so the drug probably prevents people from getting sick.” Arguments that are too weak commit the “hasty generalization” fallacy.
weak atheism – A synonym for “soft atheism.”
weak conclusion – A modest conclusion. Weak conclusions require less evidence than strong conclusions. For example, consider the following argument—“If a light goes on at my neighbor’s house, then the best explanation is that a person is in the house. The light went on at my neighbor’s house. Therefore, the best explanation is that a person is in the house.” In this case the conclusion is weak and we would be unreasonable to demand very strong evidence in its favor as a result.
weakness of will – A situation of doing what one knows or believes to be morally wrong (i.e. the wrong thing to do, all things considered). For example, a person might think that stealing one hundred dollars from a friend is the morally wrong thing to do, and do it anyway.
well-formed formula – A properly-stated proposition of formal logic. For example, “a or b” is a well-formed formula, but “a b or” is not. (“a or b” could stand for any either-or statement, such as “either something exists or nothing exists.”)
WFF – See “well-formed formula.”
working hypothesis – A hypothesis that is provisionally accepted and could be rejected when new evidence is presented.
world-bound individuals – For something to only exist in one world. Some philosophers talk about there being multiple possible worlds, but they think each person is world-bound insofar as they can only exist in one world. For example, they might say that it was possible for Thomas Jefferson to become the first President of the United States because there’s a possible world where the person in that world who most resembles Thomas Jefferson became the first President of the United States. That possible world does not contain the actual Thomas Jefferson in it. “Trans-world identity” can be contrasted with “world-bound individuals.” See “modality” and “possible world” for more information.
world of ideas – The realm of the Forms. See “Plato’s Forms” for more information.
worlds interpretation of modality – A view of “necessity” and “possibility” based on worlds. For something to be necessary is for it to be true of all worlds, and for something to be possible is for it to be true at some world. It’s necessary that dogs are mammals because dogs are mammals at all worlds, and it’s possible for it to rain because there’s at least one world where it rains. The “worlds interpretation of modality” can be contrasted with the “temporal interpretation of modality.” See “possible worlds” and “modality” for more information.
worldview – A comprehensive understanding of everything (and how everything relates). Two worldviews could theoretically be equally justified and there might be no way to know which worldview is more accurate. Worldviews are likely influenced by cultures and are often influenced by religions. Worldviews are thought to help us interpret our experiences and influence perception.
worm theory – See “perdurantism.”
wrong – (1) Incorrect or inappropriate as opposed to “right.” For example, people who believe the Earth is flat are wrong. (2) “Morally wrong” as opposed to “morally right.” For example, murder is morally wrong.
youthism – Prejudice against younger people. A form of “ageism.” For example, the view that older people are generally more qualified for a job.
zeitgeist – German for “the spirit of an age.” It’s often used to refer to the biases, expectations, and assumptions of a group of people. Compare “zeitgeist” with “worldview.”
zombie – (1) Something that appears to be a human being or person that behaves exactly as we would expect a thinking person to behave, but actually has no psychological experiences or thoughts whatsoever. For example, a zombie could say, “I love coffee” but can neither taste coffee nor think it loves it. (2) In ordinary language, “zombie” refers to an undead person or walking corpse that either has no mind of its own or has an irresistible impulse to try to eat people. | http://ethicalrealism.wordpress.com/philosophy-dictionary-glossary/ | 13 |
In physics and nuclear chemistry, nuclear fusion is the process by which multiple atomic nuclei join together to form a heavier nucleus. It is accompanied by the release or absorption of energy. Iron and nickel nuclei have the largest binding energies per nucleon of all nuclei and therefore are the most stable. The fusion of two nuclei lighter than iron or nickel generally releases energy, while the fusion of nuclei heavier than iron or nickel absorbs energy. The opposite is true for nuclear fission. Nuclear fusion occurs naturally in stars.
Fusion reactions power the stars and produce all but the lightest elements in a process called nucleosynthesis. Whereas the fusion of light elements in the stars releases energy, production of the heaviest elements absorbs energy, so it can only take place in the extremely high-energy conditions of supernova explosions.
When the fusion reaction is a sustained uncontrolled chain, it can result in a thermonuclear explosion, such as what is generated by a hydrogen bomb. Reactions that are not self-sustaining can still release considerable energy, as well as large numbers of neutrons.
Research into controlled fusion, with the aim of producing fusion power for the production of electricity, has been conducted for over 50 years. It has been accompanied by extreme scientific and technological difficulties, and has not yet produced a workable design. To date, the only self-sustaining fusion reactions produced by humans have occurred in hydrogen bombs, where the extreme power of a fission bomb is necessary to begin the process. While some plans have been put forth to attempt to use the explosions of hydrogen bombs to generate electricity (e.g. PACER), none of these have ever moved far past the design stage.
It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. This is because all nuclei have a positive charge (due to their protons), and as like charges repel, nuclei strongly resist being put too close together. Accelerated to high speeds (that is, heated to thermonuclear temperatures), however, they can overcome this electromagnetic repulsion and get close enough for the strong nuclear force to be active, achieving fusion. The fusion of lighter nuclei, creating a heavier nucleus and a free neutron, will generally release more energy than it took to force them together—an exothermic process that can produce self-sustaining reactions.
The energy released in most nuclear reactions is much larger than that in chemical reactions, because the binding energy that holds a nucleus together is far greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 electron volts—less than one-millionth of the 17 MeV released in the D-T (deuterium-tritium) reaction. Fusion reactions have an energy density many times greater than nuclear fission—that is, per unit of mass the reactions produce far greater energies, even though individual fission reactions are generally much more energetic than individual fusion reactions—which are themselves millions of times more energetic than chemical reactions. Only the direct conversion of mass into energy, such as with the collision of matter and antimatter, is more energetic per unit of mass than nuclear fusion.
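The scale of that difference can be checked with a back-of-the-envelope calculation based on the figures quoted above (17.6 MeV per D-T reaction and the 13.6 eV hydrogen ionization energy as a chemical-scale benchmark); the Python sketch below is only a rough order-of-magnitude comparison:

```python
# Rough energy-per-mass comparison: D-T fusion versus a chemical-scale benchmark.
EV_TO_J = 1.602e-19    # joules per electron volt
AMU_TO_KG = 1.661e-27  # kilograms per atomic mass unit

fusion_energy = 17.6e6 * EV_TO_J   # energy released per D-T reaction
fusion_mass = 5.03 * AMU_TO_KG     # deuterium (~2 u) plus tritium (~3 u)

chemical_energy = 13.6 * EV_TO_J   # hydrogen ionization energy, an eV-scale benchmark
chemical_mass = 1.008 * AMU_TO_KG  # one hydrogen atom

fusion_density = fusion_energy / fusion_mass
chemical_density = chemical_energy / chemical_mass
print(f"fusion:   {fusion_density:.2e} J/kg")
print(f"chemical: {chemical_density:.2e} J/kg")
print(f"ratio:    {fusion_density / chemical_density:.1e}")
```

With these figures the fusion fuel yields on the order of 10^14 J per kilogram, several orders of magnitude beyond the chemical-scale benchmark.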
Building upon the nuclear transmutation experiments of Ernest Rutherford done a few years earlier, fusion of light nuclei (hydrogen isotopes) was first observed by Mark Oliphant in 1932, and the steps of the main cycle of nuclear fusion in stars were subsequently worked out by Hans Bethe throughout the remainder of that decade. Research into fusion for military purposes began in the early 1940s, as part of the Manhattan Project, but was not successful until 1952. Research into controlled fusion for civilian purposes began in the 1950s, and continues to this day.
A substantial energy barrier must be overcome before fusion can occur. At large distances two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the nuclear force which is stronger at close distances.
When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to other nucleons, but primarily to its immediate neighbors due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface area-to-volume ratio, the binding energy per nucleon due to the strong force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a fully surrounded nucleon.
The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as nuclei get larger.
The net result of these opposing forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei are not stable. The four most tightly bound nuclei, in decreasing order of binding energy, are 62Ni, 58Fe, 56Fe, and 60Ni. Even though the nickel isotope 62Ni is more stable, the iron isotope 56Fe is an order of magnitude more common. This is due to a greater disintegration rate for 62Ni in the interior of stars driven by photon absorption.
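This competition between the attractive (volume and surface) terms and the repulsive Coulomb term can be made concrete with the semi-empirical mass formula. The sketch below uses typical textbook coefficients, which vary slightly between sources and ignore shell effects, so the numbers are approximate:

```python
# Semi-empirical (liquid-drop) estimate of binding energy per nucleon:
# it rises with mass number, peaks near the iron/nickel region, then falls.
def binding_energy(Z, A):
    """Approximate total binding energy in MeV (pairing term omitted)."""
    a_vol, a_surf, a_coul, a_asym = 15.8, 18.3, 0.714, 23.2  # typical MeV coefficients
    N = A - Z
    return (a_vol * A
            - a_surf * A ** (2 / 3)
            - a_coul * Z * (Z - 1) / A ** (1 / 3)
            - a_asym * (N - Z) ** 2 / A)

for name, Z, A in [("C-12", 6, 12), ("O-16", 8, 16), ("Fe-56", 26, 56),
                   ("Ni-62", 28, 62), ("U-238", 92, 238)]:
    print(f"{name}: {binding_energy(Z, A) / A:.2f} MeV per nucleon")
```

The printed values rise from the light nuclei toward roughly 8.7 to 8.8 MeV per nucleon near iron and nickel and then fall again for uranium, matching the trend described above (very light nuclei such as helium-4 deviate from this smooth formula, as the next paragraph explains).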
A notable exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium, the next heavier element. The Pauli exclusion principle provides an explanation for this exceptional behavior—it says that because protons and neutrons are fermions, they cannot exist in exactly the same state. Each proton or neutron energy state in a nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons; so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states.
The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Not until the two nuclei actually come in contact can the strong nuclear force take over. Consequently, even when the final energy state is lower, there is a large energy barrier that must first be overcome. It is called the Coulomb barrier.
The Coulomb barrier is smallest for isotopes of hydrogen—they contain only a single positive charge in the nucleus. A diproton (a nucleus of two protons alone) is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products.
Using deuterium-tritium fuel, the resulting energy barrier is about 0.01 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV, about 750 times less energy. The (intermediate) result of the fusion is an unstable 5He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining 4He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier.
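The 14.1 MeV and 3.5 MeV figures follow from momentum conservation: the neutron and the helium nucleus leave with equal and opposite momenta, so the released energy divides in inverse proportion to their masses. A quick non-relativistic check:

```python
# How the 17.6 MeV from D + T -> He-4 + n divides between the two products.
Q = 17.6                              # MeV released in the reaction
m_neutron, m_alpha = 1.0087, 4.0026   # product masses in atomic mass units

E_neutron = Q * m_alpha / (m_alpha + m_neutron)   # the lighter product carries more energy
E_alpha = Q * m_neutron / (m_alpha + m_neutron)
print(f"neutron: {E_neutron:.1f} MeV, helium-4: {E_alpha:.1f} MeV")
```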
If the energy to initiate the reaction comes from accelerating one of the nuclei, the process is called beam-target fusion; if both nuclei are accelerated, it is beam-beam fusion. If the nuclei are part of a plasma near thermal equilibrium, one speaks of thermonuclear fusion. Temperature is a measure of the average kinetic energy of particles, so by heating the nuclei they will gain energy and eventually have enough to overcome this 0.01 MeV. Converting the units between electron-volts and Kelvin shows that the barrier would be overcome at a temperature in excess of 120 million Kelvin—a very high temperature.
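The conversion mentioned here amounts to dividing the energy by the Boltzmann constant; a minimal sketch (identifying temperature with E/k_B, so the exact prefactor depends on how the average energy is defined):

```python
# Temperature corresponding to a particle energy of roughly 0.01 MeV.
BOLTZMANN_EV_PER_K = 8.617e-5   # Boltzmann constant in eV per kelvin

barrier_eV = 0.01e6             # the ~0.01 MeV figure quoted above
print(f"{barrier_eV / BOLTZMANN_EV_PER_K:.2e} K")   # about 1.2e8 K, i.e. ~120 million K
```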
There are two effects that lower the actual temperature needed. One is the fact that temperature is the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy than 0.01 MeV, while others would be much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunneling. The nuclei do not actually have to have enough energy to overcome the Coulomb barrier completely. If they have nearly enough energy, they can tunnel through the remaining barrier. For this reason fuel at lower temperatures will still undergo fusion events at a lower rate.
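The first effect can be illustrated numerically: the fraction of nuclei in a Maxwell-Boltzmann energy distribution that lie above a given threshold stays surprisingly large even when the average thermal energy is well below it. The threshold and temperatures below are illustrative values only:

```python
from math import erfc, exp, pi, sqrt

def fraction_above(threshold_keV, kT_keV):
    """Fraction of a Maxwell-Boltzmann energy distribution above the threshold."""
    x = threshold_keV / kT_keV
    return erfc(sqrt(x)) + 2 * sqrt(x / pi) * exp(-x)

# Fraction of nuclei above a 10 keV threshold at several plasma temperatures.
for kT in (1.0, 5.0, 10.0):  # keV
    print(f"kT = {kT:4.1f} keV -> fraction above 10 keV = {fraction_above(10.0, kT):.2e}")
```

Quantum tunneling (the second effect) is not modeled here; it raises the effective reaction rate further by letting nuclei below the barrier fuse occasionally.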
The reaction cross section σ is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution with thermonuclear fusion, then it is useful to perform an average over the distributions of the product of cross section and velocity. The reaction rate (fusions per volume per time) is <σv> times the product of the reactant number densities: f = n1n2<σv>.
If a species of nuclei is reacting with itself, such as the DD reaction, then the product n1n2 must be replaced by (1/2)n².
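A minimal sketch of this rate formula; the densities and the <σv> values below are placeholder orders of magnitude, not measured data:

```python
# Volumetric fusion rate: f = n1 * n2 * <sigma v>, halved when a species reacts with itself.
def reaction_rate(n1, n2, sigma_v, same_species=False):
    """Fusion reactions per cubic metre per second."""
    rate = n1 * n2 * sigma_v
    return 0.5 * rate if same_species else rate

n_D = n_T = 5.0e19      # particles per cubic metre (illustrative plasma density)
sigma_v_DT = 1.0e-22    # m^3/s, placeholder order of magnitude for D-T near its optimum
sigma_v_DD = 1.0e-24    # m^3/s, placeholder for the much slower D-D reaction

print(f"D-T rate: {reaction_rate(n_D, n_T, sigma_v_DT):.2e} reactions/m^3/s")
print(f"D-D rate: {reaction_rate(n_D, n_D, sigma_v_DD, same_species=True):.2e} reactions/m^3/s")
```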
<σv> increases from virtually zero at room temperatures up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.
The significance of <σv> as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion.
Fuel confinement methods
One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity. The mass needed, however, is so great that gravitational confinement is only found in stars (the smallest of which are brown dwarfs). Even if the more reactive fuel deuterium were used, a mass greater than that of the planet Jupiter would be needed.
Since plasmas are very good electrical conductors, magnetic fields can also confine fusion fuel. A variety of magnetic configurations can be used, the most basic distinction being between mirror confinement and toroidal confinement, especially tokamaks and stellarators.
A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion fuel, causing it to simultaneously "implode" and heat to very high pressure and temperature. If the fuel is dense enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before it has dissipated. To achieve these extreme conditions, the initially cold fuel must be explosively compressed. Inertial confinement is used in the hydrogen bomb, where the driver is x-rays created by a fission bomb. Inertial confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser, ion, or electron beam, or a Z-pinch.
Some other confinement principles have been investigated, such as muon-catalyzed fusion, the Farnsworth-Hirsch fusor and Polywell (inertial electrostatic confinement), and bubble fusion.
A variety of methods are known to effect nuclear fusion. Some are "cold" in the strict sense that no part of the material is hot (except for the reaction products), some are "cold" in the limited sense that the bulk of the material is at a relatively low temperature and pressure but the reactants are not, and some are "hot" fusion methods that create macroscopic regions of very high temperature and pressure.
Locally cold fusion
- Muon-catalyzed fusion is a well-established and reproducible fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. It has not been reported to produce net energy. Net energy production from this reaction is not believed to be possible because of the energy required to create muons, their 2.2 µs half-life, and the chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.
Generally cold, locally hot fusion
- Accelerator based light-ion fusion. Using particle accelerators it is possible to achieve particle kinetic energies sufficient to induce many light ion fusion reactions. Of particular relevance to this discussion are devices referred to as sealed-tube neutron generators. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement which allows ions of these nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves. Despite periodic reports in the popular press by scientists claiming to have invented "table-top" fusion machines, neutron generators have been around for half a century. The sizes of these devices vary but the smallest instruments are often packaged in sizes smaller than a loaf of bread. These devices do not produce a net power output.
- In sonoluminescence, acoustic shock waves create temporary bubbles that collapse shortly after creation, producing very high temperatures and pressures. In 2002, Rusi P. Taleyarkhan reported the possibility that bubble fusion occurs in those collapsing bubbles (sonofusion). As of 2005, experiments to determine whether fusion is occurring give conflicting results. If fusion is occurring, it is because the local temperature and pressure are sufficiently high to produce hot fusion.
- The Farnsworth-Hirsch Fusor is a tabletop device in which fusion occurs. This fusion comes from high effective temperatures produced by electrostatic acceleration of ions. The device can be built inexpensively, but it too is unable to produce a net power output.
- Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been studied primarily in the context of making nuclear pulse propulsion feasible. This is not near becoming a practical power source, due to the cost of manufacturing antimatter alone.
- Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−30 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. Though the energy of the deuterium ions generated by the crystal has not been directly measured, the authors used 100 keV (a temperature of about 10⁹ K) as an estimate in their modeling. At these energy levels, two deuterium nuclei can fuse together to produce a helium-3 nucleus, a 2.45 MeV neutron and bremsstrahlung. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces.
- "Standard" "hot" fusion, in which the fuel reaches tremendous temperature and pressure inside a fusion reactor or nuclear weapon.
The methods in the second group are examples of non-equilibrium systems, in which very high temperatures and pressures are produced in a relatively small region adjacent to material of much lower temperature. In his doctoral thesis for MIT, Todd Rider did a theoretical study of all quasineutral, isotropic, non-equilibrium fusion systems. He demonstrated that all such systems will leak energy at a rapid rate due to bremsstrahlung, radiation produced when electrons in the plasma hit other electrons or ions at a cooler temperature and suddenly decelerate. The problem is not as pronounced in a hot plasma because the range of temperatures, and thus the magnitude of the deceleration, is much lower. Note that Rider's work does not apply to non-neutral and/or anisotropic non-equilibrium plasmas.
Astrophysical reaction chains
The most important fusion process in nature is that which powers the stars. The net result is the fusion of four protons into one alpha particle, with the release of two positrons, two neutrinos (which changes two of the protons into neutrons), and energy, but several individual reactions are involved, depending on the mass of the star. For stars the size of the sun or smaller, the proton-proton chain dominates. In heavier stars, the CNO cycle is more important. Both types of processes are responsible for the creation of new elements as part of stellar nucleosynthesis.
At the temperatures and densities in stellar cores the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ~ 15 MK) and density (~120 g/cm3), the energy release rate is only ~0.1 microwatt/cm3—millions of times less than the rate of energy release of an ordinary candle and thousands of times less than the rate at which a human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend strongly on temperature (~exp(-E/kT)), achieving reasonable rates of energy production in terrestrial fusion reactors requires temperatures 10–100 times higher than in stellar interiors, T ~ 0.1–1.0 GK.
Criteria and candidates for terrestrial reactions
In man-made fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. This implies a lower Lawson criterion, and therefore less startup effort. Another concern is the production of neutrons, which activate the reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic.
In order to be useful as a source of energy, a fusion reaction must satisfy several criteria. It must
- be exothermic: This may be obvious, but it limits the reactants to the low Z (number of protons) side of the curve of binding energy. It also makes helium-4 the most common product because of its extraordinarily tight binding, although He-3 and H-3 also show up;
- involve low Z nuclei: This is because the electrostatic repulsion must be overcome before the nuclei are close enough to fuse;
- have two reactants: At anything less than stellar densities, three body collisions are too improbable. It should be noted that in inertial confinement, both stellar densities and temperatures are exceeded to compensate for the shortcomings of the third parameter of the Lawson criterion, ICF's very short confinement time;
- have two or more products: This allows simultaneous conservation of energy and momentum without relying on the electromagnetic force;
- conserve both protons and neutrons: The cross sections for the weak interaction are too small.
Few reactions meet these criteria. The following are those with the largest cross sections:
(1)    D + T     →  4He (3.5 MeV) + n (14.1 MeV)
(2i)   D + D     →  T (1.01 MeV) + p (3.02 MeV)                  50%
(2ii)  D + D     →  3He (0.82 MeV) + n (2.45 MeV)                50%
(3)    D + 3He   →  4He (3.6 MeV) + p (14.7 MeV)
(4)    T + T     →  4He + 2 n + 11.3 MeV
(5)    3He + 3He →  4He + 2 p + 12.9 MeV
(6i)   3He + T   →  4He + p + n + 12.1 MeV                       51%
(6ii)  3He + T   →  4He (4.8 MeV) + D (9.5 MeV)                  43%
(6iii) 3He + T   →  4He (0.5 MeV) + n (1.9 MeV) + p (11.9 MeV)    6%
(7i)   D + 6Li   →  2 4He + 22.4 MeV                             __%
(7ii)  D + 6Li   →  3He + 4He + n + 2.56 MeV                     __%
(7iii) D + 6Li   →  7Li + p + 5.0 MeV                            __%
(7iv)  D + 6Li   →  7Be + n + 3.4 MeV                            __%
(8)    p + 6Li   →  4He (1.7 MeV) + 3He (2.3 MeV)
(9)    3He + 6Li →  2 4He + p + 16.9 MeV
For reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given.
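For the two-product reactions, the split quoted in the table follows from momentum conservation: the lighter product carries the larger share of the energy. The sketch below reproduces the D-T entries, using mass numbers as an approximation for the true nuclear masses.

```python
# Energy partition for a two-body final state produced essentially at rest:
# momentum conservation gives E1/E2 = m2/m1, so E1 = Q * m2 / (m1 + m2).
def split_energy(q_mev: float, m1: float, m2: float) -> tuple[float, float]:
    e1 = q_mev * m2 / (m1 + m2)
    e2 = q_mev * m1 / (m1 + m2)
    return e1, e2

# D + T -> 4He + n, Q = 17.6 MeV; mass numbers 4 and 1 used as an approximation.
e_alpha, e_neutron = split_energy(17.6, 4.0, 1.0)
print(e_alpha, e_neutron)   # ~3.5 MeV and ~14.1 MeV, matching the table
```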
Some reaction candidates can be eliminated at once. The D-6Li reaction has no advantage compared to p-11B because it is roughly as difficult to burn but produces substantially more neutrons through D-D side reactions. There is also a p-7Li reaction, but the cross section is far too low, except possibly when Ti > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p-9Be reaction, which is not only difficult to burn, but 9Be can be easily induced to split into two alphas and a neutron.
In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium in "dry" fusion bombs and some proposed fusion reactors:
- n + 6Li → T + 4He
- n + 7Li → T + 4He + n
To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the cross section. Any given fusion device will have a maximum plasma pressure that it can sustain, and an economical device will always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that <σv>/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum (a plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating). This optimum temperature and the value of <σv>/T² at that temperature is given for a few of these reactions in the following table.
fuel | T [keV] | <σv>/T² [m³/s/keV²]
Note that many of the reactions form chains. For instance, a reactor fueled with T and 3He will create some D, which is then possible to use in the D + 3He reaction if the energies are "right." An elegant idea is to combine the reactions (8) and (9). The 3He from reaction (8) can react with 6Li in reaction (9) before completely thermalizing. This produces an energetic proton which in turn undergoes reaction (8) before thermalizing. A detailed analysis shows that this idea will not really work well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate.
Neutronicity, confinement requirement, and power density
Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products Efus, the energy of the charged fusion products Ech, and the atomic number Z of the non-hydrogenic reactant.
Specification of the D-D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the T and 3He products. T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The D-3He reaction is optimized at a much higher temperature, so the burnup at the optimum D-D temperature may be low. It therefore seems reasonable to assume that the T, but not the 3He, gets burned up and adds its energy to the net reaction. Thus we will count the DD fusion energy as Efus = (4.03+17.6+3.27)/2 = 12.5 MeV and the energy in charged particles as Ech = (4.03+3.5+0.82)/2 = 4.2 MeV.
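The averaging just described can be written out explicitly. This is only a bookkeeping sketch of the convention stated in the text (the tritium is assumed to burn via D-T, the 3He is assumed not to burn).

```python
# D-D energy bookkeeping, averaging the two branches and adding the
# secondary D-T burn of the tritium (the convention used in the text).
E_DD_BRANCH_T   = 4.03   # MeV: D + D -> T + p
E_DD_BRANCH_HE3 = 3.27   # MeV: D + D -> 3He + n
E_DT            = 17.6   # MeV: secondary D + T -> 4He + n

e_fus = (E_DD_BRANCH_T + E_DT + E_DD_BRANCH_HE3) / 2   # ~12.5 MeV per D-D event
# Charged-particle energy: T(1.01) + p(3.02) + alpha(3.5) from one branch,
# 3He(0.82) from the other, averaged over the two branches.
e_ch = (1.01 + 3.02 + 3.5 + 0.82) / 2                   # ~4.2 MeV
print(e_fus, e_ch)
```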
Another unique aspect of the D-D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate.
With this choice, we tabulate parameters for four of the most important reactions.
fuel | Z | Efus [MeV] | Ech [MeV] | neutronicity
The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons like radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as (Efus-Ech)/Efus. For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium.
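A one-line check of the neutronicity definition for the first two fuels, using the Efus and Ech values discussed above.

```python
# Neutronicity = fraction of fusion energy carried by neutrons = (Efus - Ech) / Efus.
def neutronicity(e_fus: float, e_ch: float) -> float:
    return (e_fus - e_ch) / e_fus

print(neutronicity(17.6, 3.5))   # D-T: ~0.80
print(neutronicity(12.5, 4.2))   # D-D: ~0.66 (using the averaged values above)
```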
Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that the density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/(Z+1). Therefore the rate for these reactions is reduced by the same factor, on top of any differences in the values of <σv>/T². On the other hand, because the D-D reaction has only one reactant, the rate is twice as high as if the fuel were divided between two hydrogenic species.
Thus there is a "penalty" of (2/(Z+1)) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode," the "penalty" would not apply. There is at the same time a "bonus" of a factor 2 for D-D due to the fact that each ion can react with any of the other ions, not just a fraction of them.
We can now compare these reactions in the following table:
fuel | <σv>/T² | penalty/bonus | reactivity | Lawson criterion | power density
The maximum value of <σv>/T² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "reactivity" are found by dividing 1.24×10⁻²⁴ by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the D-T reaction under comparable conditions. The column "Lawson criterion" weights these results with Ech and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D-T reaction. The last column is labeled "power density" and weights the practical reactivity with Efus. It indicates how much lower the fusion power density of the other reactions is compared to the D-T reaction and can be considered a measure of the economic potential.
Bremsstrahlung losses in quasineutral, isotropic plasmas
The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that in aggregate neutralize the ions' bulk electrical charge and form a plasma. The electrons will generally have a temperature comparable to or greater than that of the ions, so they will collide with the ions and emit x-ray radiation of 10–30 keV energy (Bremsstrahlung). The sun and stars are opaque to x-rays, but essentially any terrestrial fusion reactor will be optically thin for x-rays of this energy range. X-rays are difficult to reflect but they are effectively absorbed (and converted into heat) in less than a millimeter of stainless steel (which is part of the reactor shield). The ratio of fusion power produced to x-ray radiation lost to walls is an important figure of merit. This ratio is generally maximized at a much higher temperature than that which maximizes the power density (see the previous subsection). The following table shows the rough optimum temperature and the power ratio at that temperature for several reactions.
The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the plasma is assumed to be composed purely of fuel ions. In practice, there will be a significant proportion of impurity ions, which will lower the ratio. In particular, the fusion products themselves must remain in the plasma until they have given up their energy, and will remain some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too.
The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product. This will not change the optimum operating point for D-T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D-T is even lower and the required confinement even more difficult to achieve. For D-D and D-3He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For 3He-3He, p-6Li and p-11B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma are considered, and rejected, in "Fundamental limitations on plasma fusion systems not in thermodynamic equilibrium" by Todd Rider. This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with.
- ↑ Carl R. Nave, “The Most Tightly Bound Nuclei,” HyperPhysics. Retrieved December 26, 2007.
- ↑ “Desktop fusion is back on the table,” Nature.com. Retrieved December 26, 2007.
- ↑ “Supplementary methods for “Observation of nuclear fusion driven by a pyroelectric crystal,” Nature.com. Retrieved December 26, 2007.
- ↑ B. Naranjo, J. K. Gimzewski and S. Putterman, “Observation of nuclear fusion driven by a pyroelectric crystal,” UCLA (2005). Retrieved December 26, 2007.
- ↑ Phil Schewe and Ben Stein, “Pyrofusion: A Room-Temperature, Palm-Sized Nuclear Fusion Device.” AIP (2005). Retrieved December 26, 2007.
- ↑ Michelle Thaller, “Coming in out of the cold: Cold fusion, for real,” Christian Science Monitor (June 6, 2005). Retrieved December 26, 2007.
- ↑ “Nuclear fusion on the desktop ... really!” MSNBC.com (April 27, 2005). Retrieved December 26, 2007.
- ↑ Todd Rider, “Fundamental limitations on plasma fusion systems not in thermodynamic equilibrium” Ph.D. thesis, Massachusetts Institute of Technology. Abstract available online. Retrieved December 26, 2007.
- Krane, Kenneth S. and David Halliday. 1988. Introductory Nuclear Physics. New York: Wiley. ISBN 047180553X
- Martin, Brian. 2006. Nuclear and Particle Physics: An Introduction. Hoboken, NJ: Wiley. ISBN 0470025328
- Poenaru, D. N. 1996. Nuclear Decay Modes. Fundamental and Applied Nuclear Physics Series. Philadelphia, PA: Institute of Physics. ISBN 0750303387
- Tipler, Paul and Ralph Llewellyn. 2002. Modern Physics. 4th ed. New York: W.H. Freeman. ISBN 0716743450
All links retrieved December 26, 2007.
- Fusion.org.uk – United Kingdom Atomic Energy Authority
- Fusion Power Associates
- SCKCEN – Belgian Nuclear Research Centre
- Impulse Devices – A small California based company researching tabletop sonic bubble fusion
- “Research Uses Sonofusion to Generate Temperatures Hot Enough For Fusion” by Kenneth Chang, The New York Times – MIT
- Joint European Torus (JET) – Nuclear Fusion Research Facility
- What is Nuclear Fusion? – NuclearFiles.org
- Nuclear Fusion Animation – Atomic Archive.com
- Nuclear Fusion Explained – Atomic Archive.com
- “Chaos could keep fusion under control” by Geoff Brumfiel, Nature
- Nuclear Fusion – a long shot? by Craig Mackintosh
| http://www.newworldencyclopedia.org/entry/Nuclear_fusion | 13
85 | Math Lesson Plans
Students will compare prices to determine the best buy. They will collect data from newspaper advertisements and enter the data on a spreadsheet and create a word processing document complete with graphs to support their findings.
English and Math (Statistics, spreadsheets and graphs)
After reading 'Casey at the Bat' and discussing the sports 'hero', students use the Internet to research real baseball heroes. In groups, students compare individual baseball players' hitting and/or pitching statistics and nominate one baseball hero per group. Students then justify their conclusions by creating spreadsheets and generating graphs of the data. Finally, students individually write a short editorial which incorporates a graph and tells why their player is a hero, based on overall achievement as well as statistics.
Students are given a list of items purchased with prices, deposits, and withdrawals in a checking account format. They will create a spreadsheet by entering the information into the appropriate cells and the formula that is necessary for computation.
Word Processing & Internet Research (Math History & Mathematicians)
Mathematicians and scientists are people too! Did you know that Sir Isaac Newton was inspired to study mathematics after a fight with a school bully? Leonhard Euler wrote more mathematics than anyone, even after he became totally blind. Great mathematical or scientific discoveries were made by real people who experienced the same misunderstandings and frustrations that we all do when learning a new concept. The purpose of this activity is for you to research a person who made a math or science discovery happen.
Students will create a spreadsheet to match a given sample. This will include adjusting column width and row height, changing alignment, changing size and style of print, changing number format. They will learn the concept of compound interest and write formulas to complete the investment chart. They will create a line graph displaying the results of the chart.
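For teachers adapting this lesson outside a spreadsheet, the compound-interest relation the students are asked to build, A = P(1 + r/n)^(n·t), can be sketched in a few lines of code; the principal, rate, and period below are made-up illustrative values, not part of the lesson.

```python
# Compound interest: A = P * (1 + r/n) ** (n * t)
def compound_amount(principal: float, annual_rate: float,
                    compounds_per_year: int, years: float) -> float:
    return principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)

# Illustrative values only: $1,000 at 5% compounded monthly for 10 years.
print(round(compound_amount(1000, 0.05, 12, 10), 2))   # ~1647.01
```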
Word Processing/ Spreadsheet (Scatter Plot Activity)
The students will compile and enter different sets of data on a spreadsheet to be used in constructing scatter plots. The data will be collected by completing different stations set up in the classroom. A word processing document that describes the different correlations found will also be created.
After completing a game-like simulation to determine a career and salary, student groups will collect information and decide what is the 'best' city in which to live. Students will individually choose a 'best' city, support their decision, and create a monthly budget for their 'best' city based on the salary from the simulation.
English and Math (graphs and spreadsheets)
Students in groups of two will create a newspaper using topics from an assigned chapter. Students will use ClarisWorks word processing, insert graphics, and create graphs using spreadsheets.
Students will use the Geometer's Sketchpad program to come up with a design given certain guidelines.
The class will research statistical information involving racial breakdown of the United States population by gender and ethnicity. These breakdowns could consist of issues such as population changes, number of AIDS cases, death rates and educational levels. This will give students an opportunity to use the Internet to access and analyze current data.
Part of the Fairfax County Numerical Reasoning POS is for the students to be able to create and design a spreadsheet. This lesson will familiarize students with searching for information on the Web. Integrating the data collected from the Web into a spreadsheet adds a sense of interest and ownership to the project. The students are researching a topic they are familiar with and which they like.
Integers and Literature: Using Timeler 4.0 Software
As an interdisciplinary project, students will demonstrate their understanding of the placement of integers on a number line. Students will place integers on the timeline corresponding to zero, the date of their birth. Events will relate to significant family and personal events before they were born and during their lives. In addition, students will place the publication or copyright dates of literary selections they have read throughout the year on the timeline.
Word Processing/Spreadsheet (Statistics/Scatter plots)
Students will test a hypothesis to see if there is a direct correlation between number of hours of TV viewing and academic grade. They will collect and compile data to enter on a spreadsheet and create a word processing document which will include data and graphs to support their hypothesis.
Geometer's Sketchpad and a computer
Students will investigate the rigid motions of reflection, rotation, translation, and dilation.
Computer with Microsoft Excel
A walk-through introducing the basics of spreadsheet development using Microsoft Excel.
The lesson includes research from web sites, copying and pasting graphics and/or text from the Internet to a word processing document and composing a final draft of a booklet.
Computers with Internet access, TI-82 or TI-83 graphing calculators
The students will research box plots on the Internet and then gather real life statistics. They will analyze the collected statistics by making multiple box plots on the TI-83 calculator and comparing the one variable statistics.
Students will create and maintain a spreadsheet budget based on a predetermined salary. They will have an opportunity to make "investments and purchases."
Computer with Data Explorer Software
Test scores from a recent test have been compiled from 3 classes and the data is given to the students for their analysis. In the computer lab the students will use Data Explorer to input the data and display as 3 stacked box plots. Students will analyze the data and write a paragraph of their analysis.
Candy that is about the same size and weight, another type of weight (heavier candy or dimes), two meter sticks or yardsticks per group, a small cup (I used the containers that hold a roll of film), wire, masking tape, graph paper, loose-leaf paper (for notes and to record information), straight edge, pencils, graphing calculators, ClarisWorks or Microsoft Word
Description of Lesson (includes context):
Before conducting this activity, students should have experience creating tables of values and plotting points on a coordinate plane. This lesson is a concrete example of linear progression. The abstract ideas of slope and intercepts are represented in this activity as the constant weights (slope) and the length of the spring with no weights (y-intercept). Students will have the opportunity to create tables of values and graph lines using paper and pencil and the graphing calculator. Students will use either ClarisWorks or Microsoft Word to answer questions focusing on the effects of slope and intercepts on the graph of a line.
Math Lesson Plans
Geometry Lesson Plans
Geometry lesson plans include lessons on how to draw basic 3D shapes in order to become better familiar with them. Effective mathematics lesson plans also feature geometric basics such as an introduction to the number line, positive and negative numbers, and graphing points on a line. A lot of the lessons simply focus on basic shapes such as triangles, but also on all the different concepts that can be attributed to them. Here is a list of some of the most effective geometry lesson plans and also the materials on which to base lessons.
· Math Forum: A collection of lesson plans centered around geometry.
· Teach-nology: Web page that features long list of links to geometry lessons.
· Area & Perimeter using Geoboards: Lesson for middle schoolers on understanding area and perimeter.
· Educator’s Reference Desk: Lesson plans for many grade levels on geometry.
· Mathematics CLG: Geometry lesson that comes from the Maryland school system.
· Illuminations: Geometry lesson centering on the Golden Ratio.
· Wolfram: Downloadable lesson on vector geometry solutions.
· ScienceU: Links to many formulas that help students calculate areas, angles, intercepts, and more.
· PBS: Geometry lesson based on the Ken Burns documentary on baseball.
Fractions Lesson Plans
Fractions are one of the most important math concepts for students to master. They appear throughout mathematics, from division to calculating interest to investing in the stock market. Fractions lesson plans usually center on operations with fractions, the concept of equivalent fractions, and converting them to decimals. Here is a list of some of the best fractions lesson plans to help students improve their ability with fractions.
· Fraction Shapes: Lesson devoted to teaching students about fractions by incorporating shapes.
· Identify Fractions: Lessons on how to identify fractions.
· IXL Math: Geometry lesson for seventh-graders on the traversal of parallel lines.
· Instructor Web: Lessons for middle school students on multiplying fractions.
· Fractions - Word Problems Lesson Plan: A lesson on fractions that uses word problems to teach students in middle school.
· AAA Math: Lesson involving reciprocals of fractions.
· Emathematics: Lesson on reducing or simplifying fractions.
· Equivalent Fractions Game: Lesson for students in seventh to ninth grade on equivalent fractions.
· Utah Education Network: Fractions lesson using dominoes to learn about common denominators.
· Math can take you places: A fractions lesson involving real-world utilization of fractions.
Algebra Lesson Plans
Algebra is best thought of as the branch of math focused on the rules of relations and operations, including the concepts and constructions that emerge from them. Algebra generally comes in two forms, elementary algebra and abstract algebra. Elementary algebra concerns itself with polynomials, while abstract algebra studies binary operations on sets and their properties. Here is a list of links to excellent algebra lesson plans that can be used as lessons themselves, or just as the basis for lessons you create yourself.
· Math for Morons like us: Algebra lessons presented in an easy-to-understand way for middle schoolers.
· Math Mediator: Lessons on algebra about quadratics and square roots.
· Solving Basic Algebra: Quick lesson on how to solve the easiest of algebraic problems.
· Purplemath: Lesson on basic algebra involving canceling terms or units.
· Algebra Help: Explains the basic of algebra in an easy lesson.
· Lesson 1: Algebra lesson that includes information on how to use your calculator.
· Order of Operations: Comprehensive lesson on the order of operations.
· Lesson Tutor: Algebra lesson for ninth-graders that focuses on translating words to numbers.
· Lesson plan for balancing equations: Algebra lesson about basic equations.
· Ed Helper: Lessons in basic algebra delivered in a test-like format.
Other Mathematics Lesson Plans
Other mathematics lesson plans cover any and all areas of math that do not fall into the categories of geometry, fractions, or algebra. This can include basic arithmetic, analysis, or the more advanced field of calculus. These lesson plans range from the very easy, like the arithmetic encountered in grade school, to the very sophisticated, like the calculus encountered in higher learning. Here is a list of links to lesson plans focused on other areas of math.
· Arithmetic Lesson Plans: Web page of many links dedicated to arithmetic lessons.
· Practicing Arithmetic: Lessons that will sharpen students’ skills in arithmetic.
· The Calculus Page Problems List: Long list of calculus problems to help out students in understanding this concept.
· Calculus 1: Lessons detailing introductory calculus.
· Special Functions: Features a lesson for those needing help with calculus.
· Energy Conservation: A math lesson for middle schoolers that focuses on energy conservation.
· Mathematical Analysis of a DC Motor: Lesson that uses math analysis to study a motor.
· Thirteen: Another math lesson that relies on mathematical analysis to develop solutions.
· Sixth Grade Math: An enormous list of links to lessons of all kinds of concepts for sixth-graders.
· Math 6 Spy Guys: Site that is committed to providing many math lessons across a plethora of concepts for middle schoolers. | http://www.middleschool.net/lesspln/mathF/mathlp.htm | 13 |
55 | of the Earth and the Missing Heat Source Mystery
Global heat flow estimates range from 30 to 44 TW (Table 1a; Reference List A). Estimates of the radiogenic contribution (from the decay of U, Th and K in the mantle), based on cosmochemical considerations, vary from 19 to 31 TW (Table 1b). Thus, there is either a good balance between current input and output, as was once believed (“the Chondritic Coincidence”), or there is a serious missing heat source problem, up to a deficit of 25 TW. Attempts to solve the perceived deficit problem include invoking secular cooling and deep, hidden heat-source layers (e.g., Kellogg et al., 1999).
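The possible size of the imbalance follows directly from the two ranges just quoted; the sketch below simply brackets it (all values in terawatts, taken from the text).

```python
# Bracket the possible imbalance between surface heat flow and radiogenic production.
heat_flow_tw  = (30.0, 44.0)   # global surface heat flow estimates (from the text)
radiogenic_tw = (19.0, 31.0)   # cosmochemical estimates of radiogenic production

max_deficit = heat_flow_tw[1] - radiogenic_tw[0]   # 25 TW of "missing" heat
min_deficit = heat_flow_tw[0] - radiogenic_tw[1]   # -1 TW, i.e. effectively balanced
print(max_deficit, min_deficit)
```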
In many studies it has been assumed that there should be a steady-state balance, or close to it, between current radioactive heat production in the mantle and current heat flow, and that very little heat is generated in the upper mantle. This view ignores mass balance considerations, other sources of energy (Reference List C), secular cooling, delays in the system, and the wide range of radioactive contents of upper mantle materials. Among the problems commonly cited with various models of mantle thermal history are very high mantle temperatures in the Archaean, survival of ancient cratonic roots, komatiitic temperatures, over-heating of the lower mantle, freezing of the core, an imbalance between the helium and heat flow budgets and a perceived missing heat source. A common perception is that there is not enough radioactivity in the upper mantle to provide a significant contribution to current heat flow. These problems can be avoided by recognizing that:
- heat flow is a three-dimensional problem and that heat is diverted into the ocean basins;
- continents tend to move towards cold downwelling mantle;
- at high mantle temperatures water is removed from both the mantle and the lithosphere, stiffening the system;
- the mantle is probably chemically layered, extending the cooling time, and
- the upper mantle cannot be entirely composed of ultradepleted MORB and barren peridotite.
It is useful to compile the sources of energy and global heat flow before addressing perceived problems. It is also useful to investigate possible shortcomings of various theoretical models. It turns out that the unknown hydrothermal contribution to cooling of old oceanic lithosphere and the temperature dependence of thermal conductivity are key issues (Hofmeister 2003). If lattice and radiative conductivity are as high as currently calculated, heat that leaves the core is mostly conducted down the core adiabat (Gubbins, 1977), and contributes to the bulk mantle energy budget, rather than being piped directly to the surface, e.g., in plumes (Stacey & Stacey, 1999). The present-day heat flow through the surface of the Earth is consistent with energy sources in the interior, including secular cooling, the gravitational contraction associated with cooling, and decline of radioactive abundances. Theoretical corrections to observed heat flow, and theoretical estimates of expected oceanic heat flow, are both uncertain and model-dependent. Within the uncertainties of data and theory, there is no missing heat source paradox or need for a substantial contribution to observed heat flow from the deep mantle.
Distribution of radioactive elements
Radioactive decay is only one of the sources of mantle heat flow (Reference List C). In most current models of geodynamics and geochemistry the lower mantle is assumed to have escaped accretional differentiation and to retain primordial values of radioactivity and noble gases. The crust is assumed to have been derived from only the upper mantle, making it extraordinarily depleted in radioactive and volatile elements. The heat productivity of the most U-poor mid-ocean ridge basalts is taken as an upper bound on the heat productivity of the whole mantle above the 650-km phase change. This combination of assumptions, plus neglect of non-radiogenic sources of heat, has led to the view by some that there is a missing energy source in the Earth.
Mass balance calculations and the 40Ar content of the atmosphere show that most, if not all, of the mantle must have been processed and degassed in order to explain the concentrations of incompatible and volatile elements in the outer layers of the Earth (Reference List B). The planetary accretional zone-refining process results in an outer shell that contains most of the U, Th and K, at a level about three times chondritic (if the outer shell is equated with the present mantle above 650 km), from which the proto-crust and basaltic reservoirs were formed. The residual (current) upper mantle retains radioactive abundances greater than chondritic while the bulk of the mantle, including the lower mantle, is essentially barren (Anderson, 1989). The outer shells of Earth contain both depleted and enriched sources and probably also contain the bulk of the terrestrial inventory of noble gases (Meibom & Anderson, 2003).
In a layered model the delay between heat generation and conduction through the surface, and the secular cooling of the Earth, extend the thermal evolution. As the mantle cools, it becomes less molten, degasses less readily and probably becomes more volatile-rich, since it is now a sink for CO2 and water. It thus experiences different styles of convection and cooling. These factors are not considered in currently popular models used by convective modelers, leading to apparent paradoxes.
In other recent models the entire mantle is assumed to have escaped chemical differentiation, except for crust extraction, and to convect as a unit, with material circulating freely from top to bottom. Recycled material is quickly stirred back into the whole mantle. In these models: heterogeneities are embedded in a depleted matrix, the mantle is treated as uniformly heterogeneous and relatively cold, and the hot thermal boundary layer above the core plays an essential role in piping heat to the surface. Convection is assumed to be an effective homogenizer. Recent amendments to this idea assert that there must be a radioactive-rich layer deep in the mantle, but this is based on unlikely assumptions about upper mantle radioactivity. Most convection calculations ignore accretional differentiation and the effects of pressure and temperature on thermal properties such as thermal expansion and thermal conductivity. Current models assume that heat from the core and heat from the mantle are decoupled in the sense that plumes remove core heat and plate tectonics removes mantle heat (e.g., Stacey, 1992).
Although there is no missing energy when direct heat flow data are used, there is a mismatch between oceanic heat flow and theoretical predictions from the cooling plate model. In some compilations, about 12 TW is added to the measured global heat flow to match the square-root-of-age predictions. This is the same size as the perceived “missing” heat energy (Hofmeister, 2003).

Tables 1a & b summarize estimates of global heat flow and heat sources in the mantle (Reference Lists A & C). The references are subdivided by subject. See also other sections on heat flow and temperature and mechanisms of heat loss.

Heat is transported through the interior of the Earth by radiation, conduction and convection. Eventually, all heat flows through the surface boundary layer, primarily by lattice conductivity but some by dikes, volcanoes and hydrothermal activity.
The surface boundary condition has changed with time. As a planet cools it evolves from a magma ocean regime (the result of accretional heating and gravitational differentiation), to a thick buoyant surface layer (unsubductable basalt and refractory residue) with heat pipes and heat sheets (permeable plates), to plate tectonics, and finally to stagnant lid. The present continental crust, asthenosphere and olivine-rich (pyrolitic) upper mantle are most likely reprocessed residues from early differentiation. Currently, the surface boundary layer is a conduction boundary layer with an average thickness of 100-200 km. It is pierced in places by volcanoes that deliver a relatively small amount of heat to the surface via magma. The cooling of the mantle is mainly accomplished by the cooling of the surface plates.
In early Earth history a transient magma ocean allowed magmas to transfer their heat directly to the atmosphere. As buoyant material collected at the top, the partially molten interior became isolated from the surface. Magma, however, could break through and create “heat pipes” to carry magma and heat to the surface. Io and Venus may utilize this mechanism of heat transfer. The surface boundary condition in these cases can be viewed as a permeable plate. Present day plates can be penetrated by sills and dikes and are therefore partially permeable. As a planet cools further it may jump to a stagnant-lid state with a convecting interior. Mars and the Moon may be in such a state.
Today the Earth’s interior is cooling by a combination of thermal conduction through the surface and the advection of cold material to the interior by slabs and delaminated continental crust. The heat generated in the interior of the Earth, integrated over some delay time, is transferred to the surface conduction boundary layer by a combination of solid-state convection, fluid flow, radiation and conduction. Crustal radioactivity is a major contributor to continental heat flow. Delaminated crustal blobs may contain much of the radioactivity in the mantle.
In the ocean basins the main contribution to the observed heat flow is the transient effect of the formation of the oceanic crust (Reference List F). Theoretically, this conducted heat flow should wane as the square-root of age but surprisingly, it is nearly constant. The background heat flow is nearly the same as under continents. In contrast to predictions from the plate and cooling half-space models there is little correlation of heat flow with age or depth of the ocean. There is also little evidence that hotspots or swells are associated with high heat flow (see Heatflow page; & Stein, 2003). This indicates that the underlying mantle is not isothermal or homogeneous.
Thermal conductivity decreases rapidly with increasing temperature. The cold outer shell of the Earth is not simply a cooling boundary layer of uniform composition and conductivity, losing heat by conduction alone, as assumed in cooling-plate and half-space models (Reference List F). The fact that heat flow is not a function of square-root of age suggests that some process affects the near-surface thermal gradient without affecting the integrated density of the outer layers. Oceanic swells, on the other hand, apparently require a redistribution of mass and density. They do not appear to be purely thermal in origin. They are probably largely held up by the buoyancy of depleted harzburgite (the perisphere).
Heat loss from the continents
The mean heat flow from continents is about 80 mW/m2 (Reference List A). The heat flow that can be attributed to the continental crust itself is about half of this, or 32-40 mW/m2, though recent estimates of the average U, Th and K concentrations vary by almost a factor of 2. The other half thus comes from the mantle. The continental crust therefore accounts for 5.8 - 8 TW of the global heat flow.
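Converting the crustal heat-flow share into a global power only requires the continental area; the area used in the sketch below (~2×10^14 m²) is a standard round number and an assumption on my part, not a figure from the article, which is why the result is only approximately the 5.8-8 TW quoted.

```python
# Power = heat flux * area. Continental area assumed to be ~2.0e14 m^2.
CONTINENTAL_AREA_M2 = 2.0e14

def flux_to_power_tw(flux_mw_per_m2: float, area_m2: float = CONTINENTAL_AREA_M2) -> float:
    """Convert a heat flux in mW/m^2 over a given area into terawatts."""
    return flux_mw_per_m2 * 1e-3 * area_m2 / 1e12

print(flux_to_power_tw(80))                          # total continental heat loss: ~16 TW
print(flux_to_power_tw(32), flux_to_power_tw(40))    # crustal share: ~6.4-8 TW
```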
Heat loss from the oceans
Because of sparse coverage, heat flow data must be averaged by age and by area of the seafloor. These estimates give about 62 mW/m2 for the average oceanic heat flow (Hofmeister, 2003). About half of this is a transient effect from the plate-forming process and half is the background flux from the mantle. Measured oceanic heat flow varies from about 300 to 25 mW/m2 with 45 to 55 mW/m2 being a representative range through old oceanic crust. The theoretical value for half-space cooling is ~ 101 mW/m2 (Pollack et al., 1993) but this is sensitive to values adopted for thermal conductivity of the mantle and crust. Theoretical plate cooling model values are infinity at zero age and 100 mW/m2 at 30 Ma. The theoretical value at large time depends on preselected parameters and boundary conditions. Theoretical “corrections” to measured heat flow are thus uncertain.
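The square-root-of-age prediction referred to here comes from the conductive half-space solution, q(t) = k·ΔT/√(πκt). The parameter values in the sketch below (conductivity, temperature contrast, diffusivity) are typical textbook choices rather than values from the article, and the result is quite sensitive to them, which is part of the point being made about model dependence.

```python
import math

def halfspace_heat_flow_mw(age_myr: float,
                           k: float = 3.3,         # thermal conductivity [W/m/K] (assumed)
                           delta_t: float = 1350,  # mantle-surface temperature contrast [K] (assumed)
                           kappa: float = 1.0e-6   # thermal diffusivity [m^2/s] (assumed)
                           ) -> float:
    """Conductive heat flow of a cooling half-space, in mW/m^2."""
    t_seconds = age_myr * 1.0e6 * 3.156e7
    return 1000.0 * k * delta_t / math.sqrt(math.pi * kappa * t_seconds)

for age in (1, 30, 60, 100):
    # roughly 450, 80, 60 and 45 mW/m^2 with these assumed parameters
    print(age, round(halfspace_heat_flow_mw(age), 1))
```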
Near-ridge hydrothermal cooling accounts for about 1 TW of the global heat flux. The extent of hydrothermal cooling due to off-axis circulation of cold water is usually taken as the difference between the predictions of the plate cooling model and the observed conducted heat flow, but there is no theoretical basis for this. The plate cooling model predicts higher heat flows than observed out to ~ 50 Ma, and lower than observed thereafter. Some of the differences between the measured and theoretical heat flows are due to causes other than hydrothermal circulation, including off-axis intrusions and lateral variability in mantle potential temperature.
The mean oceanic heat flux is sometimes determined by fitting the parameters of a half-space or plate cooling model to bathymetry and heat-flow data on older oceanic lithosphere. This approach substantially overestimates the heat flux for ages less than 40 My, but the discrepancy persists out to 60 My. The calculated Quaternary flux exceeds the measurements by 500% (Hofmeister 2003). The mean calculated oceanic value (Pollack et al., 1993) is about double the median observed flux of 65 mW/m2 for the oceans and 61 mW/m2 for the continents. Theoretical cooling models generally use a constant conductivity and ignore its temperature dependence. In addition, the hydrothermal contribution has been overestimated. According to Hofmeister (2003), both the theoretical heat flux values and the hydrothermal contribution should be reduced. In addition to the conducted heat, other processes which affect the heat flow as a function of age include intrusion, underplating, stress changes in the plate and serpentinization, which are problematic to assess. Table 1a includes estimates of near-ridge hydrothermal circulation but otherwise tabulates the measured conducted heat flow values.
Background variations in heat flow
Half-space and plate models attribute all variations in bathymetry and heat flow to conductive cooling as a function of time. However, mantle convection and plate tectonics could not exist with, and are inconsistent with, an isothermal mantle. Lateral temperature variations of the mantle below the plate of at least ± 100°C are expected. For a 100-km-thick thermal boundary layer this implies heat flow variations of about ±15% superposed on normal cooling curves. Variations in permeability at the top of the plate cause variations in the hydrothermal component of heat flow, and this component of heat flow must be allowed for separately. The important point here is that temporal changes in surface heat flux must be considered before concluding that an “energy crisis” exists.
Global heat flow variations can be estimated using seafloor age reconstructions [Reference list; Loyd et al. (2007) and references therein]. Loyd et al. (2007) show that heat flow has decreased by 0.15% every million years during the Cenozoic due to a decrease in the area of ridge-proximal oceanic crust. This is an order of magnitude faster than estimates based on smooth, parameterized cooling models. This implies that heat flow experiences short-term fluctuations associated with plate tectonic cyclicity.
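Compounding the quoted 0.15% per million years over the Cenozoic gives the size of the effect; the 65 My duration below is the usual round figure for the Cenozoic and is my assumption, not a number stated in the article.

```python
# Cumulative change in global heat flow from a 0.15% decrease per million years.
RATE_PER_MYR = 0.0015
CENOZOIC_MYR = 65  # assumed duration of the Cenozoic in Myr

remaining_fraction = (1 - RATE_PER_MYR) ** CENOZOIC_MYR
print(round(1 - remaining_fraction, 3))   # ~0.09, i.e. roughly a 9% decline
```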
The cooling plate model assumes that the mantle beneath the plates is isothermal and homogeneous and has the same potential temperature as midocean ridge basalts. Tomography shows that the subplate mantle is not.
Estimates of terrestrial abundances of the heat-producing elements depend on meteorite compositions (Reference List C). Carbonaceous chondrites are the usual choice, but enstatite achondrites and meteorite mixes are also used. The Earth is unlikely to match any given class of meteorite since it condensed and accreted over a range of temperatures from a range of starting materials. The refractory elements are likely to occur in the Earth in cosmic ratios, but the volatile elements are depleted. The large metallic core indicates that the Earth, as a whole, is a reduced body, although at least the crust and the outer shells of the mantle are oxidized. Enstatite achondrites match the Earth in the amount of reduced iron (oxidation state) and in oxygen isotopic composition and have been used to estimate terrestrial abundances.
Estimates of the heating potential of the Bulk Silicate Earth (BSE = crust + mantle) range from 12.7 to 31 TW, although most authors obtain values in a much more restricted range (Table 1b). These are present-day instantaneous values. Heat conducted through the surface was generated some time ago, when the radioactive abundances were higher, so estimates of radioactive heating, based on current radioactive contents, are lower bounds on the contribution of radioactive elements to the present-day surface heat flow, assuming that the estimates of U, Th, and K are realistic. The allowable variation in U and Th contents of the mantle is a large fraction of the postulated discrepancy between production and heat flow. Production of heat can be much larger if K contents have been underestimated. It is of interest that, because of the short half-life of 40K, most of the 40Ar in the atmosphere was generated in early Earth history. More efficient degassing then may partly explain the large fraction of the terrestrial 40Ar that is in the atmosphere.
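The point that current abundances are lower bounds can be made quantitative with the standard decay constants. The sketch below uses textbook heat-production constants and half-lives (Turcotte & Schubert-style values) and Bulk Silicate Earth concentrations of roughly 20 ppb U, 80 ppb Th and 240 ppm K; all of these numbers, and the BSE mass, are assumptions introduced for illustration rather than figures from the article.

```python
import math

BSE_MASS_KG = 4.0e24   # assumed mass of the Bulk Silicate Earth [kg]
# isotope: (heat production of the pure isotope [W/kg], half-life [Gyr],
#           kg of isotope per kg of rock for the assumed BSE composition)
ISOTOPES = {
    "U238":  (9.46e-5, 4.47,  20e-9 * 0.9928),
    "U235":  (5.69e-4, 0.704, 20e-9 * 0.0072),
    "Th232": (2.64e-5, 14.0,  80e-9),
    "K40":   (2.92e-5, 1.25,  240e-6 * 1.17e-4),
}

def radiogenic_power_tw(age_ga: float) -> float:
    """Radiogenic heat production (TW) `age_ga` billion years before present."""
    total_w_per_kg = 0.0
    for h, t_half, c_now in ISOTOPES.values():
        decay_const = math.log(2) / t_half          # per Gyr
        total_w_per_kg += h * c_now * math.exp(decay_const * age_ga)
    return total_w_per_kg * BSE_MASS_KG / 1e12

print(round(radiogenic_power_tw(0.0), 1))   # present day: ~20 TW, inside the 19-31 TW range
print(round(radiogenic_power_tw(3.0), 1))   # ~3 Ga ago: more than twice as high
```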
The amount of radioactivity in the crust must be subtracted out in order to obtain mantle abundances and heat productivities. Using 8 TW as the best estimate of crustal productivity gives < 23 TW as the current energy output from mantle radioactivity. Heat from the core (about 9 TW), solid Earth tides (1 to 2 TW) and thermal contraction (~ 2 TW) are non-radiogenic sources that may add 12-13 TW to the mantle heat flow, about the same as the current (non-delayed) mantle radiogenic contribution. The radiogenic contribution can be increased by about 25% if it takes 1 Gyr to reach the base of the lithosphere. On top of all this is secular cooling of the mantle. In a chemically stratified mantle, the outer layers cool much faster than the deeper layers. If cooling is confined to the upper 1,000 km a temperature drop of 50 K/Ga corresponds to a heat flow of 3 TW. Cooling rates of twice this value may be plausible (Reference List D).
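The 3 TW figure can be reproduced with a simple heat-capacity estimate. The shell mass and specific heat below (about 1.7×10^24 kg for the mantle above roughly 1,000 km depth, and 1,200 J/kg/K) are my own rough assumptions, not values given in the article.

```python
# Heat released by secular cooling: Q = M * c_p * dT/dt.
SECONDS_PER_GYR = 3.156e16
M_UPPER_MANTLE_KG = 1.7e24   # assumed mass of the mantle above ~1,000 km depth
C_P = 1200.0                 # assumed specific heat [J/kg/K]

def cooling_power_tw(cooling_rate_k_per_gyr: float) -> float:
    """Power (TW) released by cooling the shell at the given rate."""
    dT_dt = cooling_rate_k_per_gyr / SECONDS_PER_GYR
    return M_UPPER_MANTLE_KG * C_P * dT_dt / 1e12

print(round(cooling_power_tw(50), 1))    # ~3 TW, as quoted in the text
print(round(cooling_power_tw(100), 1))   # ~6 TW for twice the cooling rate
```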
There thus appears to be no need for any exotic heat sources or hidden sources of radioactivity in the mantle. This conclusion is independent of the uncertain contribution of hydrothermal circulation to the surface heat flow. There are implications, however, for the temperatures of the Archean mantle and the style of convection, and the mechanisms of heat removal (Reference List D). The present styles of mantle convection and plate tectonics are unlikely to have operated in the Archean (Hamilton, 2003).
Table 1: a) Measurements and estimates of global heat flow. Some of these are from a recent review by Hofmeister (2003) and some are from standard sources (e.g., AGU Handbooks). b) Sources of thermal energy in the Earth's interior (from Reference Lists A, C & E). The tabulated entries include compilations of world-wide measurements, a correction for the ridge effect, gravitational energy induced by cooling and contraction, possible non-radiogenic and core sources, radiogenic heating with a 1 to 2 Ga delay between production and arrival at the surface, and secular cooling (Korenaga, 2003; Schubert et al., 1980).
It is often assumed that the melting involved in hotspot magmatism requires an anomalous source of energy, e.g., plumes or frictional heating. Heat is indeed required, and invoking a source with a lower melting point, e.g., eclogite, does not remove this problem since latent heat of melting is still required.
The mantle is
a much larger source of energy than the core. Melting
can occur by placing material with a low melting point
(e.g., basalt, eclogite, pyroxenite, piclogite)
into the shallow mantle by subduction or delamination
of continental lithosphere. The surrounding mantle serves
as a heat source and the subducted/delaminated material
as a heat sink. At thermal equilibrium the slabs, at
least their upper portions, will be partially molten,
except in the coldest parts of the shallow mantle. If
the slab is rich in volatiles (H2O, CO2)
this will reduce seismic velocity, even though the slab
is cooler than the surrounding mantle. Subduction refertilizes
the mantle, cools it, and causes major local thermal anomalies.
Some of the seismically
slowest regions on Earth – low velocity zones
(LVZ) – are at the tops of and above subducting
slabs (mantle wedge, back-arc basins). These regions
are cooled by subduction. The LVZ extend to depths of
200-300 km. Presumably if such convergent regions turn
into diverging regions, such as when old sutures are
reactivated during continental breakup, the fertile,
volatile-rich regions will melt and may even provide
dense sinkers which liberate volatiles as they sink
(see Lithospheric Delamination
page) giving deep LVZ and basaltic melts on top.
A sinking carbonated eclogitic slab can have low seismic velocities,
and delaminated eclogite accumulating at, say, 650
km can be entrained into upwelling flows when continents
diverge and can be part of the basaltic source material.
Thus, slabs do not need to be carried around by continents
in order to explain slab components in hotspot magmas,
although some slab material will certainly be trapped
between suturing Archaean cratons. The largest divergence
suction is expected with thick separating cratons,
whereas thin diverging lithosphere will suck up mainly
shallow material. Material in the transition region
can also be displaced by sinking slabs.
The heat budget
of the Earth cannot be treated as an instantaneous one-dimensional
heat flow problem, or one that involves a homogeneous
mantle with uniform and static boundary conditions.
Both the radial and lateral structure of the Earth must
be considered. Continents affect mantle heat flow by
diverting heat to the ocean basins (Lenardic,
1998) and drifting so as to be over cold downwellings
(Reference List E). Chemical stratification (Reference
List B) of the mantle slows down cooling of the Earth
but the upward concentration of radioactive elements
reduces the time between heat generation and surface
heat flow. Nevertheless, one-dimensional and homogeneous
models, or models with a downward increase in radioactive
heating have dominated the attention of convection modelers
(Reference List D). Paradoxes such as the “missing
heat source problem” can be traced to non-realistic
assumptions and initial and boundary conditions.
The major outstanding
problems in the Earth's thermal budget and history involve
the role of hydrothermal circulation near the top, and
radiative heat transfer near the bottom, of the mantle.
Convection modeling has not yet covered the parameter
range which may be most pertinent from physical considerations
and geophysical data. What is needed is a thermodynamically
self-consistent approach which includes the temperature,
pressure and volume-dependence of physical properties,
realistic initial and boundary conditions, and the ability
to model melting and various forms of heat transport.
The bottom line
is that there appears to be no mismatch between observed
heat flow and plausible sources of heating. The rate
at which the mantle is losing heat appears to have been
overestimated in the past, and the available energy
sources in the interior have been underestimated. The
uncertainties in the various estimates have not been
fully appreciated (see Hofmeister
2003, for a discussion). Lord Kelvin also underestimated
the errors in his estimates of the age of the Earth,
and neglected, understandably, a significant heat source.
A. Heat flow compilations
Jaupart, C. & Mareschal, J.-C.,
2003, Constraints on Crustal Heat Production
from Heat Flow Data, in Treatise on Geochemistry,
Heinrich D. Holland & Karl K. Turekian, Eds,
Jaupart, C. & J.-C. Mareschal,
1999, The thermal structure of continental roots,
Lithos, 48, 93-114.
Nyblade, A.A., & Pollack,
H.N., 1993, A global analysis of heat flow from
Precambrian terrains: implications for the thermal
structure of Archean and Proterozoic lithosphere,
J. Geophys. Res., 98,
O'Connell, R. J. & B. H. Hager,
1980, On the thermal state of the Earth, in
A. Dziewonski & E. Boschi, Eds., Physics
of the Earth's Interior, Proc. Enrico Fermi
Int. Sch. Phys., 78, 270-317.
Pollack, H.N., Hurter, S.J., &
Johnson, J.R., 1993, Heat loss from the Earth's
interior: analysis of the global data set, Rev.
Geophys., 31, 267-280.
Pollack, H. N., 1982, The heat
flow from the continents, Ann. Rev. Earth
Planet. Sci., 10, 459-481.
Sclater, J. G., Jaupart, C. &
Galson, D., 1980, The heat flow through oceanic
and continental crust and the heat loss
of the Earth, Rev. Geophys. Space Phys., 18,
Stein, C., & D. Abbott, 1991,
Heat-flow constraints on the south-Pacific superswell,
J. Geophys. Res., 96,
Stein, C.A., & S. Stein,
1992, A model for the global variation in
oceanic depth and heat-flow with lithospheric age, Nature, 359, 123-129.
Stein, C.A., & S. Stein,
1993, Constraints on Pacific midplate swells
from global depth-age and heat flow-age models,
in The Mesozoic
Pacific: Geology, Tectonics, and Volcanism,
53-76, American Geophysical Union, Washington,
von Herzen, R.P., M.J. Cordery,
R.S. Detrick, & C. Fang, 1989, Heat-flow
and the thermal origin of hot spot swells -
the Hawaiian swell revisited, J. Geophys.
Res., 94, 13,783-13,799.
Vitorello, I., & Pollack, H. N., 1980, On the variation
of continental heat flow with age and the
thermal evolution of continents, J. Geophys.
Res., 85, 983-995.
B. Chemical stratification
and distribution of radioactive elements
Agee, C. B., & D. Walker,
1993, Olivine flotation in mantle melt, Earth
Plant. Sci. Lett., 114,
Agee, C. B. & Walker, D.,
1988, Mass balance and phase density constraints
on early differentiation of chondritic mantle,
Earth Planet. Sci. Lett., 90,
Anderson, D.L., 1989, Theory
of the Earth
, Blackwell Scientific Publications,
Boston, 366 pp. http://resolver.caltech.edu/CaltechBOOK:1989.001
Anderson, D. L., 1989, Where on
Earth is the Crust?, Physics Today, 42,
Anderson, D. L., 2002, The Case
for Irreversible Chemical Stratification of
the Mantle, Int. Geol. Rev., 44,
Anderson, D.L. 2005. Self-gravity,
self-consistency, and self-organization in
geodynamics and geochemistry, in Earth's
Deep Mantle: Structure, Composition, and Evolution,
Eds. R.D. van der Hilst, J. Bass, J. Matas & J.
Trampert, AGU Geophysical Monograph Series
160, pp. 165-186.
Anderson, D.L., 2005, Scoring
hotspots: The plume and plate paradigms, in
Foulger, G.R., Natland, J.H., Presnall, D.C.,
and Anderson, D.L., eds., Plates,
plumes, and paradigms: Geological Society
of America Special Paper 388, 31-54.
Anderson, D.L., 2006, Speculations on the nature and
cause of mantle heterogeneity, Tectonophysics, 416,
Boschi, L. & Dziewonski,
A. M., 1999, “High” and “low”
resolution images of the Earth's mantle - Implications
of different approaches to tomographic modeling,
J. Geophys. Res., 104,
Clark, S. P., & Turekian,
K. K., 1979, Thermal constraints on the distribution
of long-lived radioactive elements in the Earth,
Phil. Trans. R.Soc. Lond., 291,
Cousens, B.L. et al., 2001, Enriched
Archean lithospheric mantle beneath western
Churchill Province, Geology, 29,
Cserepes, L., Yuen, D. A., &
Schroeder, B. A., 2000, Effect of the mid-mantle
viscosity and phase-transition structure of
3D mantle convection, Phys. Earth. Planet.
Int., 118, 135-148.
Davaille, A., 1999, Two-layer
thermal convection in miscible viscous fluids.
J. Fluid Mech., 379,
Davies, G. F., 2000, Dynamic
Earth: Plates, Plumes and Mantle Convection,
Cambridge University Press, Cambridge, 458 pp.
Fukao Y, Obayashi M, Inoue H,
Nenbai M., 1992, Subducting slabs stagnant in
the mantle transition zone, J. Geophys. Res.,
Fukao, Y., Widiyantoro, S., Obayashi,
M., 2001, Stagnant slabs in the upper and lower
mantle transition region, Rev. Geophy.,
Gasparik, T., 1993, The role of
volatiles in the transition zone, J. Geophys.
Res., 98, 4287-4299.
Gasparik, T., 1997, A model for
the layered upper mantle, Phys. Earth Planet.
Int., 100, 197-212.
Wen, Lianxing & D. L. Anderson,
1995, The Fate of Slabs Inferred from Seismic
Tomography and 130 Million Years of Subduction,
Earth Planet Sci. Lett., 133,
Wen, L. & D. L. Anderson,
1997, Layered mantle convection: A model for
geoid and topography, Earth Planet. Sci.
Lett., 146, 367-377.
C. Energy sources in the Earth
Birch, F., 1965, Energetics of
core formation, J. Geophys. Res., 70,
Chao, B.F., Gross, R. & Dong,
D.-N., 1995, Changes in global gravitational
energy induced by earthquakes, Geophys. J.
Int., 122, 784-789
Flasar, F. M. & Birch, F.,
1973, Energetics of core formation: a correction.
J. Geophys. Res., 78,
Gubbins, D., 1977, Energetics
of the Earth's core, J. Geophys., 43,
Gubbins, D., Masters, T.G. &
Jacobs, J.A., 1979. Thermal evolution of the
Earth's core, Geophys. J. R. astr. Soc.,
Javoy, M., 1999, Chemical Earth
models, C. R. Acad. Sci. Paris, 329,
Jochum, K. P., Hofmann, A. W.,
Ito, E., Seufert, H. M. & W.M. White, 1983,
K, U and Th in mid-ocean ridge basalt glasses
and heat production, K/U and K/Rb in the mantle,
Nature, 306, 431-436.
McDonough, W. F., 1995, The composition
of the Earth, Chem. Geol., 120,
Rudnick, R. L., McDonough, W.
F. & O'Connell, R. J., 1998, Thermal structure,
thickness and composition of continental lithosphere.
Chem. Geol., 145, 395-411.
Stacey, F.D. & Stacey, C.H.B.,
1999, Gravitational energy of core evolution:
implications for thermal history and geodynamo
power, Phys. Earth Planet. Inter., 110,
Van Schmus W.R., 1995, Natural
radioactivity of the crust and mantle, in A
Handbook of physical constants, AGU
References shelf 1. Ed. T.J. Ahrens, AGU, Washington
DC, pp. 283-291.
Verhoogen, J., 1980, Energetics
of the Earth, Nat. Acad. Sci., Washington,
DC, 139 pp.
White, W.M. 1983, K, U and Th
in mid-ocean ridge basalt glasses and heat
production, K/U and K/Rb in the mantle, Nature, 306,
Anderson, D.L., 2007, The Eclogite
engine: Chemical geodynamics as a Galileo thermometer.
In: Foulger, G. R. & Jurdy, D. M. (eds.)
Plates, Plumes and Planetary Processes: Geological
Society of America, Special Paper 430, 47–64.
Christensen, U., 1985, Thermal
evolution models for the Earth, J. Geophys.
Res., 90, 2995-3007.
Coltice, N., & Ricard, Y.,
1999, Geochemical observations and one layer
mantle convection, Earth Planet. Sci. Lett.,
Conrad, C. P., & B. H. Hager,
2001, Mantle convection with strong subduction
zones, Geophys. J. Int., 144,
- Kaula, W.M., 1983, Minimal upper mantle temperature
variations consistent with observed heat flow and
plate velocities, J. Geophys.
Res., 88, 10,323-10,332.
Korenaga, J., 2003, Energetics
of mantle convection and the fate of fossil
heat, Geophys. Res. Lett., 30,
Korenaga, J., 2008, Urey
ratio and the structure and evolution of Earth's
mantle, Rev. Geophys., 46, RG2007,
Korenaga, J., 2008, Comment on "Intermittent
plate tectonics?", Science, 320,
Loyd, S. J., T. W. Becker, C.
P. Conrad, C. Lithgow-Bertelloni & F. A.
Corsetti, 2007, Time-variability in Cenozoic
reconstructions of mantle heat flow: Plate
tectonic cycles and implications for Earth’s
thermal evolution, Proceedings
of the National Academy of Science, U.S.A., 104,
McNamara, A.K., & P.E. van
Keken, 2000, Cooling of the Earth: A parameterized
convection study of whole versus layered models,
Geochemistry, Geophysics, Geosystems,
Tozer, D. C., 1972, The present
thermal state of the terrestrial planets, Phys.
Earth Planet. Inter., 6,
Schubert, G., Stevenson, D. &
Cassen, P., 1980, Whole planet cooling and the
radiogenic heat source contents of the Earth
and Moon, J. Geophys. Res., 85,
Schubert, G., Turcotte, D., Olson,
P., 2001, Mantle convection in the Earth
and planets, C. U. Press, 956 pp.
Stacey, F. D., 1992, Physics
of the Earth, 2nd Ed. Brisbane, Brookfield
Stacey, F. D. & Loper, D.
E., 1984, Thermal histories of the core and
mantle, Phys. Earth Planet. Inter., 36,
Stevenson, D., Spohn, T. &
Schubert, G., 1983, Magnetism and thermal evolution
of the terrestrial planets, Icarus, 54,
Tackley, P., 1998, Three dimensional
simulations of mantle convection with a thermo-chemical
basal boundary layer, in: M. Gurnis et al.,
eds., The Core-Mantle Boundary Region,
Washington, AGU, 334 pp.
Thompson, Sir W. (Lord Kelvin),
1890, On the Secular cooling of the Earth.
Mathematical and Physical Papers, Vol III,
Elasticity, Heat, Electro-Magnetism. London:
C.J. Clay and sons, pp. 295-311.
Van Keken PE, Ballentine C.J.,
1998, Whole-mantle versus layered mantle convection
and the role of a high-viscosity lower mantle
in terrestrial volatile evolution, Earth
Planet. Sci. Lett., 156,
Van Keken P.E., Ballentine C.J.,
1999, Dynamical models of mantle volatile
evolution and the role of phase transitions
and temperature-dependent rheology, J.
Geophys. Res., 104,
Guillou, L. & Jaupart, C.,
1995, On the effect of continents on mantle
convection, J. Geophys. Res., 100,
Lenardic, A., & L.-N. Moresi,
2001, Heat flux scalings for mantle convection
below a conducting lid: resolving seemingly
inconsistent modeling results regarding continental
heat flow, Geophys. Res. Lett., 28,
Lenardic, A., L. Guillou-Frottier,
J.-C. Mareschal, C. Jaupart, L.-N. Moresi, &
W.M. Kaula, 2000, What the mantle sees: the
effects of continents on mantle heat flow, In,
The History and Dynamics of Global Plate
Motions, Ed: M. Richards, R. Gordon, &
R. van der Hilst, AGU Press, 95-112.
Lenardic, A., 1998, On the partitioning
of mantle heat loss below oceans and continents
over time and its relationship to the Archean
paradox, Geophys. J. Int., 134,
Crough, S.T., 1983, Hotspot swells,
Ann. Rev. Earth Planet. Sci., 11,
McNutt, M. K. & A. V. Judge,
1990, The Superswell and mantle dynamics beneath
the South Pacific, Science, 248,
Parsons, B. & J. G.
Sclater, 1977, An analysis of the variation
of the ocean floor bathymetry and heat flow
with age, J. Geophys. Res., 82,
- Phipps Morgan, J., W. J. Morgan, and E. Price,
1995, Hot spot melting generates both hot spot
volcanism and a hot spot swell? J. Geophys.
Res., 100, 8045-8062.
Phipps Morgan, J. & W. H.
F. Smith, 1992, Flattening of the seafloor
depth-age curve as a response to asthenospheric
flow, Nature, 359, 524-527.
Phipps Morgan, J. & W. H.
F. Smith, 1994, Correction: Flattening of the
seafloor depth-age curve as a response to asthenospheric
flow, Nature, 371, 83.
Rowley, D.B., 2002, Rate of plate
creation and destruction: 180 Ma to Present,
Geol. Soc. Am. Bull., 114,
Sandwell, D. T., E. L. Winterer,
J. Mammerickx, R. A. Duncan, M. A. Lynch, D.
A. Levitt, and C. L. Johnson, 1995, Evidence
for diffuse extension of the Pacific plate
from the Pukapuka Ridges and crossgrain gravity
lineations, J. Geophys.
Res., 100, 15087-15099.
Sleep, N.H., 1994, Lithospheric thinning by midplate
mantle plumes and the thermal history of hot
plume material ponded at sublithospheric depths, J. Geophys. Res., 99,
Smith, W. H. F. & J. Phipps
Morgan, 1992, A dynamic origin for asymmetric
subsidence and geoid anomalies in the south
Atlantic Ocean? Eos, Trans. Am. Geophys.
Union, 73, 582.
Stein, C. A. & S. Stein, 1994,
Comparison of plate and asthenospheric flow
models for the thermal evolution of oceanic
lithosphere, Geophys. Res. Lett., 21,
updated 15th November, 2009 | http://www.mantleplumes.org/Energetics.html | 13 |
Common Core Math Standards - 1st Grade
MathScore aligns to the Common Core Math Standards for 1st Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience.
Operations and Algebraic Thinking
Represent and solve problems involving addition and subtraction.
1. Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.1 (Basic Word Problems )
2. Solve word problems that call for addition of three whole numbers whose sum is less than or equal to 20, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.
Understand and apply properties of operations and the relationship between addition and subtraction.
3. Apply properties of operations as strategies to add and subtract.2 Examples: If 8 + 3 = 11 is known, then 3 + 8 = 11 is also known. (Commutative property of addition.) To add 2 + 6 + 4, the second two numbers can be added to make a ten, so 2 + 6 + 4 = 2 + 10 = 12. (Associative property of addition.) (Commutative Property 1 , Associative Property 1 , Addition Grouping )
4. Understand subtraction as an unknown-addend problem. For example, subtract 10 – 8 by finding the number that makes 10 when added to 8. (Inverse Equations 1 , Missing Term )
Add and subtract within 20.
5. Relate counting to addition and subtraction (e.g., by counting on 2 to add 2). (Understanding Addition )
6. Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 – 4 = 13 – 3 – 1 = 10 – 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 – 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13). (Fast Addition , Fast Addition Reverse , Fast Subtraction , Mixed Addition and Subtraction , Inverse Equations 1 )
Work with addition and subtraction equations.
7. Understand the meaning of the equal sign, and determine if equations involving addition and subtraction are true or false. For example, which of the following equations are true and which are false? 6 = 6, 7 = 8 – 1, 5 + 2 = 2 + 5, 4 + 1 = 5 + 2. (Understanding Equality )
8. Determine the unknown whole number in an addition or subtraction equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 + ? = 11, 5 = _ – 3, 6 + 6 = _. (Missing Term )
1 See Glossary, Table 1.
2 Students need not use formal terms for these properties.
Number and Operations in Base Ten
Extend the counting sequence.
1. Count to 120, starting at any number less than 120. In this range, read and write numerals and represent a number of objects with a written numeral. (Counting to 120 )
Understand place value.
2. Understand that the two digits of a two-digit number represent amounts of tens and ones. Understand the following as special cases: (Counting Squares to 100 )
a. 10 can be thought of as a bundle of ten ones — called a “ten.” (Counting Squares to 100 )
b. The numbers from 11 to 19 are composed of a ten and one, two, three, four, five, six, seven, eight, or nine ones. (Counting Squares to 100 , Understanding 11 to 19 )
c. The numbers 10, 20, 30, 40, 50, 60, 70, 80, 90 refer to one, two, three, four, five, six, seven, eight, or nine tens (and 0 ones). (Counting Squares to 100 )
3. Compare two two-digit numbers based on meanings of the tens and ones digits, recording the results of comparisons with the symbols >, =, and <. (Number Comparison to 100 )
Use place value understanding and properties of operations to add and subtract.
4. Add within 100, including adding a two-digit number and a one-digit number, and adding a two-digit number and a multiple of 10, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used. Understand that in adding two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten. (Basic Addition to 100 )
5. Given a two-digit number, mentally find 10 more or 10 less than the number, without having to count; explain the reasoning used. (Mental Addition and Subtraction to 100 )
6. Subtract multiples of 10 in the range 10-90 from multiples of 10 in the range 10-90 (positive or zero differences), using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used. (Basic Subtraction to 100 )
Measurement and Data
Measure lengths indirectly and by iterating length units.
1. Order three objects by length; compare the lengths of two objects indirectly by using a third object.
2. Express the length of an object as a whole number of length units, by laying multiple copies of a shorter object (the length unit) end to end; understand that the length measurement of an object is the number of same-size length units that span it with no gaps or overlaps. Limit to contexts where the object being measured is spanned by a whole number of length units with no gaps or overlaps.
Tell and write time.
3. Tell and write time in hours and half-hours using analog and digital clocks. (Basic Telling Time )
Represent and interpret data.
4. Organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another.
Geometry
Reason with shapes and their attributes.
1. Distinguish between defining attributes (e.g., triangles are closed and three-sided) versus non-defining attributes (e.g., color, orientation, overall size) ; build and draw shapes to possess defining attributes.
2. Compose two-dimensional shapes (rectangles, squares, trapezoids, triangles, half-circles, and quarter-circles) or three-dimensional shapes (cubes, right rectangular prisms, right circular cones, and right circular cylinders) to create a composite shape, and compose new shapes from the composite shape.1
3. Partition circles and rectangles into two and four equal shares, describe the shares using the words halves, fourths, and quarters, and use the phrases half of, fourth of, and quarter of. Describe the whole as two of, or four of the shares. Understand for these examples that decomposing into more equal shares creates smaller shares.
1 Students do not need to learn formal names such as "right rectangular prism."
Learn more about our online math practice software. | http://www.mathscore.com/math/standards/Common%20Core/1st%20Grade/ | 13 |
99 | Algebra Help Math Sheet
An Engineer's Quick Algebra Reference
Algebra Math Help
Arithmetic Operations
The basic arithmetic operations are addition, subtraction, multiplication, and division. These operators follow an order of operations.
Addition
Addition is the operation of combining two numbers. If more than two numbers are added, this can be called summing. Addition is denoted by the + symbol. The addition of zero to any number results in the same number. Addition of a negative number is equivalent to subtraction of the absolute value of that number.
Subtraction
Subtraction is the inverse of addition. The subtraction operator reduces the first operand (the minuend) by the second operand (the subtrahend). Subtraction is denoted by the - symbol.
Multiplication
Multiplication is the product of two numbers and can be considered as a series of repeated additions. Multiplying a number by -1 gives its additive inverse (the same number with the opposite sign). Multiplication by zero always results in zero. Multiplication by one always results in the same number.
Division
Division determines the quotient of two numbers: the dividend divided by the divisor. Division is the inverse of multiplication.
Arithmetic Properties
The main arithmetic properties are Associative, Commutative, and Distributive. These properties are used to manipulate expressions and to create equivalent expressions in a new form.
Associative
The Associative property is related to grouping: it allows the grouping of added or multiplied numbers to be changed without changing the resulting value.
Commutative
The Commutative property is related to the order of operands. This rule applies to both addition and multiplication and allows the operands to change order within the same group.
Distributive
The law of distribution allows operations in some cases to be broken down into parts. The property is applied when multiplication is applied to a sum (or difference). This law is applied in the case of factoring.
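Stated symbolically (standard identities, supplied here because the original sheet's formula images are not reproduced in this copy):
\[ (a + b) + c = a + (b + c), \qquad (ab)c = a(bc) \]
\[ a + b = b + a, \qquad ab = ba \]
\[ a(b + c) = ab + ac \]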
Arithmetic Operations Examples
Exponent Properties
Properties of Radicals
Properties of Inequalities
Properties of Absolute Value
Definition of Complex Numbers
Complex numbers are an extension of the real number system. A complex number is defined as a two-dimensional vector containing a real part and an imaginary part. The imaginary unit is defined as:
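(The formula itself is not preserved in this copy; the standard definition is:)
\[ i = \sqrt{-1}, \qquad i^2 = -1 \]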
The complex number format where a is a real number and b is an imaginary number is defined as:
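(Presumably the standard rectangular form:)
\[ z = a + bi \]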
Unlike the real number system where all numbers are represented on a line, complex numbers are represented on a complex plane, one axis represents real numbers and the other axis represents imaginary numbers.
Properties of Complex Numbers
Definition of Logarithms
A logarithm is a function that, for a specific number, returns the power or exponent required to raise a given base to equal that number. One advantage of using logarithms is that very large and very small numbers can be represented with smaller numbers. Another advantage is that simple addition and subtraction replace equivalent, more complex operations. The definition of a logarithm is:
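(In standard notation, for base b > 0, b not equal to 1, and x > 0:)
\[ \log_b x = y \iff b^y = x \]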
Definition of Natural Log
Definition of Common Log
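(The definitions under the two headings above are presumably the standard ones:)
\[ \ln x = \log_e x, \qquad e \approx 2.71828 \]
\[ \log x = \log_{10} x \]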
Logarithm Properties
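(The property list itself is missing from this copy; the standard identities, for positive x and y, are:)
\[ \log_b(xy) = \log_b x + \log_b y \]
\[ \log_b\!\left(\tfrac{x}{y}\right) = \log_b x - \log_b y \]
\[ \log_b(x^n) = n \log_b x \]
\[ \log_b x = \frac{\log_k x}{\log_k b} \quad \text{(change of base)} \]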
Polynomials
A polynomial is an expression made up of variables and constants, combined using addition, subtraction, multiplication, division (by constants only), and raising to a constant non-negative power. Polynomials follow the form:
The polynomial is made up of coefficients multiplied by the variable raised to some integer power. The degree of a polynomial is determined by the largest power to which the variable is raised.
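(The general form referred to above, written in the usual notation:)
\[ p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0, \qquad a_n \neq 0 \]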
Quadratic Equation
A quadratic equation is a polynomial of the second order.
The solutions of a quadratic equation are given by the quadratic formula:
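(Standard statement, for \( ax^2 + bx + c = 0 \) with \( a \neq 0 \):)
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]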
Common Factoring Examples
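(The worked examples are not preserved here; these are presumably patterns of the kind the original sheet listed:)
\[ x^2 - a^2 = (x - a)(x + a) \]
\[ x^2 + 2ax + a^2 = (x + a)^2 \]
\[ x^3 + a^3 = (x + a)(x^2 - ax + a^2) \]
\[ x^3 - a^3 = (x - a)(x^2 + ax + a^2) \]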
Square Root
The square root of a number x is a number r that, when squared, is equal to x.
Also the square root property is:
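(In symbols, presumably:)
\[ r = \sqrt{x} \;\Rightarrow\; r^2 = x, \qquad \sqrt{x^2} = |x| \]
\[ x^2 = a \;\Rightarrow\; x = \pm\sqrt{a} \]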
Absolute Value
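(No content survives under this heading in this copy; the standard definition is:)
\[ |x| = \begin{cases} x, & x \ge 0 \\ -x, & x < 0 \end{cases} \]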
Completing the Square
Completing the square is a method used to solve quadratic equations. Algebraic properties are used to manipulate the quadratic polynomial to change its form. This method is one way to derive the quadratic formula.
The steps to complete the square are listed below; a worked example follows the list.
- Divide by the coefficient a.
- Move the constant to the other side.
- Take half of the coefficient b/a, square it and add it to both sides.
- Factor the left side of the equation.
- Use the square root property.
- Solve for x.
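A worked pass through these steps on the general equation \( ax^2 + bx + c = 0 \) (standard algebra; the original sheet's own worked example is not preserved in this copy):
\[ x^2 + \frac{b}{a}x + \frac{c}{a} = 0 \]
\[ x^2 + \frac{b}{a}x = -\frac{c}{a} \]
\[ x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2 = \left(\frac{b}{2a}\right)^2 - \frac{c}{a} \]
\[ \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2} \]
\[ x + \frac{b}{2a} = \pm\frac{\sqrt{b^2 - 4ac}}{2a} \]
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
which recovers the quadratic formula given above.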
Functions and Graphs
Evaluating an expression at incremental points and plotting the results on a Cartesian coordinate system produces a plot or graph.
Constant Function
When a function is equal to a constant, for all values of x, f(x) is equal to the constant. The graph of this function is a straight line through the point (0,c).
Linear Function
A linear function follows the form:
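(Presumably the slope-intercept form:)
\[ f(x) = mx + b \]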
The graph of this function has a slope of m and the y intercept is b. It passes through the point (0,b). The slope is defined as:
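(The usual two-point definition, for points \( (x_1, y_1) \) and \( (x_2, y_2) \) on the line:)
\[ m = \frac{y_2 - y_1}{x_2 - x_1} \]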
An additional form for linear functions is the point-slope form:
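(In standard notation, for a known point \( (x_1, y_1) \) and slope m:)
\[ y - y_1 = m(x - x_1) \]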
Parabola or Quadratic Function
A parabola is a graphical representation of a quadratic function.
The graph of a parabola in this form opens up if a>0 and opens down if a<0. The vertex of the parabola is located at:
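(The form referred to here, and its vertex, are presumably the standard ones:)
\[ y = ax^2 + bx + c, \qquad \text{vertex at } \left(-\frac{b}{2a},\; c - \frac{b^2}{4a}\right) \]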
Other forms of parabolas are:
The graph of a parabola in this form opens right if a>0 or opens left if a<0. The vertex of the parabola is located at:
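(Presumably the sideways-opening form and its vertex:)
\[ x = ay^2 + by + c, \qquad \text{vertex at } \left(c - \frac{b^2}{4a},\; -\frac{b}{2a}\right) \]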
Circle
The function of a circle follows the form:
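(Standard form:)
\[ (x - h)^2 + (y - k)^2 = r^2 \]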
Where the center of the circle is (h,k) and the radius of the circle is r.
Ellipse
The function of an ellipse follows the form:
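(Standard form, with semi-axes a and b:)
\[ \frac{(x - h)^2}{a^2} + \frac{(y - k)^2}{b^2} = 1 \]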
Where the center of the ellipse is (h,k)
Hyperbola
The function of a Hyperbola that opens right and left from the center follows the form:
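(In the notation assumed here:)
\[ \frac{(x - h)^2}{a^2} - \frac{(y - k)^2}{b^2} = 1 \]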
The function of a Hyperbola that opens up and down from the center follows the form:
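(With the same notation:)
\[ \frac{(y - k)^2}{b^2} - \frac{(x - h)^2}{a^2} = 1 \]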
Where the center of the hyperbola is (h,k), with asymptotes that pass through the center with slopes of ±b/a (for the forms written above). | http://www.eeweb.com/toolbox/algebra-reference-sheet | 13
55 | In communications and electronic engineering, a transmission line is a specialized cable designed to carry alternating current of radio frequency, that is, currents with a frequency high enough that their wave nature must be taken into account. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas, distributing cable television signals, and computer network connections.
Ordinary electrical cables suffice to carry low frequency AC, such as mains power, which reverses direction 100 to 120 times per second (cycling 50 to 60 times per second). However, they cannot be used to carry currents in the radio frequency range or higher, which reverse direction millions to billions of times per second, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the power from reaching the destination. Transmission lines use specialized construction such as precise conductor dimensions and spacing, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. Types of transmission line include ladder line, coaxial cable, dielectric slabs, stripline, optical fiber, and waveguides. Although the wavelength at a specific frequency depends on the transmission media, it is always the case that higher frequency waves have shorter wavelengths. Transmission lines must be used when the frequency is high enough that the wavelength of the waves begins to approach the length of the cable used. To conduct energy at frequencies above the radio range, such as millimeter waves, infrared, and light, the waves become much smaller than the dimensions of the structures used to guide them, so transmission line techniques become inadequate and the methods of optics are used.
The theory of sound wave propagation is very similar mathematically to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves; and these are also called transmission lines.
Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin and Oliver Heaviside. In 1855 Lord Kelvin formulated a diffusion model of the current in a submarine cable. The model correctly predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. In 1885 Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations.
In many electric circuits, the length of the wires connecting the components can for the most part be ignored. That is, the voltage on the wire at a given time can be assumed to be the same at all points. However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire.
A common rule of thumb is that the cable or wire should be treated as a transmission line if the length is greater than 1/10 of the wavelength. At this length the phase delay and the interference of any reflections on the line become important and can lead to unpredictable behavior in systems which have not been carefully designed using transmission line theory.
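As a rough numerical illustration of this one-tenth-wavelength rule (using free-space wavelengths; the velocity factor of a real cable shortens them further, so these thresholds are only indicative):

```python
c = 3.0e8  # speed of light in vacuum, m/s

for f_hz in (50.0, 1e6, 100e6, 2.4e9):
    wavelength = c / f_hz
    print(f"{f_hz:12.3g} Hz: wavelength ~ {wavelength:10.3g} m, "
          f"treat runs longer than ~ {wavelength / 10:.3g} m as transmission lines")
```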
The four terminal model
For the purposes of analysis, an electrical transmission line can be modelled as a two-port network (also called a quadrupole network), as follows:
In the simplest case, the network is assumed to be linear (i.e. the complex voltage across either port is proportional to the complex current flowing into it when there are no reflections), and the two ports are assumed to be interchangeable. If the transmission line is uniform along its length, then its behaviour is largely described by a single parameter called the characteristic impedance, symbol Z0. This is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission.
When sending power down a transmission line, it is usually desirable that as much power as possible will be absorbed by the load and as little as possible will be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched.
Some of the power that is fed into a transmission line is lost because of its resistance. This effect is called ohmic or resistive loss (see ohmic heating). At high frequencies, another effect called dielectric loss becomes significant, adding to the losses caused by resistance. Dielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating). The transmission line is modeled with a resistance (R) and inductance (L) in series with a capacitance (C) and conductance (G) in parallel. The resistance and conductance contribute to the loss in a transmission line.
The total loss of power in a transmission line is often specified in decibels per metre (dB/m), and usually depends on the frequency of the signal. The manufacturer often supplies a chart showing the loss in dB/m at a range of frequencies. A loss of 3 dB corresponds approximately to a halving of the power.
High-frequency transmission lines can be defined as those designed to carry electromagnetic waves whose wavelengths are shorter than or comparable to the length of the line. Under these conditions, the approximations useful for calculations at lower frequencies are no longer accurate. This often occurs with radio, microwave and optical signals, metal mesh optical filters, and with the signals found in high-speed digital circuits.
Telegrapher's equations
The Telegrapher's Equations (or just Telegraph Equations) are a pair of linear differential equations which describe the voltage and current on an electrical transmission line with distance and time. They were developed by Oliver Heaviside who created the transmission line model, and are based on Maxwell's Equations.
The transmission line model represents the transmission line as an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line:
- The distributed resistance of the conductors is represented by a series resistor R (expressed in ohms per unit length).
- The distributed inductance (due to the magnetic field around the wires, self-inductance, etc.) is represented by a series inductor L (henries per unit length).
- The capacitance between the two conductors is represented by a shunt capacitor C (farads per unit length).
- The conductance of the dielectric material separating the two conductors is represented by a shunt conductance G between the signal wire and the return wire (siemens per unit length).
The model consists of an infinite series of the elements shown in the figure, and the values of the components are specified per unit length, so the picture of the component can be misleading. R, L, C, and G may also be functions of frequency. An alternative notation is to use R′, L′, C′, and G′ to emphasize that the values are derivatives with respect to length. These quantities can also be known as the primary line constants to distinguish from the secondary line constants derived from them, these being the propagation constant, attenuation constant and phase constant.
The line voltage \( V(x) \) and the current \( I(x) \) can be expressed in the frequency domain as
\[ \frac{\partial V(x)}{\partial x} = -(R + j\omega L)\, I(x) \]
\[ \frac{\partial I(x)}{\partial x} = -(G + j\omega C)\, V(x) \]
When the elements \( R \) and \( G \) are negligibly small the transmission line is considered as a lossless structure. In this hypothetical case, the model depends only on the \( L \) and \( C \) elements which greatly simplifies the analysis. For a lossless transmission line, the second order steady-state Telegrapher's equations are:
\[ \frac{\partial^{2} V(x)}{\partial x^{2}} + \omega^{2} LC\, V(x) = 0 \]
\[ \frac{\partial^{2} I(x)}{\partial x^{2}} + \omega^{2} LC\, I(x) = 0 \]
These are wave equations which have plane waves with equal propagation speed in the forward and reverse directions as solutions. The physical significance of this is that electromagnetic waves propagate down transmission lines and in general, there is a reflected component that interferes with the original signal. These equations are fundamental to transmission line theory.
If \( R \) and \( G \) are not neglected, the Telegrapher's equations become:
\[ \frac{\partial^{2} V(x)}{\partial x^{2}} = \gamma^{2} V(x) \]
\[ \frac{\partial^{2} I(x)}{\partial x^{2}} = \gamma^{2} I(x) \]
where
\[ \gamma = \sqrt{(R + j\omega L)(G + j\omega C)} \]
and the characteristic impedance is:
\[ Z_{0} = \sqrt{\frac{R + j\omega L}{G + j\omega C}} \]
The solutions for \( V(x) \) and \( I(x) \) are:
\[ V(x) = V^{+} e^{-\gamma x} + V^{-} e^{+\gamma x} \]
\[ I(x) = \frac{1}{Z_{0}}\left( V^{+} e^{-\gamma x} - V^{-} e^{+\gamma x} \right) \]
The constants \( V^{+} \) and \( V^{-} \) must be determined from boundary conditions. For a voltage pulse \( V_{\mathrm{in}}(t) \), starting at \( x = 0 \) and moving in the positive \( x \)-direction, then the transmitted pulse \( V_{\mathrm{out}}(x, t) \) at position \( x \) can be obtained by computing the Fourier Transform, \( \tilde{V}(\omega) \), of \( V_{\mathrm{in}}(t) \), attenuating each frequency component by \( e^{-\mathrm{Re}(\gamma)\, x} \), advancing its phase by \( -\mathrm{Im}(\gamma)\, x \), and taking the inverse Fourier Transform. The real and imaginary parts of \( \gamma \) can be computed as
\[ \mathrm{Re}(\gamma) = (a^{2} + b^{2})^{1/4} \cos\!\left( \tfrac{1}{2}\, \mathrm{atan2}(b, a) \right) \]
\[ \mathrm{Im}(\gamma) = (a^{2} + b^{2})^{1/4} \sin\!\left( \tfrac{1}{2}\, \mathrm{atan2}(b, a) \right) \]
where atan2 is the two-parameter arctangent, and
\[ a \equiv RG - \omega^{2} LC, \qquad b \equiv \omega (RC + GL) \]
For small losses and high frequencies, to first order in \( R/(\omega L) \) and \( G/(\omega C) \) one obtains
\[ \mathrm{Re}(\gamma) \approx \frac{R}{2} \sqrt{\frac{C}{L}} + \frac{G}{2} \sqrt{\frac{L}{C}}, \qquad \mathrm{Im}(\gamma) \approx \omega \sqrt{LC} \]
Noting that an advance in phase by \( -\omega\delta \) is equivalent to a time delay by \( \delta \), \( V_{\mathrm{out}}(x, t) \) can be simply computed as
\[ V_{\mathrm{out}}(x, t) \approx V_{\mathrm{in}}\!\left( t - \sqrt{LC}\, x \right) e^{-\left( \frac{R}{2}\sqrt{C/L} + \frac{G}{2}\sqrt{L/C} \right) x} \]
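A short numerical sketch of the pulse-propagation procedure just described, using NumPy's FFT; the per-unit-length values below are arbitrary illustrative assumptions, not data for any particular cable, and the function name is hypothetical.

```python
import numpy as np

# Illustrative per-unit-length line constants (not a real cable's datasheet values)
R = 0.1      # ohm/m
L = 250e-9   # H/m
G = 1e-6     # S/m
C = 100e-12  # F/m

def propagate(v_in, dt, x):
    """Attenuate and phase-shift each frequency component by exp(-gamma*x)."""
    spectrum = np.fft.rfft(v_in)                    # Fourier transform of the input pulse
    w = 2 * np.pi * np.fft.rfftfreq(len(v_in), dt)  # angular frequencies
    gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))
    return np.fft.irfft(spectrum * np.exp(-gamma * x), n=len(v_in))

# Example: a Gaussian pulse sampled at 10 GS/s, sent 10 m down the line.
dt = 1e-10
t = np.arange(4096) * dt
v_in = np.exp(-((t - 50e-9) / 5e-9) ** 2)
v_out = propagate(v_in, dt, x=10.0)
# The output is slightly attenuated and delayed by about sqrt(L*C)*x = 50 ns.
```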
Input impedance of lossless transmission line
The characteristic impedance of a transmission line is the ratio of the amplitude of a single voltage wave to its current wave. Since most transmission lines also have a reflected wave, the characteristic impedance is generally not the impedance that is measured on the line.
For a lossless transmission line, it can be shown that the impedance measured at a given position \( \ell \) from the load impedance \( Z_{L} \) is
\[ Z_{\mathrm{in}}(\ell) = Z_{0}\, \frac{Z_{L} + j Z_{0} \tan(\beta\ell)}{Z_{0} + j Z_{L} \tan(\beta\ell)} \]
where \( \beta = \frac{2\pi}{\lambda} \) is the wavenumber.
In calculating \( \beta \), the wavelength is generally different inside the transmission line to what it would be in free-space and the velocity constant of the material the transmission line is made of needs to be taken into account when doing such a calculation.
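A small helper (the function name is hypothetical) that implements this relation; it also reproduces the special cases discussed below.

```python
import numpy as np

def z_in(z_load, z0, length, wavelength):
    """Input impedance of a lossless line of the given length terminated in z_load."""
    beta = 2 * np.pi / wavelength
    t = np.tan(beta * length)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

print(z_in(100, 50, 0.50, 1.0))   # half-wave line: ~100 ohms (the load itself)
print(z_in(100, 50, 0.25, 1.0))   # quarter-wave line: ~25 ohms = 50**2 / 100
print(z_in(50, 50, 0.1234, 1.0))  # matched load: 50 ohms regardless of length
```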
Special cases
Half wave length
For the special case where \( \ell = \frac{n\lambda}{2} \), where n is an integer (meaning that the length of the line is a multiple of half a wavelength), the expression reduces to the load impedance so that
\[ Z_{\mathrm{in}} = Z_{L} \]
for all \( n \). This includes the case when \( n = 0 \), meaning that the length of the transmission line is negligibly small compared to the wavelength. The physical significance of this is that the transmission line can be ignored (i.e. treated as a wire) in either case.
Quarter wave length
For the case where the length of the line is one quarter wavelength long, or an odd multiple of a quarter wavelength long, the input impedance becomes
\[ Z_{\mathrm{in}} = \frac{Z_{0}^{2}}{Z_{L}} \]
Matched load
Another special case is when the load impedance is equal to the characteristic impedance of the line (i.e. the line is matched), in which case the impedance reduces to the characteristic impedance of the line so that
\[ Z_{\mathrm{in}} = Z_{L} = Z_{0} \]
for all \( \ell \) and all \( \lambda \).
For the case of a shorted load (i.e. \( Z_{L} = 0 \)), the input impedance is purely imaginary and a periodic function of position and wavelength (frequency):
\[ Z_{\mathrm{in}}(\ell) = j Z_{0} \tan(\beta\ell) \]
For the case of an open load (i.e. \( Z_{L} = \infty \)), the input impedance is once again imaginary and periodic:
\[ Z_{\mathrm{in}}(\ell) = -j Z_{0} \cot(\beta\ell) \]
Stepped transmission line
A stepped transmission line is used for broad range impedance matching. It can be considered as multiple transmission line segments connected in series, with the characteristic impedance of each individual element being Z0,i. The input impedance can be obtained from the successive application of the chain relation
\[ Z_{\mathrm{in},i} = Z_{0,i}\, \frac{Z_{i} + j Z_{0,i} \tan(\beta_{i} l_{i})}{Z_{0,i} + j Z_{i} \tan(\beta_{i} l_{i})} \]
where \( \beta_{i} \) is the wave number of the ith transmission line segment and \( l_{i} \) is the length of this segment, and \( Z_{i} \) is the front-end impedance that loads the ith segment.
Because the characteristic impedance of each transmission line segment Z0,i is often different from that of the input cable Z0, the impedance transformation circle is off centered along the x axis of the Smith Chart whose impedance representation is usually normalized against Z0.
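A minimal sketch of applying the chain relation segment by segment, starting at the load and working back toward the input; the function name and example values are assumptions for illustration only.

```python
import numpy as np

def stepped_z_in(z_load, segments, wavelength):
    """segments: list of (z0, length) pairs, ordered from the load end to the input."""
    beta = 2 * np.pi / wavelength
    z = z_load
    for z0, length in segments:
        t = np.tan(beta * length)
        z = z0 * (z + 1j * z0 * t) / (z0 + 1j * z * t)
    return z

# Two quarter-wave sections stepping a 200 ohm load down toward 50 ohms.
print(stepped_z_in(200.0, [(141.4, 0.25), (70.7, 0.25)], wavelength=1.0))  # ~ (50+0j)
```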
Practical types
Coaxial cable
Coaxial lines confine virtually all of the electromagnetic wave to the area inside the cable. Coaxial lines can therefore be bent and twisted (subject to limits) without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them. In radio-frequency applications up to a few gigahertz, the wave propagates in the transverse electric and magnetic mode (TEM) only, which means that the electric and magnetic fields are both perpendicular to the direction of propagation (the electric field is radial, and the magnetic field is circumferential). However, at frequencies for which the wavelength (in the dielectric) is significantly shorter than the circumference of the cable, transverse electric (TE) and transverse magnetic (TM) waveguide modes can also propagate. When more than one mode can exist, bends and other irregularities in the cable geometry can cause power to be transferred from one mode to another.
The most common use for coaxial cables is for television and other signals with bandwidth of multiple megahertz. In the middle 20th century they carried long distance telephone connections.
A microstrip circuit uses a thin flat conductor which is parallel to a ground plane. Microstrip can be made by having a strip of copper on one side of a printed circuit board (PCB) or ceramic substrate while the other side is a continuous ground plane. The width of the strip, the thickness of the insulating layer (PCB or ceramic) and the dielectric constant of the insulating layer determine the characteristic impedance. Microstrip is an open structure whereas coaxial cable is a closed structure.
- Main article : Stripline
A stripline circuit uses a flat strip of metal which is sandwiched between two parallel ground planes. The insulating material of the substrate forms a dielectric. The width of the strip, the thickness of the substrate and the relative permittivity of the substrate determine the characteristic impedance of the strip which is a transmission line.
Balanced lines
A balanced line is a transmission line consisting of two conductors of the same type, and equal impedance to ground and other circuits. There are many formats of balanced lines, amongst the most common are twisted pair, star quad and twin-lead.
Twisted pair
Twisted pairs are commonly used for terrestrial telephone communications. In such cables, many pairs are grouped together in a single cable, from two to several thousand. The format is also used for data network distribution inside buildings, but the cable is more expensive because the transmission line parameters are tightly controlled.
Quad, star quad
Quad is four-conductor cable used sometimes for two circuits, as in 4-wire telephony, and other times for a single circuit, called star quad, a balanced circuit for audio signals. All four conductors are twisted together around the cable axis. In the quad format, each pair uses non-adjacent conductors. For star quad, two non-adjacent conductors are terminated together at both ends of the cable, and the other two conductors are also terminated together.
Interference picked up by the cable arrives as a virtually perfect common mode signal, which is easily removed by coupling transformers. Because the conductors are always the same distance from each other, cross talk is reduced relative to cables with two separate twisted pairs.
The combined benefits of twisting, differential signalling, and quadrupole pattern give outstanding noise immunity, especially advantageous for low signal level applications such as long microphone cables, even when installed very close to a power cable. The disadvantage is that star quad, in combining two conductors, typically has double the capacitance of similar two-conductor twisted and shielded audio cable. High capacitance causes increasing distortion and greater loss of high frequencies as distance increases.
Twin-lead consists of a pair of conductors held apart by a continuous insulator.
Lecher lines
Lecher lines are a form of parallel conductor that can be used at UHF for creating resonant circuits. They are a convenient practical format that fills the gap between lumped components (used at HF/VHF) and resonant cavities (used at UHF/SHF).
Single-wire line
Unbalanced lines were formerly much used for telegraph transmission, but this form of communication has now fallen into disuse. Cables are similar to twisted pair in that many cores are bundled into the same cable but only one conductor is provided per circuit and there is no twisting. All the circuits on the same route use a common path for the return current (earth return). There is a power transmission version of single-wire earth return in use in many locations.
Waveguides are rectangular or circular metallic tubes inside which an electromagnetic wave is propagated and is confined by the tube. Waveguides are not capable of transmitting the transverse electromagnetic mode found in copper lines and must use some other mode. Consequently, they cannot be directly connected to cable and a mechanism for launching the waveguide mode must be provided at the interface.
Optical fiber
Optical fiber is a solid transparent fiber of glass or polymer that carries an optical signal. Optical fiber is a variety of waveguide. Optical fiber transmission lines form the backbone of modern terrestrial communications networks due to their low cost, low loss, and high signal bandwidth (high data rate).
General applications
Signal transfer
Electrical transmission lines are very widely used to transmit high frequency signals over long or short distances with minimum power loss. One familiar example is the down lead from a TV or radio aerial to the receiver.
Pulse generation
Transmission lines are also used as pulse generators. By charging the transmission line and then discharging it into a resistive load, a rectangular pulse equal in length to twice the electrical length of the line can be obtained, although with half the voltage. A Blumlein transmission line is a related pulse forming device that overcomes this limitation. These are sometimes used as the pulsed power sources for radar transmitters and other devices.
Stub filters
If a short-circuited or open-circuited transmission line is wired in parallel with a line used to transfer signals from point A to point B, then it will function as a filter. The method for making stubs is similar to the method for using Lecher lines for crude frequency measurement, but it is 'working backwards'. One method recommended in the RSGB's radiocommunication handbook is to take an open-circuited length of transmission line wired in parallel with the feeder delivering signals from an aerial. By cutting the free end of the transmission line, a minimum in the strength of the signal observed at a receiver can be found. At this stage the stub filter will reject this frequency and the odd harmonics, but if the free end of the stub is shorted then the stub will become a filter rejecting the even harmonics.
Acoustic transmission lines
An acoustic transmission line is the acoustic analog of the electrical transmission line, typically thought of as a rigid-walled tube that is long and thin relative to the wavelength of sound present in it.
Solutions of the Telegrapher's Equations as Circuit Components
The solutions of the telegrapher's equations can be inserted directly into a circuit as components. The circuit in the top figure implements the solutions of the telegrapher's equations.
The bottom circuit is derived from the top circuit by source transformations. It also implements the solutions of the telegrapher's equations.
The ABCD type two-port gives \( V_1 \) and \( I_1 \) as functions of \( V_2 \) and \( I_2 \). Both of the circuits above, when solved for \( V_1 \) and \( I_1 \) as functions of \( V_2 \) and \( I_2 \), yield exactly the same equations.
In the bottom circuit, all voltages except the port voltages are with respect to ground and the differential amplifiers have unshown connections to ground. An example of a transmission line modeled by this circuit would be a balanced transmission line such as a telephone line. The impedances Z(s), the voltage dependent current sources (VDCSs) and the difference amplifiers (the triangle with the number "1") account for the interaction of the transmission line with the external circuit. The T(s) blocks account for delay, attenuation, dispersion and whatever happens to the signal in transit. One of the T(s) blocks carries the forward wave and the other carries the backward wave. The circuit, as depicted, is fully symmetric, although it is not drawn that way. The circuit depicted is equivalent to a transmission line connected from \( V_1 \) to \( V_2 \) in the sense that \( V_1 \), \( V_2 \), \( I_1 \) and \( I_2 \) would be the same whether this circuit or an actual transmission line was connected between \( V_1 \) and \( V_2 \). There is no implication that there are actually amplifiers inside the transmission line.
Every two-wire or balanced transmission line has an implicit (or in some cases explicit) third wire which may be called shield, sheath, common, Earth or ground. So every two-wire balanced transmission line has two modes which are nominally called the differential and common modes. The circuit shown on the bottom only models the differential mode.
In the top circuit, the voltage doublers, the difference amplifiers and impedances Z(s) account for the interaction of the transmission line with the external circuit. This circuit, as depicted, is also fully symmetric, and also not drawn that way. This circuit is a useful equivalent for an unbalanced transmission line like a coaxial cable or a micro strip line.
These are not the only possible equivalent circuits.
See also
Part of this article was derived from Federal Standard 1037C.
- Ernst Weber and Frederik Nebeker, The Evolution of Electrical Engineering, IEEE Press, Piscataway, New Jersey USA, 1994 ISBN 0-7803-1066-7
- Syed V. Ahamed, Victor B. Lawrence, Design and engineering of intelligent communication systems, pp.130-131, Springer, 1997 ISBN 0-7923-9870-X.
- Lampen, Stephen H. (2002). Audio/Video Cable Installer's Pocket Guide. McGraw-Hill. pp. 32, 110, 112. ISBN 0071386211.
- Rayburn, Ray (2011). Eargle's The Microphone Book: From Mono to Stereo to Surround - A Guide to Microphone Design and Application (3 ed.). Focal Press. pp. 164–166. ISBN 0240820754.
- McCammon, Roy, SPICE Simulation of Transmission Lines by the Telegrapher's Method, retrieved 22 Oct 2010
- William H. Hayt (1971). Engineering Circuit Analysis (second ed.). New York, NY: McGraw-Hill. ISBN 070273820, pp. 73-77.
- John J. Karakash (1950). Transmission Lines and Filter Networks (First ed.). New York, NY: Macmillan., p. 44
- Steinmetz, Charles Proteus (August 27, 1898), "The Natural Period of a Transmission Line and the Frequency of lightning Discharge Therefrom", The Electrical World: 203–205
- Grant, I. S.; Phillips, W. R., Electromagnetism (2nd ed.), John Wiley, ISBN 0-471-92712-0
- Ulaby, F. T., Fundamentals of Applied Electromagnetics (2004 media ed.), Prentice Hall, ISBN 0-13-185089-X
- "Chapter 17", Radio communication handbook, Radio Society of Great Britain, 1982, p. 20, ISBN 0-900612-58-4
- Naredo, J. L.; Soudack, A. C.; Marti, J. R. (Jan 1995), "Simulation of transients on transmission lines with corona via the method of characteristics", IEE Proceedings. Generation, Transmission and Distribution. (Morelos: Institution of Electrical Engineers) 142 (1), ISSN 1350-2360
Further reading
- Annual Dinner of the Institute at the Waldorf-Astoria. Transactions of the American Institute of Electrical Engineers, New York, January 13, 1902. (Honoring of Guglielmo Marconi, January 13, 1902)
- Avant! software, Using Transmission Line Equations and Parameters. Star-Hspice Manual, June 2001.
- Cornille, P, On the propagation of inhomogeneous waves. J. Phys. D: Appl. Phys. 23, February 14, 1990. (Concept of inhomogeneous waves propagation — Show the importance of the telegrapher's equation with Heaviside's condition.)
- Farlow, S.J., Partial differential equations for scientists and engineers. J. Wiley and Sons, 1982, p. 126. ISBN 0-471-08639-8.
- Kupershmidt, Boris A., Remarks on random evolutions in Hamiltonian representation. Math-ph/9810020. J. Nonlinear Math. Phys. 5 (1998), no. 4, 383-395.
- Pupin, M., U.S. Patent 1,541,845, Electrical wave transmission.
- Transmission line matching. EIE403: High Frequency Circuit Design. Department of Electronic and Information Engineering, Hong Kong Polytechnic University. (PDF format)
- Wilson, B. (2005, October 19). Telegrapher's Equations. Connexions.
- John Greaton Wöhlbier, ""Fundamental Equation" and "Transforming the Telegrapher's Equations". Modeling and Analysis of a Traveling Wave Under Multitone Excitation.
- Agilent Technologies. Educational Resources. Wave Propagation along a Transmission Line. Edutactional Java Applet.
- Qian, C., Impedance matching with adjustable segmented transmission line. J. Mag. Reson. 199 (2009), 104-110.
- Transmission Line Parameter Calculator
- Interactive applets on transmission lines
- SPICE Simulation of Transmission Lines | http://en.wikipedia.org/wiki/Transmission_line | 13 |
82 | The final evolutionary stage for larger stars, in which they have exhausted their thermonuclear fuel and radiate relic heat. Neutron stars are extremely dense and are supported by neutron degeneracy pressure.
A neutron star is the remnant of a dead supergiant. When the star collapses, the negatively charged electrons combine with the positively charged protons, forming neutral neutrons; hence the name, neutron star. This small, dense ball of matter contains all the mass and magnetic field of the star's core.
Stars born with about 8 to 20 times the mass of the Sun blast most of their material into interstellar space in titanic explosions, leaving only their crushed, dense cores, called neutron stars. Neutron stars are named after their composition: neutrons. In a star with a core that is 1.4 to 3 times the mass of the Sun, the core collapses so completely that electrons and protons combine to form neutrons. A full bathtub of neutron-star material (instead of water) would weigh as much as two Mount Everests. A neutron star is about 10-15 miles (16-24 km) in diameter, with a liquid neutron core and a crust of solid iron. Some neutron stars, called pulsars, spin rapidly (from once a second to several hundred times per second) and generate powerful magnetic fields.
Gravitationally collapsed star of very small dimensions and enormously high density, composed mainly of neutrons that may be the core remnant of a supernova.
A cold star, about 20 miles in diameter. One of the ways a star can spend its old age. Exclusion principle repulsion among its neutrons balances the pull of gravity.
An extremely dense star composed mainly of neutrons, the endpoint of the life of a massive star which has exploded as a supernova. Under huge gravitational forces electrons are squeezed into protons, producing neutrons. A neutron star can have up to about 3 times the mass of the Sun but a radius of only about 10 kilometres. Fast-spinning neutron stars can be observed as pulsars. If the Sun were to become a neutron star it would have a diameter of only 20 km.
Neutron stars are the super dense remains of massive stars, and they are often what is left behind after a supernova explosion.
An object only tens of miles across, but greater in mass than the Sun.
The imploded core of a massive star sometimes produced by a supernova explosion. Neutron stars typically have a mass 1.4 times the mass of the Sun, and a radius of about 5 miles. Neutron stars can be observed as pulsars.
A star that has collapsed to the point where it is supported against gravity by neutron degeneracy.
The remnant core of a massive star after a supernova explosion. It is extremely dense. Though its diameter is only about 15 kilometers, its mass is about 1.4 times that of the Sun.
A dead star comprised primarily of neutron-degenerate gas. Typically, such objects have a mass similar to or slightly larger than that of the Sun, but a diameter of about 10 miles. As in the case of electron-degenerate stars, or white dwarfs, more massive neutron stars are smaller, and when they become smaller than the Schwarzschild limit, cannot exist as stable objects. Depending upon the nature of the gas in the core of a neutron star (which may, to a certain extent, involve physics beyond definite current knowledge), the maximum mass of a neutron star is between 2 and 3 Solar masses. The density of such a star is several trillions of times the density of water, the gravity is several billions of times the gravity of the Earth, and the escape velocity is a substantial fraction of the speed of light. Rotating neutron stars may produce pulsars. Neutron stars which have material dumped on them by a companion may produce X-ray bursts, or even more exotic phenomena associated with rapidly spinning accretion disks surrounding the star.
A star of extremely high density composed almost entirely of neutrons.
Cold, degenerate, compact star in which nuclear fuels have been exhausted and pressure support against gravity is provided by the pressure of neutrons.
A celestial body hypothesized to occur in a terminal stage of stellar evolution, essentially consisting of a super dense mass of neutrons and having a powerful gravitational attraction from which only neutrinos and high-energy photons can escape, thus rendering the body invisible except to x-ray detection.
A very small, dense star that is so tightly packed together that the protons and electrons have been compressed to form neutrons.
A dense ball of neutrons that remains at the core of a star after a supernova explosion has destroyed the rest of the star. Typical neutron stars are about 20 km across, and contain more mass than the Sun.
A star that has collapsed under its own gravity, with vast gravitational and magnetic forces. It is called a neutron star because with that much gravity, protons fuse with electrons to form neutrons, so the star is almost entirely composed of neutrons.
A compact star with a radius of about 10 km and a mass of about 1.5 times that of our Sun. A neutron star internally supports itself against gravity by pressure from the strong nuclear force between neutrons, which are uncharged elementary particles commonly found in the nuclei of atoms.
a collapsed star with such high gravity that its atoms are packed together to the point that there is no room for electrons to orbit their nuclei
a compact star in which the weight of the star is carried by the pressure of free neutrons
a compressed, very dense ball of matter formed when a giant star explodes after its nuclear fuel runs out
a dead star in which gravity has squeezed all of the matter into the size of a small city
a dead star that has lost most of its material in an explosion
a dying star
a gigantic nucleus, with the mass of the Sun
a kind of collapsed star that is immensely dense and is made mostly of neutrons
a less dense form of a collapsed old star, but still has a density of more than a billion tons per teaspoon of material
an extremely dense and compact star that has undergone gravitational collapse to such an extent that much of the material has been compressed into neutrons
a product of the great explosion of a red star, called a supernova
a small, but extraordinarily dense object that is the remnant of a star that has run out of fuel and collapsed
a star made entirely out of neutrons, as the name suggests
a star that has collapsed to a very dense soup of neutrons
a star that has exhausted its nuclear energy and suffered gravitational collapse to form a region of a high density of neutrons
a stellar object which consists of a gigantic nucleus composed of neutrons only
a very small but dense star, the size of a mountain but as massive as a star like the Sun
a very small, super-dense star which is composed mostly of tightly-packed neutrons
a rapidly spinning, extremely dense star composed mainly of neutrons.
the remnant of a high-mass star. The gravity of these stars is strong enough to knock the electrons out of their orbits and into the nuclei of the atoms. There, they combine with protons to form neutrons. The structure of the nucleus is strong enough to resist any further gravitational collapse
the imploded core of a massive star remaining after a supernova explosion. Contains about the mass of the Sun in less than a trillionth of the Sun's volume.
A cold star, supported by the exclusion principle repulsion between neutrons.
A compact star consisting predominantly of neutrons. Neutron stars have masses in the range of about one to three solar masses and sizes of around 12 miles. Their density is comparable to that of atomic nuclei (i.e., about 100 to 1,000 trillion times the density of water).
A giant ball of neutrons (particles found in the nuclei of atoms). Neutron stars are very dense, only ten or twenty kilometres across, but about 1.4 to 3 times the mass of our Sun. They are formed in supernova explosions.
a small, extremely dense star composed mostly of neutrons, or the remains of a supernova explosion.
An extremely dense collapsed star consisting mainly of neutrons. A neutron star is what often remains after the supernova explosion of a massive star.
One of the possible end-points of a star. A neutron star is very dense, with the mass of about 1.4 Suns contained in a sphere with a radius of about 10 km.
A giant ball of neutrons (particles found in the nuclei of atoms). Neutron stars are very dense, only ten or twenty kilometres across, but more massive than our Sun. They are formed in supernova explosions (see below).
A type of very dense star formed after a supernova.
A type of star which is very old, having cooled off and stopped nuclear fusion reactions. When gravity pulls the star down on itself, the electrons and protons are squeezed together, leaving just neutrons. The star is then supported against gravity by "neutron degeneracy pressure" (no two neutrons can be in the same place at the same time). These are produced when a star is too heavy to be a white dwarf, but not heavy enough to turn into a Black Hole.
The collapsed core of a massive star remaining after a supernova explosion. Usually, the core will have a mass of 1.4 times that of the Sun and a diameter of about 10 miles. Because matter is compressed so tightly, negatively charged electrons and positively charged protons are forced together. The resulting star consists of a core of superfluid neutrons and superconducting protons encased by a solid crystalline crust.
A collapsed, extremely dense star (i.e., a billion tons per cubic centimeter) consisting almost entirely of neutrons; the final state of a star about twice as massive as the sun
A small, extremely dense star made primarily of neutrons, with a radius of approximately 10 kilometers.
A compact stellar object containing roughly the mass of the Sun, but compressed into an object about 10 miles across. Such objects have nearly the same mass density as a nucleus, and are composed mainly of neutrons.
Star made of degenerate neutrons, covered by a thin crust of heavy elements -- remnant of massive star after Type II or Ib SN. Neutron stars have masses between 1.4 and about 3 times that of the Sun.
When a star runs out of fuel it collapses, because it can no longer support its own weight. If the star has a lower mass than 1.4 solar masses, the Chandrasekhar limit, it will become a white dwarf. If the mass is greater than this then the pressure is too great for a white dwarf to support itself. Then a supernova explosion will occur, blowing away some of the mass of the star. If the star is still sufficiently massive it will overcome electron degeneracy pressure. Protons are forced to become neutrons by absorbing electrons and emitting neutrinos. The neutron degeneracy pressure is then all that supports the star. The neutron star is born. The neutron star has an incredible density of about 10^14 grams per cubic centimetre. Pulsars are spinning neutron stars. If the mass is too great for the neutron degeneracy pressure to support it then a black hole will be formed.
The remnant of a supernova, a neutron star is supported by degenerate neutrons and has a mass near the Chandrasekhar limit. Neutron stars spin rapidly and, if aligned just so, are visible as pulsating radio sources or pulsars.
The imploded core of a massive star produced by a supernova explosion. Neutron stars typically have 1.4 times the mass of the Sun contained in a radius of about 5 miles. According to astronomer and author Frank Shu, “a sugar cube of neutron-star stuff on Earth would weigh as much as all of humanity!” Neutron stars can be observed from Earth as pulsars.
A star (approximately sun-sized or larger), a remnant of a supernova explosion, in which gravity has caused all matter to collapse to a giant nucleus, composed only of neutrons. The collapse is also expected to greatly amplify any magnetic field present in the pre-collapse star, as well as speed up enormously any rate of rotation. It is believed that pulsars, pulsating radio sources with very precise pulsation periods, are neutron stars of radius about 10 km and rotation period about 1 second. Their magnetic axis spins and beams radio waves, in a way similar to the way a lighthouse beams its light. We detect pulsars when the Earth is in one of the directions swept by the beams.
The core of a supergiant star which has collapsed during a supernova explosion so much that it consists entirely of neutrons. Most stars between 8 and 60 solar masses end their lives like this usually producing a neutron star with a mass of about 1.4 solar masses. Neutron stars are only 10 kilometres across and have an incredible density - a teaspoon of neutron star material would have a mass of hundreds of millions of tonnes. See also Black hole.
A dense ball of neutrons that remains after a supernova has destroyed the rest of the star. Typically neutron stars are about 20 km across, and contain more mass than the Sun.
A very dense stellar remnant, formed when a star with a remnant bigger than about 1.4 solar masses explodes in a supernova. They spin rapidly.
The imploded core of a massive star produced by a supernova explosion. (typical mass of 1.4 times the mass of the sun, radius of about 5 miles, density of a neutron.) According to astronomer and author Frank Shu, "A sugarcube of neutron-star stuff on Earth would weigh as much as all of humanity! This illustrates again how much of humanity is empty space." Neutron stars can be observed as pulsars.
The imploded core of a star between 1.4 and 3 times the mass of a sun produced by a supernova explosion.
A compressed core of an exploded star made up almost entirely of neutrons. Neutron stars have a strong gravitational field and some emit pulses of energy along their axis. These are known as pulsars.
the collapsed, extraordinarily dense, city-sized remnant of a high-mass star.
An object composed entirely of neutrons (neutral elementary particles). The object is little more than 10 miles across but has a mass somewhat larger than the Sun. It is produced when an intermediate-mass star dies in a supernova explosion.
An astrophysical object that arises at the end of the lifetime of certain massive stars. A typical neutron star has the mass of several Suns crammed into a ball with a diameter about that of a city.
Neutron stars are also the cores of the stars which have exploded as supernovae. This only happens when the core is between one and a half to three solar masses. They are normally 10 kilometers in diameter and consist mainly of sub-atomic particles called neutrons. These stars are so dense that a teaspoonful would weigh about a billion tonnes. Neutron stars are also observed as pulsars, they are so called because they rotate rapidly and emit two beams of radio waves, which are detected as short pulses.
extremely compact and dense star, formed during the final evolution of a massive star. The matter in a Neutron Star is not in the ordinary physical state that we all know: the pressure of the concentrated matter is so high that the atoms "break", and protons and electrons merge forming a sea filled with neutrons.
A collapsed star of extremely high density. Generally these objects have slightly more mass than the Sun, but are only about 10 km in radius. A neutron star has intense gravity, and may also have an intense magnetic field and fast rotational component.
the collapsed core of an intermediate- to high-mass star. The core is more than 1.4 solar masses but less than 3 solar masses and is about the diameter of a city. The pressure from degenerate neutrons prevents further collapse.
An extremely compact ball of neutrons formed from the central core of a collapsed star and having the mass of a star but a size smaller than the Earth's Moon.
An extremely compact ball of neutrons created from the central core of a star that collapsed under gravity during a supernova explosion. Neutron stars are extremely dense: they are only 10 kilometers or so in size, but have the mass of an average star (usually about 1.5 times more massive than our Sun). A neutron star that regularly emits pulses of radiation is known as a pulsar.
A small, highly dense star composed almost entirely of tightly packed neutrons. Radius about 10 kilometers.
Any of a class of extremely dense, compact stars thought to be composed primarily of neutrons; see pulsar
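Several of the definitions above quote densities on the order of 10^14 grams per cubic centimetre and escape velocities that are a sizable fraction of the speed of light. As a rough cross-check (not part of the original glossary), the short Python sketch below recomputes those figures for a canonical neutron star of 1.4 solar masses and a 10 km radius; the constants and the chosen mass and radius are assumptions for illustration only.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
M_SUN = 1.989e30     # solar mass in kg (assumed value)
C = 3.0e8            # speed of light in m/s (rounded)

mass = 1.4 * M_SUN   # a commonly quoted neutron-star mass
radius = 10.0e3      # 10 km expressed in metres

volume = (4.0 / 3.0) * math.pi * radius ** 3
density = mass / volume                           # kg per cubic metre
escape_velocity = math.sqrt(2.0 * G * mass / radius)

print "density: %.1e kg/m^3 (about %.1e g/cm^3)" % (density, density / 1000.0)
print "escape velocity: %.1e m/s (about %.2f of the speed of light)" % (
    escape_velocity, escape_velocity / C)

The result, a few times 10^14 g/cm^3 and roughly 0.6 of the speed of light, is consistent with the order-of-magnitude figures quoted in the definitions above.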
"Neutron Star," is a science fiction short story written by Larry Niven. It was originally published in the October 1966 issue of Worlds of If. It was later reprinted in Neutron Star, (New York: Ballantine, 1968, pp. 9-28, ISBN 0-345-29665-6), and Crashlander (New York: Ballantine, 1994, pp. 8-28, ISBN 0-345-38168-8). | http://metaglossary.com/meanings/470089/ | 13 |
108 | 2008/9 Schools Wikipedia Selection. Related subjects: Mathematics
Euclidean geometry is a mathematical system attributed to the Greek mathematician Euclid of Alexandria. Euclid's text Elements is the earliest known systematic discussion of geometry. It has been one of the most influential books in history, as much for its method as for its mathematical content. The method consists of assuming a small set of intuitively appealing axioms, and then proving many other propositions (theorems) from those axioms. Although many of Euclid's results had been stated by earlier Greek mathematicians, Euclid was the first to show how these propositions could be fit together into a comprehensive deductive and logical system.
The Elements begin with plane geometry, still taught in secondary school as the first axiomatic system and the first examples of formal proof. The Elements goes on to the solid geometry of three dimensions, and Euclidean geometry was subsequently extended to any finite number of dimensions. Much of the Elements states results of what is now called number theory, proved using geometrical methods.
For over two thousand years, the adjective "Euclidean" was unnecessary because no other sort of geometry had been conceived. Euclid's axioms seemed so intuitively obvious that any theorem proved from them was deemed true in an absolute sense. Today, however, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. It also is no longer taken for granted that Euclidean geometry describes physical space. An implication of Einstein's theory of general relativity is that Euclidean geometry is only a good approximation to the properties of physical space if the gravitational field is not too strong.
Euclidean geometry is an axiomatic system, in which all theorems ("true statements") are derived from a finite number of axioms. Near the beginning of the first book of the Elements, Euclid gives five postulates (axioms):
- Any two points can be joined by a straight line.
- Any straight line segment can be extended indefinitely in a straight line.
- Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as centre.
- All right angles are congruent.
- Parallel postulate. If two lines intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on that side if extended far enough.
These axioms invoke the following concepts: point, straight line segment and line, side of a line, circle with radius and centre, right angle, congruence, inner and right angles, sum. The following verbs appear: join, extend, draw, intersect. The circle described in postulate 3 is tacitly unique. Postulates 3 and 5 hold only for plane geometry; in three dimensions, postulate 3 defines a sphere.
Postulate 5 leads to the same geometry as the following statement, known as Playfair's axiom, which also holds only in the plane:
Through a point not on a given straight line, one and only one line can be drawn that never meets the given line.
Postulates 1, 2, 3, and 5 assert the existence and uniqueness of certain geometric figures, and these assertions are of a constructive nature: that is, we are not only told that certain things exist, but are also given methods for creating them with no more than a compass and an unmarked straightedge. In this sense, Euclidean geometry is more concrete than many modern axiomatic systems such as set theory, which often assert the existence of objects without saying how to construct them, or even assert the existence of objects that cannot be constructed within the theory.
Strictly speaking, the constructs of lines on paper etc are models of the objects defined within the formal system, rather than instances of those objects. For example a Euclidean straight line has no width, but any real drawn line will.
The Elements also include the following five "common notions":
- Things that equal the same thing also equal one another.
- If equals are added to equals, then the wholes are equal.
- If equals are subtracted from equals, then the remainders are equal.
- Things that coincide with one another equal one another.
- The whole is greater than the part.
Euclid also invoked other properties pertaining to magnitudes. 1 is the only part of the underlying logic that Euclid explicitly articulated. 2 and 3 are "arithmetical" principles; note that the meanings of "add" and "subtract" in this purely geometric context are taken as given. 1 through 4 operationally define equality, which can also be taken as part of the underlying logic or as an equivalence relation requiring, like "coincide," careful prior definition. 5 is a principle of mereology. "Whole", "part", and "remainder" beg for precise definitions.
In the 19th century, it was realized that Euclid's ten axioms and common notions do not suffice to prove all of the theorems stated in the Elements. For example, Euclid assumed implicitly that any line contains at least two points, but this assumption cannot be proved from the other axioms, and therefore needs to be an axiom itself. The very first geometric proof in the Elements, shown in the figure on the right, is that any line segment is part of a triangle; Euclid constructs this in the usual way, by drawing circles around both endpoints and taking their intersection as the third vertex. His axioms, however, do not guarantee that the circles actually intersect, because they are consistent with discrete, rather than continuous, space. Starting with Moritz Pasch in 1882, many improved axiomatic systems for geometry have been proposed, the best known being those of Hilbert, George Birkhoff, and Tarski.
To be fair to Euclid, the first formal logic capable of supporting his geometry was that of Frege's 1879 Begriffsschrift, little read until the 1950s. We now see that Euclidean geometry should be embedded in first-order logic with identity, a formal system first set out in Hilbert and Wilhelm Ackermann's 1928 Principles of Theoretical Logic. Formal mereology began only in 1916, with the work of Lesniewski and A. N. Whitehead. Tarski and his students did major work on the foundations of elementary geometry as recently as between 1959 and his death in 1983.
The parallel postulate
To the ancients, the parallel postulate seemed less obvious than the others; verifying it physically would require us to inspect two lines to check that they never intersected, even at some very distant point, and this inspection could potentially take an infinite amount of time. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the Elements: the first 28 propositions he presents are those that can be proved without it.
Many geometers tried in vain to prove the fifth postulate from the first four. By 1763 at least 28 different proofs had been published, but all were found to be incorrect. In fact the parallel postulate cannot be proved from the other four: this was shown in the 19th century by the construction of alternative ( non-Euclidean) systems of geometry where the other axioms are still true but the parallel postulate is replaced by a conflicting axiom. One distinguishing aspect of these systems is that the three angles of a triangle do not add to 180°: in hyperbolic geometry the sum of the three angles is always less than 180° and can approach zero, while in elliptic geometry it is greater than 180°. If the parallel postulate is dropped from the list of axioms without replacement, the result is the more general geometry called absolute geometry.
Treatment using analytic geometry
The development of analytic geometry provided an alternative method for formalizing geometry. In this approach, a point is represented by its Cartesian (x,y) coordinates, a line is represented by its equation, and so on. In the 20th century, this fit into David Hilbert's program of reducing all of mathematics to arithmetic, and then proving the consistency of arithmetic using finitistic reasoning. In Euclid's original approach, the Pythagorean theorem follows from Euclid's axioms. In the Cartesian approach, the axioms are the axioms of algebra, and the equation expressing the Pythagorean theorem is then a definition of one of the terms in Euclid's axioms, which are now considered to be theorems. The equation

|PQ| = sqrt( (p - r)^2 + (q - s)^2 )

defining the distance between two points P = (p,q) and Q = (r,s) is then known as the Euclidean metric, and other metrics define non-Euclidean geometries.
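A small Python sketch (not part of the original article) shows the Euclidean metric in use; the function name and the sample points are invented for illustration.

import math

def euclidean_distance(p, q):
    # Distance between P = (p1, p2) and Q = (q1, q2) under the Euclidean metric.
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

print euclidean_distance((0, 0), (3, 4))   # 5.0, the familiar 3-4-5 right triangle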
As a description of physical reality
Euclid believed that his axioms were self-evident statements about physical reality.
This led to deep philosophical difficulties in reconciling the status of knowledge from observation as opposed to knowledge gained by the action of thought and reasoning. A major investigation of this area was conducted by Immanuel Kant in The Critique of Pure Reason.
However, Einstein's theory of general relativity shows that the true geometry of spacetime is non-Euclidean geometry. For example, if a triangle is constructed out of three rays of light, then in general the interior angles do not add up to 180 degrees due to gravity. A relatively weak gravitational field, such as the Earth's or the sun's, is represented by a metric that is approximately, but not exactly, Euclidean. Until the 20th century, there was no technology capable of detecting the deviations from Euclidean geometry, but Einstein predicted that such deviations would exist. They were later verified by observations such as the observation of the slight bending of starlight by the Sun during a solar eclipse in 1919, and non-Euclidean geometry is now, for example, an integral part of the software that runs the GPS system. It is possible to object to the non-Euclidean interpretation of general relativity on the grounds that light rays might be improper physical models of Euclid's lines, or that relativity could be rephrased so as to avoid the geometrical interpretations. However, one of the consequences of Einstein's theory is that there is no possible physical test that can do any better than a beam of light as a model of geometry. Thus, the only logical possibilities are to accept non-Euclidean geometry as physically real, or to reject the entire notion of physical tests of the axioms of geometry, which can then be imagined as a formal system without any intrinsic real-world meaning.
Because of the incompatibility of the Standard Model with general relativity, and because of some recent empirical evidence against the former, both theories are now under increased scrutiny, and many theories have been proposed to replace the former and, in many cases, the latter as well. (GUTs are the only example of post-Standard Model theories that do not tackle general relativity.) The disagreements between the two theories come from their claims about space-time, and it is now accepted that physical geometry must describe space-time rather than merely space. While Euclidean geometry, the Standard Model and general relativity are all compatible with any number of spatial dimensions and any specification as to which of these if any are compactified (see string theory), and while all bar Euclidean geometry (which does not distinguish space from time) insist on exactly one temporal dimension, proposed alternatives, none of which are yet part of scientific consensus, differ significantly in their predictions or lack thereof as to these details of space-time. The disagreements between the conventional physical theories concern whether space-time is Euclidean (since quantum field theory in the standard model is built on the assumption that it is) and on whether it is quantized. Few if any proposed alternatives deny that space-time is quantized, with the quanta of length and time being respectively the Planck length and the Planck time. However, which geometry to use - Euclidean, Riemannian, de Sitter, anti-de Sitter and some others - is a major point of demarcation between them. Many physicists expect some Euclidean string theory to eventually become the Theory Of Everything, but their view is by no means unanimous, and in any case the future of this issue is unpredictable. Regarding how if at all Euclidean geometry will be involved in future physics, what is uncontroversial is that the definition of straight lines will still be in terms of the path in a vacuum of electromagnetic radiation (including light) until gravity is explained with mathematical consistency in terms of a phenomenon other than space-time curvature, and that the test of geometrical postulates (Euclidean or otherwise) will lie in studying how these paths are affected by phenomena. For now, gravity is the only known relevant phenomenon, and its effect is uncontroversial (see gravitational lensing).
Conic sections and gravitational theory
Apollonius and other Ancient Greek geometers made an extensive study of the conic sections — curves created by intersecting a cone and a plane. The (nondegenerate) ones are the ellipse, the parabola and the hyperbola, distinguished by having zero, one, or two intersections with infinity. This turned out to facilitate the work of Galileo, Kepler and Newton in the 17th Century, as these curves accurately modeled the movement of bodies under the influence of gravity. Using Newton's law of universal gravitation, the orbit of a comet around the Sun is
- an ellipse, if it is moving too slowly for its position (below escape velocity), in which case it will eventually return;
- a parabola, if it is moving with exact escape velocity (unlikely), and will never return because the curve reaches to infinity; or
- a hyperbola, if it is moving fast enough (above escape velocity), and likewise will never return.
In each case the Sun will be at one focus of the conic, and the motion will sweep out equal areas in equal times.
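A rough numerical sketch of this three-way classification (not part of the original article) compares a body's speed with the local escape velocity from the Sun; the constants and sample values are assumptions for illustration.

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
M_SUN = 1.989e30   # solar mass in kg (assumed value)

def orbit_type(speed, distance):
    # Classify the conic traced by a body moving at `speed` (m/s)
    # when it is `distance` (m) from the Sun.
    v_escape = math.sqrt(2.0 * G * M_SUN / distance)
    if speed < v_escape:
        return "ellipse (bound orbit, the body returns)"
    elif speed == v_escape:
        return "parabola (exactly escape velocity)"
    else:
        return "hyperbola (unbound, the body never returns)"

one_au = 1.496e11   # Earth-Sun distance in metres
print orbit_type(30000.0, one_au)   # well below escape velocity at 1 AU
print orbit_type(50000.0, one_au)   # above escape velocity at 1 AU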
Galileo experimented with objects falling small distances at the surface of the Earth, and empirically determined that the distance travelled was proportional to the square of the time. Given his timing and measuring apparatus, this was an excellent approximation. Over such small distances that the acceleration of gravity can be considered constant, and ignoring the effects of air (as on a falling feather) and the rotation of the Earth, the trajectory of a projectile will be a parabolic path.
Later calculations of these paths for bodies moving under gravity would be performed using the techniques of analytical geometry (using coordinates and algebra) and differential calculus, which provide straightforward proofs. Of course these techniques had not been invented at the time that Galileo investigated the movement of falling bodies. Once he found that bodies fall to the earth with constant acceleration (within the accuracy of his methods), he proved that projectiles will move in a parabolic path using the procedures of Euclidean geometry.
Similarly, Newton used quasi–Euclidean proofs to demonstrate the derivation of Keplerian orbital movements from his laws of motion and gravitation.
Centuries later, one of the first experimental measurements to support Einstein's general theory of relativity, which postulated a non-Euclidean geometry for space, was the orbit of the planet Mercury. Kepler described the orbit as a perfect ellipse. Newtonian theory predicted that the gravitational influence of other bodies would give a more complicated orbit. But eventually all such Newtonian corrections fell short of experimental results; a small perturbation remained. Einstein postulated that the bending of space would precisely account for that perturbation.
Euclidean geometry is a first-order theory. That is, it allows statements such as those that begin as "for all triangles ...", but it is incapable of forming statements such as "for all sets of triangles ...". Statements of the latter type are deemed to be outside the scope of the theory.
We owe much of our present understanding of the logical and metamathematical properties of Euclidean geometry to the work of Alfred Tarski and his students, beginning in the 1920s. Tarski proved his axiomatic formulation of Euclidean geometry to be complete in a certain sense: there is an algorithm which, for every proposition, can show it to be either true or false. Gödel's incompleteness theorems showed the futility of Hilbert's program of proving the consistency of all of mathematics using finitistic reasoning. Tarski's findings do not violate Gödel's theorem, because Euclidean geometry cannot describe a sufficient amount of arithmetic for the theorem to apply.
Although complete in the formal sense used in modern logic, there are things that Euclidean geometry cannot accomplish. For example, the problem of trisecting an angle with a compass and straightedge is one that naturally occurs within the theory, since the axioms refer to constructive operations that can be carried out with those tools. However, centuries of efforts failed to find a solution to this problem, until Pierre Wantzel published a proof in 1837 that such a construction was impossible.
Absolute geometry, first identified by Bolyai, is Euclidean geometry weakened by omission of the fifth postulate, that parallel lines do not meet. Of strength intermediate between absolute geometry and Euclidean are geometries derived from Euclid's by alterations of the parallel postulate that can be shown to be consistent by exhibiting models of them. For example, geometry on the surface of a sphere is a model of elliptical geometry. Another weakening of Euclidean geometry is affine geometry, first identified by Euler, which retains the fifth postulate unmodified while weakening postulates three and four in a way that eliminates the notions of angle (whence right triangles become meaningless) and of equality of length of line segments in general (whence circles become meaningless) while retaining the notions of parallelism as an equivalence relation between lines, and equality of length of parallel line segments (so line segments continue to have a midpoint).
- Ceva's theorem
- Heron's formula
- Nine-point circle
- Pythagorean theorem
- Tartaglia's formula
- Menelaus's theorem
- Angle bisector theorem | http://schools-wikipedia.org/wp/e/Euclidean_geometry.htm | 13 |
81 | In the case of a unicode object, we mean a sequence of any of the millions of Unicode characters.
We’ll more fully define string in What Does Python mean by “String?”. We’ll show the syntax for strings in Writing a String in Python and the factory functions that create strings in String Factory Functions.
We’ll look at the standard sequence operators and how they apply to strings in Operating on String Data. We’ll focus on a unique string operator, %, in % : The Message Formatting Operator. We’ll look at some built-in functions in Built-in Functions for Strings. We’ll cover the comparison operators in Comparing Two Strings – Alphabetical Order. There are numerous string methods that we’ll look at in Methods Strings Perform.
There is a string module, but it isn’t heavily used. We’ll look at it briefly in Modules That Help Work With Strings. Part 8 of the Python Library Reference [PythonLib] contains 11 modules that work with strings; we won’t dig into these deeply. We’ll return to the most important string module in Text Processing and Pattern Matching : The re Module.
We’ll look at some common patterns of string processing in Some Common Processing Patterns.
A string is an immutable sequence of characters. Let’s look at this definition in detail.
Here’s a depiction of a string of 10 characters. The Python value is "syncopated". Each character has a position that identifies the character in the string.
We get string objects from external devices like the keyboard, files or the network. We present strings to users either as files or on the GUI display. The print statement converts data to a string before showing it to the user. This means that printing a number really involves converting the number to a string of digits before printing the string of digit characters.
Often, our program will need to examine input strings to be sure they are valid. We may be checking a string to see if it is a legal name for a day of the week. Or, we may do a more complex examination to confirm that it is a valid time. There are a number of validations we may have to perform.
Our computations may involve numbers derived from input strings. Consequently, we may have to convert input strings to numbers or convert numbers to strings for presentation.
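A minimal sketch of that kind of validation and conversion, assuming a day-of-week check and a numeric temperature; the variable names and sample values are invented, and real input would normally come from raw_input() or a file.

valid_days = ("monday", "tuesday", "wednesday", "thursday",
              "friday", "saturday", "sunday")

day_text = "  Friday "            # pretend this came from the keyboard
if day_text.strip().lower() in valid_days:
    print "valid day:", day_text.strip()
else:
    print "not a day name:", day_text

temp_text = "37.2"                # a numeric string from input
temperature = float(temp_text)    # convert before doing arithmetic
print "twice that is", temperature * 2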
We looked at strings quickly in Strings – Anything Not A Number. A String is a sequence of characters. We can create strings as literals or by using any number of factory functions.
When writing a string literal, we need to separate the characters that are in the string from the surrounding Python values. String literals are created by surrounding the characters with quotes or apostrophes. We call this surrounding punctuation quote characters, even though we can use apostrophes as well as quotes.
There are several variations on the quote characters that we use to define string literals.
Single-quote. A single-quoted string uses either the quote (") or apostrophe ( ' ). A basic string must be completed on a single line. Both of these examples are essentially the same string.
Triple-quote. Multi-line strings can be enclosed in triple quotes or triple apostrophes. A multi-line string continues on until the matching triple-quote or triple-apostrophe.
Here some examples of creating strings.
a= "consultive"
apos= "Don't turn around."
quote= '"Stop," he said.'
doc_1= """fastexp(n,p) -> integer

Raises n to the p power, where p is a positive integer.

:param n: a number
:param p: an integer power
"""
novel= '''"Just don't shoot," Larry said.'''
A simple string.
A string using ". It has an ' inside it.
A string using '. It has two " inside it.
This is a six-line string.
Use repr(doc_1) to see how many lines it has. Better, use doc_1.splitlines().
This is a one-line string with both " and ' inside it.
Non-Printing Characters – Really! [How can it be a character and not have a printed representation?]
ASCII has a few dozen characters that are intended to control devices or adjust spacing on a printed document.
There are a few commonly-used non-printing characters: mostly tab and newline. One of the most common escapes is \n which represents the non-printing newline character that appears at the end of every line of a file in GNU/Linux or MacOS. Windows, often, will use a two character end-of-line sequence encoded as \r\c. Most of our editing tools quietly use either line-ending sequence.
These non-printing characters are created using escapes. A table of escapes is provided below. Normally, the Python compiler translates the escape into the appropriate non-printing character.
Here are a couple of literal strings with a \n character to encode a line break in the middle of the string.
'The first message.\nFollowed by another message.'
"postmarked forestland\nconfigures longitudes."
Python supports a broad selection of \ escapes. These are printed representations for unprintable ASCII characters. They’re called escapes because the \ is an escape from the usual meaning of the following character. We have very little use for most of these ASCII escapes. The newline (\n), backslash (\), apostrophe (') and quote (") escapes are handy to have.
Escapes Become Single Characters
We type two (or more) characters to create an escape, but Python compiles this into a single character in our program.
In the most common case, we type \n and Python translates this into a single ASCII character that doesn’t exist on our keyboard.
Since \ is always the first of two (or more) characters, what if we want a plain-old \ as the single resulting character? How do we stop this escape business?
The answer is we don’t. When we type \\, Python puts a single \ in our program. Okay, it’s clunky, but it’s a character that isn’t used all that often. The few times we need it, we can cope. Further, Python has a “raw” mode that permits us to bypass these escapes.
\'      Apostrophe (')
\a      Audible Signal; the ASCII code called BEL. Some OS's translate this to a screen flash or ignore it completely.
\b      Backspace (ASCII BS)
\f      Formfeed (ASCII FF). On a paper-based printer, this would move to the top of the next page.
\n      Linefeed (ASCII LF), also known as newline. This would move the paper up one line.
\r      Carriage Return (ASCII CR). On a paper-based printer, this returned the print carriage to the start of the line.
\t      Horizontal Tab (ASCII TAB)
\ooo    An ASCII character with the given octal value. The ooo is any octal number.
\xhh    An ASCII character with the given hexadecimal value. The x is required. The hh is any hex number.
We can also use a \ at the end of a line, which means that the end-of-line is ignored. The string continues on the next line, skipping over the line break. Here's an example of a single string that was so long we had to break it into multiple lines.

"A manuscript so long \
that it takes more than one \
line to finish it."
Why would we have this special dangling-backslash? Compare the previous example with the following.

"""A manuscript so long
that it takes more than one
line to finish it."""
What’s the difference? Enter them both into IDLE to see what Python displays. One string represent a single line of data, where the other string represents three lines of data. Since the \ escapes the meaning of the newline character, it vanishes from the string. This gives us a very fine degree of control over how our output looks.
Also note that adjacent strings are automatically put together to make a longer string. We won’t make much use of this, but it something that you may encounter when reading someone else’s programs.
"syn" "opti" "cal" is the same as "synoptical".
Unicode Strings. If a u or U is put in front of the string (for example, u"unicode"), this indicates a Unicode string. Without the u, it is an ASCII string. Unicode refers to the Universal Character Set; each character requires from 1 to 4 bytes of storage. ASCII is a single-byte character set; each of the 256 ASCII characters requires a single byte of storage. Unicode permits any character in any of the languages in common use around the world.
For the thousands of Unicode characters that are not on our computer keyboards, a special \uxxxx escape is provided. This requires the four digit Unicode character identification. For example, “日本” is made up of Unicode characters U+65e5 and U+672c. In Python, we write this string as u'\u65e5\u672c'.
Here’s an example that shows the internal representation and the easy-to-read output of this string. This will work nicely if you have an appropriate Unicode font installed on your computer. If this doesn’t work, you’ll need to do an operating system upgrade to get Unicode support.
>>> ch= u'\u65e5\u672c'
>>> ch
u'\u65e5\u672c'
>>> print ch
日本
There are a variety of Unicode encoding schemes. The most common encodings make some basic assumptions about the typical number of bytes for a character. For example, the UTF-16 codes are most efficient when most of characters actually use two bytes and there are relatively few exceptions. The UTF-8 codes work well on the internet where many of the protocols expect only the US ASCII characters. In the rare event that we need to control this, the codecs module provides mechanisms for encoding and decoding Unicode strings.
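As a small illustration (not from the original text), the same conversions can also be reached through the encode() and decode() string methods; the byte values below match the UTF-8 example shown later in this section.

>>> u'\u65e5\u672c'.encode('utf-8')
'\xe6\x97\xa5\xe6\x9c\xac'
>>> '\xe6\x97\xa5\xe6\x9c\xac'.decode('utf-8')
u'\u65e5\u672c'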
See http://www.unicode.org for more information.
Raw Strings. If an r or R is put in front of the string (for example, r"raw\nstring"), this indicates a raw string. This is a string where the backslash characters (\) are not interpreted by the Python compiler but are left as is. This is handy for Windows files names, which contain \. It is also handy for regular expressions that make heavy use of backslashes. We’ll look at these in Text Processing and Pattern Matching : The re Module.
"\n" is an escape that’s converted to a single unprintable newline character.
r"\n" is two characters, \ and n.
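A short example (added here; the path itself is made up) shows why raw strings help with Windows file names:

>>> path = r"C:\temp\notes.txt"
>>> print path
C:\temp\notes.txt
>>> len(r"C:\temp\notes.txt")
17
>>> len("C:\temp\notes.txt")    # \t and \n were interpreted as escapes
15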
There is some subtlety to the factory functions which create strings. We have two conflicting interpretations of “string representation” of an object. For simple data types, like numbers, the string version of the number is the sequence of characters. However, for more complex objects, we often want something “readable” that doesn’t contain every nuance of the object’s value. Consequently, we have two factory functions for strings: str() and repr().
You can make use of repr() to get a detailed view of a specific sequence to help you in debugging. This can, for example, reveal non-printing characters in a character string.
>>> a= str(355.0/113.0)
>>> a
'3.14159292035'
>>> hex(48813)
'0xbead'
The repr() function also converts an object to a string. However, repr() creates a string suitable for use as Python source code. For simple numeric types, it’s not terribly interesting. For more complex, types, however, it reveals details of their structure.
In Python 2, the repr() function can also be invoked using the backtick (`), also called accent grave.
This ` syntax is not used much and will be removed from Python 3.
Here are several version of a very long string, showing a number of representations.
>>> a="""a very
... long symbolizer
... on multiple lines"""
>>> repr(a)
"'a very\\nlong symbolizer\\non multiple lines'"
>>> a
'a very\nlong symbolizer\non multiple lines'
>>> print a
a very
long symbolizer
on multiple lines
The unicode() function converts an encoded str to an internal Unicode String. There are a number of ways of encoding a Unicode string so that it can be placed into email or a database. The default encoding is called 'UTF-8' with 'strict' error handling. Choices for errors are 'strict', 'replace' and 'ignore'. Strict raises an exception for unrecognized characters, replace substitutes the Unicode replacement character (\uFFFD) and ignore skips over invalid characters. The codecs and unicodedata modules provide more functions for working with explicit Unicode conversions.
>>> unicode('\xe6\x97\xa5\xe6\x9c\xac','utf-8')
u'\u65e5\u672c'
The above example shows the UTF-8 encoding for 日本 as a string of bytes and as a Python Unicode string. The Unicode string character numbers (u65e5 and u672c) are easier to read as a Unicode string than they are in the UTF-8 encoding.
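A brief sketch of the three error-handling choices (added for illustration); the byte '\xff' is not valid UTF-8, which is what triggers the different behaviors, and the exact wording of the error message can vary between Python versions.

>>> unicode('abc\xff', 'utf-8', 'replace')
u'abc\ufffd'
>>> unicode('abc\xff', 'utf-8', 'ignore')
u'abc'
>>> unicode('abc\xff', 'utf-8', 'strict')
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 3: ...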
There are a number of operations that apply to string objects. Since strings (even a string of digits) isn’t a number, these operations do simple manipulations on the sequence of characters.
If you need to do arithmetic operations on strings, you’ll need to convert the string to a number using one of the number factory functions int(), float(), long() or complex(). See Functions are Factories (really!) for more information on these functions. Once you have a proper number, you can do arithmetic on it and then convert the result back into a string using str(). We’ll return to this later. For now, we’ll focus on manipulating strings.
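A minimal sketch of that convert, compute, and convert-back round trip (the values are invented):

>>> quantity = int("12")
>>> price = float("2.50")
>>> total = quantity * price
>>> str(total)
'30.0'
>>> "%.2f" % total
'30.00'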
There are three operations (+, *, [ ]) that work with strings and a unique operation % that can be performed only with strings. The % is so sophisticated, that we’ll devote a separate section to just that operator.
The + Operator. The + operator creates a new string as the concatenation of two strings. A resulting string is created by gluing the two argument strings together.
>>> "hi " + 'mom'
'hi mom'
The * Operator. The * operator between strings and numbers (number * string or string * number) creates a new string that is a number of repetitions of the argument string.
>>> print 2*"way " + "cool!"
way way cool!
The [ ] operator. The [ ] operator can extract a single character or a substring from the string. There are two forms for picking items or slices from a string.
The single item operation is string [ index ]. Items are numbered from 0 to len(string)-1. Items are also numbered in reverse from -len(string) to -1.
The slice operation is string [ start : end ]. Characters from start to end-1 are chosen to create a new string as a slice of the original string; there will be end - start characters in the resulting string. If start is omitted it is the beginning of the string (position 0), if end is omitted it is the end of the string (position -1).
For more information on how the numbering works for the [ ] operator, see Numbering from Zero.
The meaning of [ and ]

Note that the [ and ] characters are part of the syntax. When you read other Python documents, you will see [ and ] used in two senses: as syntax and also to mark optional parts of the syntax.

In the statement summaries in this book, we use 〈 and 〉 for optional elements in an effort to reduce the confusion that can be caused by having two meanings for the [ and ] characters.

However, for function and method summaries, the publishing software uses [ and ], which look enough like the syntax characters [ and ] to lead to potential confusion.
Here are some examples of picking out individual items or creating a slice composed of several items.
>>> s="artichokes"
>>> s[2]
't'
>>> s[:5]
'artic'
>>> s[5:]
'hokes'
>>> s[2:3]
't'
>>> s[2:2]
''
The last example, s[2:2], shows an empty slice. Since the slice is from position 2 to position 2-1, there can’t be any characters in that range; it’s a kind of contradiction to ask for characters 2 through 1. Python politely returns an empty string, which is a sensible response to the expression.
Recall that string positions are also numbered from right to left using negative numbers. s[-2] is the next-to-last character. We can, then, say things like the following to work from the right-hand side instead of the left-hand side.
>>> s="artichokes"
>>> s[-2]
'e'
>>> s[-3:-1]
'ke'
>>> s[-1:1]
''
The % operator is used to format a message. The argument values are a template string and a tuple of individual values. The operator creates a new string by folding together two elements:

- the template string, which mixes literal text with conversion specifications, and
- the tuple of values, which supplies one value for each conversion specification.
First we’ll look at a quick example, then we’ll look at the real processing rules behind this operator. This example has a template string and two values that are used to create a resulting string.
>>> "Today's temp is %dC (%dF)" % (3, 37.39) "Today's temp is 3C (37F)"
The template string is "Today's temp is %dC (%dF)". The two values are (3, 37.39). You can see that the values were used to replace the %d conversion specification.
Our template string, then, was really in five parts:

- the literal text "Today's temp is ",
- the first %d conversion specification,
- the literal text "C (",
- the second %d conversion specification, and
- the literal text ")".
Rules of the Game. There are two important rules for working with formatting strings.
The first rule of the % conversion is that our template string is a mixture of literal text and conversion specifications. The conversion specifications begin with % and end with a letter. They’re generally pretty short, and the % makes them stand out from the literal text. Everything outside the % conversions are just transcribed into the message.
The second rule is that each % conversion specification takes another item from the tuple that has the values to be inserted into the message. The first conversion uses the first value of the tuple, the second conversion uses the second value from the tuple. If the number of conversion specifications and items don’t match exactly, you get an error and your program stops running.
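For example (added here as an illustration; the exact wording of the error messages can vary between Python versions):

>>> "%d and %d" % (3,)
Traceback (most recent call last):
  ...
TypeError: not enough arguments for format string
>>> "%d" % (3, 5)
Traceback (most recent call last):
  ...
TypeError: not all arguments converted during string formatting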
What if we want to have a % in our output? What if we were doing something like "The return is 12.5%"? To include a single % in the resulting string, we use %% in the template.
Conversions: Five Things to Control. There are a number of things we need to control when converting numbers to strings.
It is important to note that these conversion specifications match the C programming language printf() function specifications. Since Python is not C, there are some nuances of C-language conversions which don’t make much sense for Python programs. The specification rules are still here, however, to make it easy to convert a C program into Python.
To provide tremendous flexibility, each conversion specification has the following elements. In this syntax summary, note that the 〈 and 〉's indicate that all of the specification elements except the final code letter are optional.
% 〈 flags 〉 〈 width 〉 〈 . precision 〉 code
Here are some common examples of these conversion specifications. We’ll look at each part of the conversion specification separately. Then we’ll reassemble the entire message template from literal text and conversion specifications.
Here are some examples. We’ll look at these quickly before digging into details.
>>> "%d" % ( 12.345, )
'12'
>>> "%.2f" % ( 12.345, )
'12.35'
>>> "%-12s" % ( 12.345, )
'12.345      '
>>> "%#x" % ( 12.345, )
'0xc'
The %d conversion is appropriate for decimal integers, so the floating-point number is converted to an integer when it is displayed. The %.2f conversion is for floating-point numbers, and rounds to the number of positions (2 in this case). The %-12s conversion is appropriate for strings, so the floating-point number is turned into a string, then left-justified in a 12-position string. The %#x conversion shows the hex value of an integer, so the floating-point number is converted to the integer 12, then displayed in Python hexadecimal notation (0xc)
Flags. The optional flags can have any combination of the following values:

-        Left-justify the converted value within the field width.
0        Pad the field with leading zeroes instead of spaces.
+        Always show a sign, even for positive numbers.
(space)  Leave a space in front of a positive number, so that positive and negative values line up.
#        Use the "alternate form": for example, a leading 0x for %x hexadecimal conversions or a leading 0 for %o octal conversions.
Width. The width specifies the total number of characters for the field, including signs and decimal points. If omitted, the width is just big enough to hold the output number.
In order to fill up the width, spaces (or zeros) will be added to the number. The flags of - or 0 determine precisely how the spaces are allocated or if zeros should be used.
Look at the following variations on %d conversion.
>>> "%d" % 12
'12'
>>> "%5d" % 12
'   12'
>>> "%-5d" % 12
'12   '
>>> "%05d" % 12
'00012'
If a * is used for the width, an item from the tuple of values is used as the width of the field. "%*i" % ( 3, d1 ) uses the value 3 from the tuple as the field width and d1 as the value to convert to a string. This makes a single template string somewhat more flexible.
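A quick interpreter example (added for illustration) of the * width:

>>> "%*i" % (3, 42)
' 42'
>>> "%*i" % (6, 42)
'    42'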
Precision. The precision (which must be preceded by a .) is the number of digits to the right of the decimal point. For string conversions, the precision is the maximum number of characters to be printed, longer strings will be truncated.
This is how we can control the run-on decimal expansion problem. We use conversions like "%.3f" % aNumber to convert the number to a string with the desired number of decimal places.
>>> 2.3
2.2999999999999998
>>> "%.3f" % 2.3
'2.300'
If a * is used for the precision, an item from the tuple of values is used as the precision of the conversion. A * can be used for width also.
For example, "%*.*f" % ( 6, 2, avg ) uses the value 6 from the tuple as the field width, the value 2 from the tuple as the precision and avg as the value. This makes a single template string somewhat more flexible.
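For example (added for illustration, with an invented value standing in for avg):

>>> avg = 3.14159
>>> "%*.*f" % (6, 2, avg)
'  3.14'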
Long and Short Indicators. The standard conversion rules also permit a long or short indicator: l or h. These are tolerated by Python, but have no effect. They reflect internal representation considerations for C programming, not external formatting of the data. For programs that were converted from C, this may show up in a template string, and will be gracefully ignored by Python.
Conversion Code. The one-letter code specifies the conversion to perform. The codes are listed below.
%       Creates a single %. Use %% to put a single % in the resulting string.
c       Convert a single character string. Also converts an integer to the corresponding ASCII character.
s       Apply the str() function and include that string.
r       Apply the repr() function and include that string.
i or d  Convert a number to an integer and include the string representation of that integer.
u       A numeric conversion that is here for compatibility with legacy C programs.
o       Use the oct() function and include that octal string.
x or X  Use the hex() function and include that hexadecimal string. The %x version produces lowercase letters; the %X version produces uppercase letters.
e or E  Convert the number to a float and use scientific notation. The %e version produces ±d.ddde±xx; the %E version produces ±d.dddE±xx, for example 6.02E23.
f or F  Convert the number to a float and include the standard string representation of that number.
g or G  "Generic" floating-point format. Use %e or %E for very small or very large exponents, otherwise use an %f conversion.
Examples. Here are some examples of messages with more complex templates.
"%i: %i win, %i loss, %6.3f" % (count,win,loss,float(win)/loss)
This example does four conversions: three simple integer and one floating-point that provides a width of 6 and 3 digits of precision. -0.000 is the expected format. The rest of the string is literally included in the output.
"Spin %3i: %2i, %s" % (spin,number,color)
This example does three conversions: one number is converted into a field with a width of 3, another converted with a width of 2, and a string is converted, using as much space as the string requires.
"Win rate: %.4f%%" % ( win/float(spins) )
This example has one conversion, but includes a literal % , which is created by using %% in the template.
The following built-in functions are relevant to working with strings and characters.
For character code manipulation, there are three related functions: chr(), ord() and unichr(). chr() returns the ASCII character that belongs to an ASCII code number. unichr() returns the Unicode character that belongs to a Unicode number. ord() transforms an ASCII character to its ASCII code number, or transforms a Unicode character to its Unicode number.
len(object): Return the number of items of a set, sequence or mapping.

>>> len("restudying")
10
>>> len(r"\n")
2
>>> len("\n")
1

Note that a raw string (r"\n") doesn't use escapes; this is two characters. An ordinary string ("\n") interprets the escapes; this is one unprintable character.

chr(i): Return a string of one character with ordinal i; 0 <= i < 256.
This is the standard US ASCII conversion, chr(65) == 'A'.
ord(c): Return the integer ordinal of a one character string. For an ordinary character, this will be the US ASCII code. ord('A') == 65.
For a Unicode character this will be the Unicode number. ord(u'\u65e5') == 26085.
unichr(i): Return a Unicode string of one character with ordinal i. This is the Unicode mapping, defined at http://www.unicode.org/.
>>> unichr(26085)
u'\u65e5'
>>> print unichr(26085)
日
>>> ord(u'\u65e5')
26085
Note that min() and max() also apply to strings. The min() function returns the character closest to the front of the alphabet. The max() function returns the character closest to the back of the alphabet.
>>> max('restudying')
'y'
>>> min('restudying')
'd'
The standard comparisons ( <, <=, >, >=, ==, !=) apply to strings. These comparisons use character-by-character comparison rules for ASCII or Unicode. This will keep things in the expected alphabetical order.
The rules for alphabetical order include a few nuances that may cause some confusion for newbies.
Here are some examples.
>>> 'hello' < 'world'
True
>>> 'inordinate' > 'in'
True
>>> '1' < '11'
True
>>> '2' < '11'
False
These rules for alphabetical order are much simpler than, for example, the American Library Association Filing Rules. Those rules are quite complex and have a number of exceptions and special cases.
There are two additional string comparisons: in and not in. These check to see if a single character string occurs in a longer string. The in operator returns True when the character is found in the string, False if the character is not found. The not in operator returns True if the character is not found in the string.
>>> "i" in 'microquake' True >>> "i" in 'formulates' False
There are three statements that are associated with strings: the various kinds of assignment statements, the for statement (which deals with sequences of all kinds), and the print statement.
The Assignment Statements. The basic assignment statement applies a new variable name to a string object. This is the expected meaning of assignment.
The augmented assignments – += and *= – work as expected. a += 'more data' is the same as a = a + 'more data'. Recall that a string is immutable; something like a += 'more data' must create a new string from the old value of a and the string 'more data'.
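For example, a small session of our own showing the effect:

>>> a = "some "
>>> a += "more data"
>>> a
'some more data'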
The for Statement. Since a string is a sequence, the for statement will visit each character of the string.
for c in "lobstering": print c
The print Statement. The print statement must convert each expression to a string before writing the strings to the standard output file.
Generally, this is what we expect. Sometimes, however, this has odd features. For example, when we do print abs(-5), the argument is an integer and the result is an integer. This integer result is converted to the obvious string value and printed.
If we do print abs, what happens? We’re not applying the abs() function to an argument. We’re just converting the function to a string and printing it.
All Python objects have a string representation of some kind. Therefore, the print statement is capable of printing anything.
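A short session of our own showing both cases:

>>> print abs(-5)
5
>>> print abs
<built-in function abs>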
A string object has a number of method functions. These can be separated into three groups: transformations, accessors and parsers.
Transformations. The following transformation functions create a new string from an existing string.
Create a copy of the original string with only its first character capitalized.
"vestibular".capitalize() creates "Vestibular".
Create a copy of the original string centered in a new string of length width. Padding is done using spaces.
"subheading".center(15) creates ' subheading '. With explicit spaces, this is
Return a decoded version of the original string. The default encoding is the current default string encoding, usually 'ascii'. errors may be given to set a different error handling scheme; the default is 'strict', meaning that encoding errors raise a ValueError. Other possible values for errors are 'ignore' and 'replace'.
Section 4.9.2 of the Python library defines the various decodings available. One of the codings is called “base64”, which mashes complex strings of bytes into ordinary letters, suitable for transmission on the internet.
'c3RvY2thZGluZw=='.decode('base64') creates 'stockading'.
Return an encoded version of the original string. The default encoding is the current default string encoding, usually ‘ascii’. errors may be given to set a different error handling scheme; default is ‘strict’ meaning that encoding errors raise a ValueError. Other possible values for errors are ‘ignore’ and ‘replace’.
Section 4.9.2 of the Python library defines the various decodings available. We can use the Unicode UTF-16 code to make multi-byte Unicode characters.
'blathering'.encode('utf16') creates '\xff\xfeb\x00l\x00a\x00t\x00h\x00e\x00r\x00i\x00n\x00g\x00'.
Return a new string which is the concatenation of the original strings in the sequence. The separator between elements is the string object that does the join.
" and ".join( ["ships","shoes","sealing wax"] ) creates 'ships and shoes and sealing wax'.
Return a copy of the original string left justified in a string of length width. Padding is done using spaces on the right.
"reclasping".ljust(15) creates 'reclasping '. With more visible spaces, this is
Return a copy of the original string converted to lowercase.
"SuperLight".lower() creates 'superlight'.
Return a copy of the original string with leading whitespace removed. This is often used to clean up input.
" precasting \n".lstrip() creates 'precasting \n'.
Return a copy of the original string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced.
The most common use is "$HOME/some/place".replace("$HOME","e:/book") replaces the "$HOME" string to create a new string 'e:/book/some/place'.
Once in a while, we'll need to replace just the first occurrence of some target string, allowing us to do something like the following: 'e:/book/some/place'.replace( 'e', 'f', 1 ).
Return a copy of the original string right justified in a string of length width. Padding is done using spaces on the left.
"fulminates".rjust(15) creates ' fulminates'.
With more visible spaces, this is
Return a copy of the original string with trailing whitespace removed. This has an obvious symmetry with lstrip().
" precasting \n".rstrip() creates ' precasting'.
Return a copy of the original string with leading and trailing whitespace removed. This combines lstrip() and rstrip() into one handy package.
" precasting \n".strip() creates 'precasting'.
Return a titlecased version of the original string. Words start with uppercase characters, all remaining cased characters are lowercase.
For example, "hello world".title() creates 'Hello World'.
Accessors. The following methods provide information about a string.
Return the number of occurrences of substring sub in a string. If the optional arguments start and end are given, they are interpreted as if you had said string [ start : end ].
For example "hello world".count("l") is 3.
Return True if the string ends with the specified suffix, otherwise return False. With optional start, or end, the test is applied to string [ start : end ].
"pleonastic".endswith("tic") creates True.
Return the lowest index in the string where substring sub is found. If optional arguments start and end are given, then string [ start : end ] is searched. Return -1 on failure.
"rediscount".find("disc") returns 2; "postlaunch".find("not") returns -1.
The index() method is like find() but raises ValueError when the substring is not found.
See The Unexpected : The try and except statements for more information on processing exceptions.
Return True if the string starts with the specified prefix, otherwise return False. With optional start, or end, test string [ start : end ].
"E:/programming".startswith("E:") is True.
Parsers. The following methods create another kind of object, usually a sequence, from a string.
Return a list of the words in the string, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done. If sep is not specified, any whitespace string is a separator.
We can use this to do things like aList= "a,b,c,d".split(','). We’ll look at the resulting sequence object closely in Flexible Sequences : the list.
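For example, here is what the resulting lists look like (a session of our own):

>>> "a,b,c,d".split(',')
['a', 'b', 'c', 'd']
>>> "break into words".split()
['break', 'into', 'words']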
Return a list of the lines in the string, breaking at line boundaries. Line breaks are not included in the resulting list unless keepends is given and True. This method can help us process a file: a file can be looked at as if it were a giant string punctuated by \n characters.
We can break up a string into individual lines using statements like lines= "two lines\nof data".splitlines().
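For example (a session of our own):

>>> "two lines\nof data".splitlines()
['two lines', 'of data']
>>> "two lines\nof data".splitlines(True)
['two lines\n', 'of data']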
Here’s another example of using some of the string methods and slicing operations.
temp= raw_input("temperature: ")
if temp.isdigit():
    # the input was all digits; ask for the units separately
    unit= raw_input("units [C or F]: ")
else:
    # the units were attached to the number, for example "72F"
    unit= temp[-1:]
    temp= temp[:-1]
unit= unit.upper()
# c2f() and f2c() are temperature conversion functions assumed to be defined elsewhere
if unit.startswith("C"):
    print temp, c2f(float(temp))
elif unit.startswith("F"):
    print temp, f2c(float(temp))
else:
    print "Units must be C or F"
Perhaps the most useful string-related module is the re module. The name is short because it is used so often in so many Python programs. However, it is a little too advanced to cover here. We’ll talk about it in Text Processing and Pattern Matching : The re Module.
The module named string has a number of public module variables which define various subsets of the ASCII characters. These definitions serve as a central, formal repository for facts about the character set. Note that there are general definitions, applicable to Unicode character sets, different from the ASCII definitions.
string.letters: All letters; for many locale settings, this will be different from the ASCII letters.
string.lowercase: Lowercase letters; for many locale settings, this will be different from the ASCII letters.
string.printable: All printable characters in the character set.
string.punctuation: All punctuation in the character set. For ASCII, this is !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
string.whitespace: A collection of characters that cause spacing to happen. For ASCII this is \t\n\x0b\x0c\r⎵: Tab (HT), Newline (Line Feed, LF), Vertical Tab (VT), Form Feed (FF), Carriage Return (CR) and space.
You can use these for operations like the following. We often use these string classifiers to test input values we got from a user or read from a file. We use string.uppercase and string.digits in the examples below.
>>> import string
>>> a= "some input"
>>> a in string.uppercase
False
>>> n= "123-45"
>>> for character in n:
...     if character not in string.digits:
...         print "Invalid character", character
...
Invalid character -
There are a number of common design patterns for manipulating strings. These include adding characters to a string, removing characters from a string and breaking a string into two strings. In some languages, these operations involve some careful planning. In Python, these operations are relatively simple and (hopefully) obvious.
Adding Characters To A String. We add characters to a string by creating a new string that is the concatenation of the original strings. For example:
>>> a="lunch" >>> a=a+"meats" >>> a 'lunchmeats'
Some programmers who have extensive experience in other languages will ask if creating a new string from the original strings is the most efficient way to accomplish this. Or they suggest that it would be "simpler" to allow mutable strings for this kind of concatenation. The short answer is that Python's storage management makes this use of immutable strings the simplest and most efficient. We'll discuss this in some depth in Sequence FAQ's.
Removing Characters From A String. Sometimes we want to remove some characters from a string. Python encourages us to create a new string that is built from pieces of the original string. For example:
>>> s="black,thorn" >>> s = s[:5] + s[6:] >>> s 'blackthorn'
In this example, we dropped the sixth character (in position 5), the comma. Recall that the positions are numbered from zero. Positions 0, 1 and 2 are the first three characters, and position 5 is the sixth character; it falls between the slices s[:5] and s[6:] and is therefore left out of the new string.
In other languages, there are sophisticated methods to delete particular characters from a string. Again, Python makes this simpler by letting us create a new string from pieces of the old string.
Breaking a String at a Position. Often, we will break a string into two pieces around a punctuation mark. Python gives us a very handy way to do this.
>>> fn="nonprogrammerbook.rst"
>>> dot= fn.rfind('.')
>>> name= fn[:dot]
>>> ext= fn[dot:]
>>> name
'nonprogrammerbook'
>>> ext
'.rst'
We use the rfind() method to locate the right-most . in the file name. We can then break the string at this position. You can see Python's standard interpretation: the position returned by find() or rfind() is not included in the material to the left of that position.
Is Each Letter Unique?
Given a ten-letter word, is each letter unique? Further, do the letters occur in alphabetical order?
Let's say we have a 10-letter word in the variable w. We want to know if each letter occurs just once in the word. For example, "pathogenic" has each letter occurring just once. On the other hand, "pathologic" does not, because the letter o occurs twice.
To determine if each letter is unique, we’ll need to extract each letter from the word, and then use the count() method function to determine if that letter occurs just once in the word.
Write a loop which will examine each letter in a word to see if the count of occurrences is just one or more than one. If all counts are one, this is a ten-letter word with 10 unique letters.
Here’s a batch of words to use for testing: patchworks, patentable, paternally, pathfinder, pathogenic.
The alphabetical order test is more difficult. In this case, we need to be sure that each letter comes before the next letter in the alphabet. We're asking that w[0] <= w[1] <= w[2] <= ... <= w[9]. We can break this long chain of comparisons down to a shorter expression that we can evaluate in a loop: we compare w[i] <= w[i+1] at each position, examining each letter and its successor.
Write a loop to examine each character to determine if the letters of the word occur in alphabetical order. Words like "abhors" or "almost" have the letters in alphabetical order.
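Here is one possible sketch of both loops; the variable names (w, unique, in_order) are our own, not part of the exercise.

w = "pathogenic"

unique = True
for letter in w:
    if w.count(letter) != 1:
        # this letter occurs more than once
        unique = False
print w, "has all unique letters:", unique

in_order = True
for i in range(len(w)-1):
    if not (w[i] <= w[i+1]):
        # this letter comes after its successor in the alphabet
        in_order = False
print w, "is in alphabetical order:", in_order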
Check Amount Writing.
Translate a number into the English phrase.
This example algorithm fragment is only to get you started. This shows how to pick off the digits from the right end of a number and assemble a resulting string from the left end of the string.
Note that the right-most two digits have special names, requiring some additional cases above and beyond the simplistic loop shown below. For example, 291 is “two hundred ninety one”, where 29 is “twenty nine”. The word for “2” changes, depending on the context.
As a practical matter, you should analyze the number by taking off three digits at a time, the expression (number % 1000) does this. You would then format the three digit number with words like “million”, “thousand”, etc.
English Words For An Amount, n
Set tc = 0. This is the "tens counter" that shows what position we're examining.
Loop. While n > 0.
Get Right Digit. Set digit = n % 10, the remainder when n is divided by 10.
Make Phrase. Translate digit to a string from "zero" to "nine". Translate tc to a string from "" to "thousand". This is tricky because the "teens" are special, where the "hundreds" and "thousands" are pretty simple.
Assemble Result. Prepend digit string and tc string to the left end of the result string.
Next Digit. Set n = n // 10. Be sure to use the // integer division operator, or you'll get floating-point results.
Increment tc by 1.
Result. Return result as the English translation of n.
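A minimal sketch of the simplistic loop above, deliberately leaving the special teens and tens cases unfinished, exactly as the text warns; the function name english_amount and the two lookup lists are our own.

units = [ "zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine" ]
places = [ "", "ten", "hundred", "thousand" ]

def english_amount( n ):
    if n == 0:
        return "zero"
    tc = 0                              # the "tens counter"
    result = ""
    while n > 0:
        digit = n % 10                  # get the right digit
        phrase = units[digit] + " " + places[tc]
        result = phrase.strip() + " " + result    # prepend on the left
        n = n // 10                     # next digit; // avoids floats
        tc = tc + 1
    return result.strip()

print english_amount( 291 )   # simplistic output: two hundred nine ten one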
Roman Numerals.
This is similar to translating numbers to English. Instead we will translate them to Roman Numerals.
The algorithm is similar to Check Amount Writing (above). You will pick off successive digits, using % 10 and // 10 to gather the digits from right to left.
The rules for Roman Numerals involve using four pairs of symbols for ones and five, tens and fifties, hundreds and five hundreds. An additional symbol for thousands covers all the relevant bases.
When a number is followed by the same or smaller number, it means addition. “II” is two 1’s = 2. “VI” is 5 + 1 = 6.
When one number is followed by a larger number, it means subtraction. “IX” is 1 before 10 = 9. “IIX” isn’t allowed, this would be “VIII”.
For numbers from 1 to 9, the symbols are "I" and "V", and the coding works like this: I, II, III, IV, V, VI, VII, VIII, IX.
The same rules work for numbers from 10 to 90, using “X” and “L”. For numbers from 100 to 900, using the symbols “C” and “D”. For numbers between 1000 and 4000, using “M”.
Here are some examples. 1994 = MCMXCIV, 1956 = MCMLVI, 3888= MMMDCCCLXXXVIII
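One possible sketch of the digit-at-a-time approach for numbers from 1 to 3999; the pair table and the names roman() and roman_digit() are our own, not part of the exercise.

pairs = [ ("I","V"), ("X","L"), ("C","D"), ("M","") ]   # ones/fives symbol per place

def roman_digit( digit, one, five, ten ):
    # encode one decimal digit with the I/V/X-style pattern
    if digit <= 3:
        return one*digit
    elif digit == 4:
        return one + five
    elif digit <= 8:
        return five + one*(digit-5)
    else:
        return one + ten

def roman( number ):
    result = ""
    place = 0
    while number > 0:
        digit = number % 10
        one, five = pairs[place]
        if place+1 < len(pairs):
            ten = pairs[place+1][0]
        else:
            ten = ""
        result = roman_digit( digit, one, five, ten ) + result
        number = number // 10
        place = place + 1
    return result

print roman( 1994 ), roman( 1956 ), roman( 3888 )
# MCMXCIV MCMLVI MMMDCCCLXXXVIII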
Analyze the following block of text. You'll want to break it into words on whitespace boundaries. Then you'll need to discard all punctuation from before, after or within a word.
What's left will be a sequence of words composed of ASCII letters. Compute the length of each word, and produce the sequence of digits. (No word is 10 or more letters long.)
Compare the sequence of word lengths with the value of math.pi.
Poe, E.
Near a Raven

Midnights so dreary, tired and weary,
Silently pondering volumes extolling all by-now obsolete lore.
During my rather long nap - the weirdest tap!
An ominous vibrating sound disturbing my chamber's antedoor.
"This", I whispered quietly, "I ignore".
This is based on http://www.cadaeic.net/cadenza.htm.
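A rough sketch of the analysis, run here on just the opening lines of the text; the variable names are our own.

import math
import string

text = """Poe, E.
Near a Raven
Midnights so dreary, tired and weary,"""

digits = ""
for word in text.split():
    # discard punctuation before, after or within the word
    cleaned = "".join( [ c for c in word if c in string.letters ] )
    if cleaned:
        digits = digits + str( len(cleaned) )

print digits            # 31415926535
print math.pi           # 3.14159265359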
Illinois played a significant role in the Lewis and Clark Expedition because of the state of communications technology in 1803.
Originally, Lewis had planned to assemble the Expedition at St. Louis. The Louisiana Purchase agreement had been signed with Napoleon in May 1803 (antedated to April 30), and the treaty ratified by the Senate in October. Actual transfer of control, however, had to await the passing of word through channels from the French Emperor to the Spanish commandant at St. Louis, who still was responsible for the city.
In December 1803 Lewis requested permission for the Expedition to establish winter quarters on the west bank of the Mississippi. Despite the pending transfer, the Spanish commandant at St. Louis refused, for he had no official notice of the Treaty. Being unwilling to force the issue, Lewis crossed to the American side of the Mississippi and established camp near the mouth of the Wood River.
A 55-foot keelboat had been made to order in Pittsburgh, floated down the Ohio to its confluence with the Mississippi, and propelled up to St. Louis. It was then moved to the Wood River site.
Once established at Wood River, Lewis and Clark set about completing their preparations for the journey in the spring. Lewis recruited men in the surrounding area to bring the size of the party up to 43 men, each suited to the rugged life he would lead during the venture, and each with useful skills. Military discipline was established. Two additional boats were constructed: pirogues, or dugout canoes, each made from a single log 40-50 feet long and equipped with 6-7 oars. The extensive supplies required to sustain the men and to be used in dealing with the Indians were received, sorted, and stored.
Information was sought about the country and tribes they would encounter. The first five months of travel would take the Expedition to the villages of the Mandan Indians; this area was known, since other explorers as well as traders had traversed it. But from there on the Expedition would push into country that was only legendary and whose vastness was unknown.
The party left its Wood River camp on May 14, 1804.
Nothing marks the Wood River Camp area today but a Lewis and Clark historic marker in a small State park nearby. The original campsite has been washed away with the shifting of the rivers during the elapsed 160 years. Plans now are under way to enlarge the State park.
Illinois maintains another site with a bearing on the Lewis and Clark Expedition: Cahokia Mounds State Park, east of East St. Louis. Lewis made trips during the winter of 1803 to Cahokia and also to Kaskaskia and was able to recruit soldiers there for his Expedition.
It was near St. Louis that the Expedition members spent the winter of 1803-04, making the necessary preparations for their journey which began on May 14, 1804. It lasted a little over two years, and they returned to St. Louis on September 23, 1806. Both explorers remained there after their trip, and Clark's descendants still reside there.
Some of the sites associated with the Lewis and Clark Expedition, such as St. Charles and the Lewis and Clark State Park, can be visited today. Although many of the campsites have been located, most are in need of interpretive signs and markers.
The Jefferson National Expansion Memorial in St. Louis will feature Lewis and Clark, along with other great explorers, in memorializing the Nation's westward expansion. The Memorial will focus attention not only on the historic aspects of our westward expansion but also on the economic, social, and cultural effects brought about by this migration westward.
Along the Lewis and Clark Trail the variety and quality of Missouri's outdoor recreation potential are outstanding. The abundant opportunities for water-based recreation, the favorable climate and long vacation season, the picturesque landscape, the profusion of historic sites, and the strategic location for westbound travelers constitute a prime setting for the development of Missouri's Lewis and Clark Trail program.
There are 56 existing and 30 proposed points of recreation interest along the Lewis and Clark Trail in Missouri. Twenty-three areas provide water-based recreation and all of the areas proposed for development by State and Federal agencies on the Missouri River will provide water-based recreation areas.
Public recreation areas providing access to the Missouri River are generally lacking along the entire course through Missouri. This is partly due to insufficient funds which hamper recreation development in both the private and public sectors.
The recommended routing of a Lewis and Clark Trail Highway in Missouri is indicated on maps 1-4. Future development of recreation areas along the Trail should follow the existing plans of the Missouri Conservation Commission and the Corps of Engineers with modifications as indicated in this report.
It was near St. Louis, on the banks of the Wood (Du Bois) River in Illinois, opposite the mouth of the Missouri, that the captains prepared for their journey. During the winter of 1803-04 the two leaders purchased supplies, disciplined their men and readied their equipment.
The party left their Wood River Camp on Monday, May 14, 1804. Two days later, they arrived at St. Charles and there waited for Captain Lewis, who had been detained in St. Louis. The members of the Expedition were well treated by the French inhabitants of St. Charles and a ball was held in their honor. Some of the members enjoyed themselves too much and on May 17 Clark selected a detail for court martial. Three men were found guilty and the first of several floggings was administered. Lewis arrived on May 19. The Expedition left St. Charles on May 21, continuing up the Missouri River. The next day they came to a camp of Kickapoo Indians who traded four deer for two quarts of whiskey.
On May 25 the Expedition camped near La Charrette, a village of seven houses. Two nights later they camped at the mouth of the Gasconade River in a violent thunderstorm. The next day was spent in drying out their equipment.
The Expedition camped on Moreau Creek near Jefferson City on June 3. The following day Sergeant Ordway steered the keel boat too near the shore. The mast caught in a sycamore tree and snapped off. It "Broke verry Easy" said Ordway. Two Frenchmen came down the river on June 5, their fur-laden pirogues lashed together to make one boat. On June 7, the Expedition halted at a curious limestone rock. The place was alive with rattlesnakes and they killed several before investigating the rock which was embedded with red, white, and blue flint. The Indians had covered the rock with paintings of animals and inscriptions. On June 9 the Expedition reached a cliff of rocks called the Arrow Rock.
The Expedition met a party of Frenchmen coming down the river on two rafts loaded with furs on June 12. Among the party was a Mr. Dorion, who spoke several Indian languages. The captains persuaded him to join the Expedition and go with them to the Sioux nation. The boatmen began having difficulties with sunken snags and shifting sand bars. Heavy morning fogs often delayed the Expedition and violent winds forced them ashore at times. The muddy drinking water gave them boils and dysentery. On June 26 they reached the mouth of the Kansas River, and the future site of Kansas City.
On July 7 the Expedition passed the future site of St. Joseph and the next day camped near the mouth of Nodaway River. By this time there had been several cases of sunstroke. On July 12 one member of the Expedition was sentenced to 100 lashes for sleeping on post. On July 27 they passed the present Missouri-Iowa State line.
On the return trip in the fall of 1806 the journals do not give an accurate accounting of each overnight camp, but it is known that the Expedition did stop at some of the 1804 camp sites. The journey lasted for two years, four months, and nine days, and the Expedition arrived in St. Louis at 12:00 p.m. on Tuesday, September 23, 1806.
Beginning at St. Louis, where both Lewis and Clark remained after their return, there are several points of interest along the Trail for the present-day traveler to visit. At the corner of Walnut and Main, Lewis participated in the Louisiana Purchase ceremony, whereby the United States took over 1,172,000 square miles of territory. After the Expedition, Clark lived and died here in a house where the Chamber of Commerce building now stands, and he is buried in the huge Bellefontaine Cemetery.
The new Jefferson National Expansion Memorial at St. Louis will honor the Lewis and Clark Expedition and other epic explorations and achievements contributing to the westward expansion of the Nation. The Memorial is located here because much of the Nation's westward surge was channeled through St. Louis. Its strategic location, near the Ohio, Missouri, and Mississippi Rivers, made it the hub of mid-continental commerce and truly the "Gateway to the West." In addition to the 630-foot stainless steel gateway arch now under construction, there are plans for a visitor center and a large museum. The Memorial and Fort Clatsop in Oregon are the only Federally-owned areas associated with the Lewis and Clark Trail that are presently administered by the National Park Service.
Just west of St. Louis and across the Missouri River, the present-day traveler can visit the town of St. Charles, the only large town on the Missouri River in existence when the explorers went west. Many of the buildings were constructed about the time of the Louisiana Purchase. A few miles west of Babler State Park, the tavern visited by Lewis and Clark and lost for decades behind a river-built bank was rediscovered not too long ago. The town of Marthasville is very near the former site of La Charrette, which consisted of seven houses when the Expedition passed.
Few sites associated with the Expedition have been identified across the remainder of the State. One important site is the Lewis and Clark State Park, located near Winthrop across the river from Atchison, Kansas. Sugar Lake, located in the Park, was visited by Lewis and Clark and named "Goslin Lake" because of the large number of goslings feeding there.
These sites and others like them will be sought after by numerous Americans retracing the steps of Lewis and Clark. Some, such as St. Charles, are easy to locate; others, such as the Wood River campsite, are not. Improvement of access and interpretation is needed to make them available to the public.
Missouri has other sites and attractions which would supplement those associated with the Lewis and Clark Expedition. Interesting stops along the river route include Daniel Boone's home, original grave site, and monument; William Ashley's grave; and Lexington, the famous Civil War battlefield. The Lewis and Clark Trail in Missouri roughly parallels the historic Santa Fe Trail from Arrow Rock to Independence. Routes taken by the Pony Express, the forty-niners, and the Oregon pioneers begin at the famous Missouri River jumping-off points: St. Joseph, Independence, and Westport (Kansas City).
Some historic sites along the Expedition route that have been approved for Registered National Historic Landmark status are: Arrow Rock State Park near Marshall (Santa Fe Trail); the Patee House at St. Joseph (Pony Express); the Utz site near Marshall (prehistoric Indian remains); and Fort Osage near Independence. The latter site, built in 1808, was selected by Captain Clark originally. These sites have been registered by the Department of the Interior because of their exceptional value and national significance in commemorating and illustrating the history of the Nation. They are not administered, however, by the Department.
An inventory of historic, wildlife, and other recreation areas including some of those already mentioned shows 56 existing and 30 proposed areas totaling 70,878 acres within about 25 miles of the Lewis and Clark Trail in Missouri. Twenty-three of the existing areas and all of the proposed areas will provide water-based recreation. In addition, facilities existing and to be developed include camping, picnicking, hunting, hiking, horseback riding, sightseeing, and nature study. A detailed list of existing and proposed points of recreation and historic interest along the Trail, with pertinent information concerning each, is found in the tables on pages 122 to 128.
Missouri has long been one of the Midwest's outstanding vacation States. Its climate permits a long vacation and camping season, extending from early spring until late fall. Superb fall colors attract thousands of tourists. Its many rivers afford float trips that can last for hours or weeks; in fact, Missouri's portion of the Expedition route can be floated throughout its entirety because there are no dams to obstruct navigation. Good fishing and hunting can be found on the streams, lakes, and refuges in the vicinity of the Trail.
Outdoor recreation in Missouri is big business. Tourist expenditures have jumped from $218,600,000 in 1951 to $560,200,000 in 1964.
Population increase, higher incomes, and more leisure time will intensify the demand for outdoor recreation. The Interstate Highway System is producing a revolution in the use of outdoor recreation facilities. More and more people are traveling greater distances to enjoy the out-of-doors.
The 1964 population of Missouri was about 4,475,000. By 1976, it is estimated that the population will be 5,003,000, with 80 percent residing in urban areas. By the year 2000, the population is expected to reach 7,015,000.
Recreation demands are reflected to some extent by traffic volume. Outside the St. Louis metropolitan area, and west of the Missouri River, Interstate 70's average 24-hour volume is 17,301 vehicles; it continues to carry 5,000-6,000 across the State to Kansas City. This highway crosses the river (affording access to the Trail) twice, at St. Charles and at a point just west of Columbia (Rocheport).
U.S. Highway 50, south of the river, carries a flow of approximately 2,000-5,000 vehicles per 24-hour period. U.S. Highway 24, paralleling the river in western Missouri, averages between 2,000-3,000.
In 1964 the Corps of Engineers in an intensive recreation study of the Missouri River from Rulo, Nebraska, to St. Louis reported that there is only one developed public picnic area along the Missouri River in the State of Missouri. No developed camping areas exist along the river, although camping is permitted on certain adjacent properties owned by the State Conservation Commission.
Despite the lack of facilities, 154,000 visitors use this portion of the river annually. With development of facilities, the Corps estimated that use would double.
Accordingly, the Corps proposed construction of 32 small public use areas along the banks of the Missouri River. Twenty-seven areas would be located in Missouri. Wherever possible, the sites of Lewis and Clark Expedition encampments would be used. However, it has been impossible to locate some of the original campsites and others are not suitable for recreation development. Each of the proposed public use areas would have an improved access road, parking and camping space, and water and sanitary facilities. In most instances, boat ramps and group shelters would also be constructed.
The Corps recreation plans have been endorsed enthusiastically by most municipal, county, and State agencies concerned. Many of these agencies intend to expand the facilities originally suggested by the Corps.
The basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Trail is outlined in the Recommended Program, page 20. The recommended routing of a Lewis and Clark Trail Highway in Missouri is indicated on maps 1-4. Specific recommendations relating only to Missouri follow:
1. Marking of the Lewis and Clark Trail should be closely coordinated with marking of the proposed Ozark Frontier Trail. The Ozark Trail will parallel the Missouri River on the east-west route across the State and thus follow the same roads as the Lewis and Clark Trail for part of its length. Since the two trails are intended to open different historic resources to public enjoyment, their marking should be coordinated to avoid any conflict or confusion.
2. Twenty-seven small public use areas should be constructed along the banks of the Missouri River, in accordance with the plans of the Corps of Engineers. Wherever feasible, they should be located in the vicinity of camp sites used by the Lewis and Clark Expedition in 1804 and appropriate signs or other interpretation should be provided for public appreciation of the sites.
The Lewis and Clark Expedition, travelling from St. Louis up the Missouri River toward its unknown headwaters, arrived at the point where the Kansas spills into the Missouri on June 26, 1804. For the next 11 days they camped on what is now the northeast border of the State of Kansas. Some of the sites associated with the Expedition can still be visited today.
This section of the State is picturesque. Rolling wooded hills are interrupted by flat cultivated plains with steep bluffs here and there along the river bank. The river bottoms are covered with willows and other trees which provide shady and attractive settings for recreation.
The potential here for recreation development is great. Although water-based recreation is particularly popular because of the hot winds and high temperatures of Kansas summers, only one public area provides access to the Missouri River. Seven water-based recreation areas are proposed for development by State and Federal agencies on the river. Present recreation uses, through limited private access to the river, include fishing, boating, water skiing, and hunting. The long vacation season extends from early summer to late fall, and thus recreation areas established along the river would be well justified.
The Lewis and Clark Trail in Kansas passes very near the cities of Kansas City, Leavenworth, and Atchison. All of these urban areas possess outstanding historic sites, associated not only with the Lewis and Clark Expedition route but also with the Indian-military frontier, the Pony Express, the Oregon and Santa Fe Trails, and the Civil War.
Only a short stretch of the Missouri River is followed by a major highway in Kansas. Although secondary roads parallel much of the river, they often lie far from it and by no means constitute a "river drive" route. The road network following the Expedition route does not form a major traffic artery between any two large urban areas. Despite this condition, considerable traffic flow occurs in extreme northeast Kansas, and most recreation points along the Trail can be reached in a matter of a few hours' drive from the greater Kansas City area; from St. Joseph, Missouri; or from the smaller cities of Atchison, Leavenworth, and Topeka, in Kansas. Future water-based recreation areas within a few hours' drive of all of these urban centers will be in great demand.
The stability of the Missouri River for recreation development has been greatly improved by the construction of numerous high-dam reservoirs on its upper reaches. Sediment content of the water also has been reduced, but pollution from other sources remains a real threat to water-based recreation activities.
Insufficient funds have impeded recreation development in Kansas but the establishment of the Land and Water Conservation Fund should help to alleviate this problem in the future.
To assist in meeting Kansas' future recreation requirements and to develop a coordinated program to memorialize what many historians have found to be "the most consequential and romantic peace-time achievement in American history," the State should follow the recommended program as outlined in this report. It is also recommended that the plan of the Corps of Engineers to develop public-use recreation areas along the river be implemented as soon as possible.
The recommended routing of a Lewis and Clark Highway in Kansas is indicated on maps 3-4.
On its trip west, the Lewis and Clark Expedition camped on the Kansas side of the Missouri River for 11 days. The first camp was made "at the upper point of the mouth of the River Kanzas" on June 26, 1804. Its last camp was in Doniphan County on July 9.
Despite the short time the Expedition followed the northeast border of Kansas, the journals record many interesting events.
The men stayed on the "upper point" of Kansas River for three days, June 26, 27, and 28, and left at 4:30 p.m. on June 29. The Missouri River was reported to be 500 yards wide at this point and the water of "the River Kanzas had a very disagreeable taste." This spot is now surrounded by the industrial development of Kansas City.
The first of the two courts martial which took place during the journey was at the Kansas River camp, when privates John Collins and Hugh Hall were tried and convicted. Hall was given 50 lashes for stealing whiskey and Collins 100 lashes for being drunk on post and for permitting the theft.
"On the banks of the Kanzas reside the Indians of the same name, consisting of two villages, one at about twenty, the other forty leagues from its mouth, and amounting to about three hundred men." Buffalo and beaver were observed for the first time by the explorers in this area.
On June 30 the party camped on the Kansas side. On July 1, they camped on an island opposite the present city of Leavenworth and on July 2, they camped on the Missouri side, opposite the site of Fort Cavagnolle, a French post of the 1740's and 1750's built to protect French fur traders. This was in present northeast Leavenworth County. Their camp July 3 was on the Kansas side above an old trading house where they "found a varry fat horse, which appears to have been lost a long time." This site is above present Oak Mills, Atchison County.
On July 4, they "ussered in the day by a discharge of one shot from our Bow piece . . ." About four miles upstream they again landed on the Kansas side "to refresh our selves S. Jos. Fields got bit by a Snake, which was quickly doctored with Bark by Cap. Lewis (The poltice was of bark and gunpowder) . . ."
Ten miles farther upstream they named a Kansas creek "4th of July 1804 Creek . . . Capt. Lewis walked on Shore above this Creek and discovered a high Mound from the top of which he had an extensive View, 3 paths Concentering at the moun . . ." This was at the present city of Atchison, Kansas.
Lewis and Clark gave the impression that they camped on the Kansas side on July 4, although Sgt. Patrick Gass states that they camped on the north, or Missouri, side. The captains indicated they camped above Independence Creek, near an old Kansas Indian village. "We closed the (day) by a Descharge from our bow piece, an extra Gill of whiskey." This was near present Doniphan, Kansas.
On July 5, the Expedition camped on the Kansas side. Deer were not so abundant, but the tracks of elk were numerous. On July 6, they again made their camp on the Kansas side and the bird called whip-poor-will sat on the boat for some time. Their camp of July 7 was on the Missouri side, at the mouth of Nodaway River. Swans were seen in this locale, and a wolf was killed. On July 8, their camp was on the Missouri side at the head of Nodaway Island where five of the party were ill with violent headaches.
On July 9 the men camped on the Kansas side above Wolf River, Doniphan County, and on July 10 the camp was on the Missouri side. The next day they left the Kansas area.
On the return trip the Expedition passed through the Kansas area, probably between September 13 and September 15, 1806.
The Missouri River serves as the boundary between the States of Kansas and Missouri for a distance of 112 miles. As a result, the Expedition spent very little time in what is now Kansas compared to the other States along the route. However, considering the recreation facilities, both existing and proposed, Kansas holds a significant position in the development of a "recreation ribbon" along the Lewis and Clark Trail.
Some sites associated with the Expedition may be visited by the modern-day explorer who is interested in following the route taken by Lewis and Clark. Many of the campsites used by the Expedition have been located along the river. Kaw Point, in the heart of Kansas City, where Lewis and Clark camped for three days, can be visited. A monument has been erected in Atchison to Lewis and Clark, who camped nearby on July 4, 1804.
The Trail, in Kansas, passed through or very near the urban areas of Kansas City, Leavenworth, and Atchison. All of these cities possess outstanding historic sites associated not only with the Lewis and Clark Expedition but also with the Indian-military frontier, the Pony Express, the Oregon and Santa Fe Trails, and the Civil War. The Wyandotte County Historical Museum (second floor of the Memorial Building) in Kansas City contains a rare collection of historical artifacts.
Twenty-three miles upstream from Kansas City is Leavenworth, the oldest city in Kansas. Fort Leavenworth, the oldest continuously operated military post west of the Missouri, was established in 1827 by Colonel Henry Leavenworth for protection against Indians and as a starting point for wagon trains. The Command and General Staff College, famous as the Nation's most important post-graduate military institution, is located here. The post also maintains a museum. Fort Leavenworth has been designated by the Secretary of the Interior as a Registered National Historic Landmark. At Leavenworth are Santa Fe and Oregon Trail markers, and the old wagon path up the steep river bank still can be traced between the trees.
A half-hour's drive upstream from Leavenworth brings one to Atchison. The 120-acre Jackson Park at Atchison, overlooking the Missouri River, is one of the most beautiful in the State. In Atchison's courthouse square is a plaque marking the spot where Lincoln in 1859 delivered his Cooper-Union speech and the hall where the Santa Fe Railroad was organized in 1860 can be visited.
A short distance west of the Missouri River, tourists may visit three Indian reservations: the Kickapoo Indian Reservation, just west of Horton, and the Iowa and the Sac and Fox Indian Reservations, northeast of Hiawatha.
The extreme northeast section of Kansas, bordering the Missouri River, is an area of rolling wooded hills, interrupted by flat cultivated plains with some steep bluffs along the river bank. Portions of the undisturbed oak-hickory woodlands and the Missouri River bluffs offer top recreation potential.
The summers in Kansas can become quite hot and the winters very cold, but the Sunflower State's vacation season is long, extending from early spring until late fall. Hot winds and high temperatures of the summer make water-based recreation very popular. The contiguous areas along the river are covered with willows and other trees which would provide shade and an attractive setting for recreation areas.
The State is well endowed with water for recreation. The numerous State and Federal flood control projects and local impoundments create nearly 90,000 surface acres of water, held by 36 State lakes, four State waterfowl refuges, eight Federal reservoirs, 42 city and county lakes, and numerous farm ponds. Lake areas have quadrupled during the past 20 years. Some 23 additional Federal reservoirs are scheduled for completion by 1975.
There is only one existing public recreation area affording access to the Missouri River in Kansas. This is a public boat ramp at the city of Atchison. Some private marinas and improvised boat docks along the bank provide limited access to the river.
There are four roadside parks beside the highways along the river from Kansas City to the Kansas-Nebraska line. Overnight camping, however, is permitted only at those roadside parks with sanitary facilities.
Within about 25 miles of the Lewis and Clark Trail in Kansas there are 29 existing and 8 proposed points of recreation interest. Seven areas provide water-based recreation; seven additional water-based recreation areas are proposed for development by State and Federal agencies on the Missouri River. The total area of land and water included within the 37 existing and proposed recreation sites is over 10,500 acres. Of this amount, approximately 7,500 acres of land and water are included within two of the eight proposed sites. Facilities existing and to be developed provide opportunities for most forms of water-oriented recreation, and also for camping, picnicking, hunting, hiking, horseback riding, sightseeing, and nature study.
A detailed list of existing and proposed points of recreation and historic interest along the Trail, and pertinent information concerning each, are found in the tables on pages 126 to 128.
The entire Nation has witnessed an increasing demand for all types of outdoor recreation opportunities. Kansas is no exception. As an example, visits to existing Federal reservoirs have steadily increased from 700,000 in 1950 to 8,300,000 in 1963. Specific data necessary to make accurate projections of demands for facilities along the Lewis and Clark Trail are not available. To a great extent, the demand for areas along such a trail would depend on the interest aroused in the Trail and on the quality of the effort made to identify, mark, and develop areas along the route. In order to provide some indication of the demand for recreation facilities in Kansas, we must rely on population projections and travel trends for the State as a whole.
The 1964 population of Kansas was about 2,200,000 as reported by the Kansas Industrial Development Commission. It has been estimated that by 1976 the population will increase to 2,502,000, with 80 percent living in the urban areas. The Outdoor Recreation Resources Review Commission forecast a population of 3,727,000 by the year 2000.
In addition to resident contributions, the out-of-State vacationing motorist spent $81 million in Kansas during 1952. In 1962 this figure had more than tripled to $252 million, and in 1964, although the final figures are not yet available, it is expected to be well over $300 million.
Kansans are rapidly recognizing the potential of the tourist industry as a new source of economic wealth. The first State travel conference in the State's history was recently called by Governor John Anderson, Jr. Some 400 key civic and business leaders pledged their full cooperation to develop more travel and recreation attractions. The State's tourist promotion slogan is "Midway USA." Kansas truly lies at the crossroads of the Nation, with major east-west transcontinental highways benefiting its travel and tourism industry. Recreation demand considerations must involve an analysis of this and other road systems as they relate to the Trail.
In Kansas, only about one-fifth of the Missouri River is followed by a primary highway (U.S. 73). Secondary roads (State 5 and 7) parallel the river, but they often lie far from it and by no means constitute a "river drive" route. As a result, this lateral road network, following the Trail in Kansas, does not form a major traffic artery between any two heavily populated areas.
The principal cities lying immediately north of Kansas City on the Missouri River are St. Joseph, Missouri; Omaha, Nebraska; and Council Bluffs, Iowa. Northbound traffic to these large cities could leave Kansas City, Kansas, via State Highway 5 and continue north to Atchison on U.S. Highway 73. Most traffic then would either turn west, leaving the Trail, to follow U.S. Highway 73 north, or turn east to cross the river and follow U.S. 59 to St. Joseph.
U.S. Highway 36, traversing St. Joseph, Missouri, is the primary east-west through highway in northern Kansas and Missouri. This highway lies perpendicular to the Missouri River and crosses it at only one point: Elwood, Kansas.
Recreation pressures are shown to some extent by traffic flow statistics. Although pleasure and commercial traffic volumes are combined, making it impossible to isolate the volume of recreation traffic alone, total traffic loads cited for major highways in the section of Kansas adjacent to the Trail (and the Missouri River) and north of Leavenworth are revealing.
From Leavenworth to Atchison the present traffic flow is 1,000 to over 2,000 vehicles per day. U.S. Highway 73, continuing west and north from Atchison, supports a daily flow of from just under 1,000 to nearly 1,500 vehicles per 24-hour period. Little use occurs (traffic flow 500 to 600 vehicles) on State Highway 7, between Atchison and Troy. U.S. Highway 36 from St. Joseph, Missouri, carries a traffic flow across the river and westward in Kansas averaging nearly 3,000, thus making it the heaviest traveled highway near the Lewis and Clark Expedition route north of Kansas City, Kansas.
Although the existing road system along the river is not conducive to high-volume travel, considerable traffic flow exists in extreme northeast Kansas. In addition, any point along the Trail could be reached (if access were available) in a matter of a few hours' drive from the greater Kansas City area, St. Joseph, Missouri, and the lesser urban areas of Atchison, Leavenworth, and Topeka, Kansas.
Future water-based outdoor recreation areas within a short driving distance from all of these urban centers will be in great demand. The recreation development of the Missouri River logically could assist in meeting this need. Specifically, traffic flow data indicate the need for recreation developments along U.S. Highway 73 and State Highway 5, between Atchison and Kansas City, Kansas. Additional scenic enhancement of the roads is needed north of Kansas City paralleling the Missouri River.
The advanced development of highway systems and travel modes will continue to make traffic volume soar. The completion of the Federal Interstate Highway System in the Midwest will play a vital role in determining the effective supply of, and demand for, outdoor recreation facilities in Kansas.
The expected upward trend of Kansas population, with its corollary of greater spending, mobility, and leisure time, will cause increased use of the State's natural resources. Much of this demand from outdoor-minded citizens, particularly in northeast Kansas, will be focused on the Missouri River.
Prior to the Trail proposal, the Kansas State Park and Resources Authority conducted studies to establish two parks for day and overnight use on the Missouri River in Doniphan County. These parks, still in their original primitive state and under private ownership, would receive heavy use from travelers on U.S. Highway 36 and would fill a definite recreation void in this section of the State. One of the proposed parks is near a Lewis and Clark camp at the junction of Wolf Creek and the Missouri River. The Kansas Park Authority has invited the cooperation of county and city governments in the vicinity of the Trail to participate in future development and management of these proposed parks. No other recreation sites along the shoreline of the river are proposed at this time.
With the present dearth of facilities, a definite need exists to provide picnic and overnight camping areas in conjunction with boat-launching ramps and access to the river shoreline. A program to meet this need has been proposed by the Corps of Engineers, Kansas City District, in its "Preliminary Master Plan for Recreation Development of the Missouri River, Rulo, Nebraska, to the Mouth." This plan calls for the creation of 32 small public-use recreation areas along the banks of the Missouri River near Lewis and Clark's 1804 campsites. In addition to the two proposed areas by the State Park and Resource Authority, five such areas are planned by the Corps along the river shoreline from Leavenworth to the Kansas-Nebraska line. Each area would be provided with an improved access road and parking and camping areas, including water and sanitary facilities. Boat ramps and group shelters would also be constructed in most instances.
There are many recreation attractions in this area. Much has been done by local, State, and Federal agencies to provide adequate recreation facilities along the Missouri River, but looking beyond the existing and presently proposed recreation areas, a need for even more facilities clearly can be seen.
The planning, development, and operation of an expanded State program to meet outdoor recreation demands in Kansas have been hampered by insufficient funds, in spite of the State's having spent $20 million since 1951 for outdoor recreation facilities and improvements, an average of $1.7 million a year. This represents less than one cent of every dollar spent for all purposes, and is not enough to satisfy the need. A 10-year (1965-75) $6 million recreation development program by the State Park and Resource Authority is planned at 29 sites throughout the State.
A basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Trail has been outlined on page 20 in the Recommended Program. The recommended routing of a Lewis and Clark Trail Highway in Kansas is indicated on maps 3-4. Specific recommendations relating only to Kansas follow:
1. The main through highways which carry the bulk of traffic into and out of the State, should display adequate exit directions for those desiring to follow the Lewis and Clark Trail Highway.
2. The plan proposed by the Corps of Engineers to develop five public-use recreation areas along the river shoreline from Leavenworth to the Kansas-Nebraska line should be pursued and implemented by the public agencies concerned as soon as practical.
3. Further investigations should be made to provide public-use recreation facilities in close proximity to the Kansas City area.
4. Consideration should be given to future expansion of scenic roads along the river from Atchison north to the Nebraska border.
The Corps of Discovery, as the Lewis and Clark Expedition frequently is called, entered what is now the Iowa and Nebraska portion of the Missouri River Valley on July 18, 1804, outward bound, and spent nearly a month in the area. On the return trip in 1806 they spent only six days there. During July and August of 1804, Lewis and Clark named many of Iowa's streams, counselled with the Indians, and found a wide variety of wildlife inhabiting the bottom lands and adjoining plains. Council Bluffs takes its name from their historic conference with the Oto and Missouri Indians on August 3, 1804, although the meeting with the Indians took place a considerable distance upstream and on the Nebraska side. The only fatality of the Expedition was that of Sergeant Charles Floyd, who was buried on what is now Iowa soil.
The sandbar-studded Missouri courses past the western border of Iowa for some 192 miles. The terrain is mostly flat, with a rare outcropping of bluffs. There are numerous wooded areas, islands, and marsh lands along the Iowa shore, most of which are in private ownership.
For centuries the river has moved about freely, hunting new channels, abandoning old, sometimes adding to the shoreline and sometimes subtracting from it. In recent years channelization and construction of reservoirs by the Corps of Engineers have resulted in better control of water levels and have reduced the threat of floods. Most of the channelization work is complete.
The Missouri River provides recreation for the hunter, the fisherman, and the casual boatman. Since the river constitutes an important branch of the central flyway, large flocks of ducks and geese follow the Iowa border each spring and fall. The adjacent bottomlands offer good small-game and upland game hunting and fine habitat for white-tailed deer. Sport fishing is minor on the river but is enjoyed on the many natural oxbow lakes formed by the river. The enthusiasm of Iowans for lake boating has placed Iowa among the top boating States, and the pastime is now becoming popular on the Missouri River as well.
There are 27 existing and 30 proposed points of recreation interest within about 25 miles of the Lewis and Clark Trail in Iowa. Fifteen areas provide water-based recreation and 25 additional water-based recreation areas are proposed for development by State and Federal agencies on the Missouri River.
Two recreation development plans for the Missouri River have been prepared. The Iowa State Conservation Commission's plan proposes development of 25 recreation areas including four on the Nebraska side of the river. The other plan, developed by the Corps of Engineers, calls for the creation of numerous small public-use recreation areas located along both sides of the Missouri River.
The recommended routing of a Lewis and Clark Trail Highway in Iowa is indicated on maps 4-6.
Future development of recreation areas along the Missouri River should follow the plans of the Iowa State Conservation Commission and those of the Corps of Engineers. The State boundary problem between Iowa and Nebraska and the problem of land ownership laws should be resolved.
The Expedition reached the southwest corner of what is now Iowa on July 18, 1804. As they travelled through the area, they camped on sand bars and alternate banks of the river.
During their 33-day journey upstream along this stretch of the river the Expedition had a number of interesting experiences. On July 20 a large yellow wolf was killed and a large water snake tried to feast on a deer the party had killed and placed on the river bank. Tracks of bear were observed and dens of rattlesnakes seen. "Musquitors" were everywhere. The meat diet of the men was a virtual wild-game chef's dream: turkey, geese, catfish, deer, elk, and beaver.
On July 21, 69 days after leaving Camp Wood, the party reached the mouth of the Platte River, which was considered the dividing point between the Lower and Upper Missouri. Clark recorded a distance of 600 miles from their starting point; the distance between the same two points now is given as 611 miles. The captains decided to hold a council with the Indians and sent couriers to bring them in while the Expedition continued up the river about 50 miles to select a place for the meeting. This they called Council Bluff. The location was approximately 13 miles north of present-day Council Bluffs, Iowa, on the Nebraska side of the river. On August 3, 1804, they held council with the Oto and Missouri Indians. Both Lewis and Clark made speeches telling the Indians about the change in government from Spain to the United States, promising them protection, and giving them advice on how they should conduct themselves.
The next day, two members of the party, Reed and La Liberte, deserted. Drouillard and three men were sent out to find the deserters, which they did, but La Liberte escaped on the way back. After leaving council with the Indians and before Drouillard and the deserter rejoined the Expedition, Lewis and Clark passed an island, two miles above the Little Sioux River, which they named Pelican Island because of the number of pelicans feeding on it. On August 18, Reed, the captured deserter, was sentenced to "run the gauntlet four times through the party, each man to have nine switches, and for him not to be considered one of the party in the future." On the same day, Captain Lewis celebrated his birthday and the men were given "an extra gill of whiskey" and allowed to dance until 11 o'clock.
On August 20 the only fatality of the entire Expedition occurred when Sergeant Charles Floyd died of what is believed to have been a ruptured appendix. He was the first United States soldier to die west of the Mississippi River. Lewis and Clark buried Floyd with military honors on the top of a bluff. A half mile below the bluff is a small river which the Expedition named for Sergeant Floyd. The site is now a park in the southern part of present Sioux City, Iowa. His grave, marked by an obelisk monument, has been designated a Registered National Historic Landmark.
The Expedition's progress up the Missouri is shown by the men's records of the dates on which they passed the mouths of streams draining present-day Iowa. Boyer was passed on July 29, 1804; Soldier, August 6; Little Sioux, August 12; Floyd, August 20; Big Sioux, August 21. On that day the Expedition left the Iowa area.
On the return trip in 1806, they passed through this section between September 4 and 10. They paid a visit to the grave of Sergeant Floyd and camped at or opposite several of their old camp sites.
A number of sites in Iowa directly related to the Lewis and Clark Expedition can be visited today. One of these is the 286-acre Lewis and Clark State Park, located 59 miles north of Council Bluffs and two miles west of Onawa, near the point where the Expedition camped on August 8, 1804, on their westward trek. The park offers swimming, boating, fishing, camping, picnicking, and hiking trails.
A monument marking the grave of Sergeant Floyd, the only member of the Expedition to lose his life, is just south of Sioux City. A half century after his burial, his grave was disturbed by the eastward movement of the river current, necessitating the reburial of his bones somewhat higher on the same bluff. The obelisk of white sandstone rises 100 feet above the bluff and can be seen at a distance as Sioux City is approached from the south on U.S. Highway 75.
Just four miles north of the business district in Council Bluffs is a monument erected in 1935 to commemorate Lewis and Clark's council with the Oto and Missouri Indians in 1804. Sculpturing on the south panel shows the Indians bringing melons and fruits to exchange with Lewis and Clark for medals and flags, while the north panel depicts the meeting of Lewis and Clark with the Indian chiefs in full ceremonial regalia. Also in Council Bluffs is the Mormon Trail Memorial commemorating the passing of the Mormons through the city on their trek westward.
The Lewis and Clark Trail in Iowa is well endowed with other historic sites and recreation areas. The first important recreation area along the Expedition route found upstream from Missouri is the 1,100-acre Waubonsie State Park with its many trails for hiking and with excellent camping grounds where Indians once headquartered. The park, located about nine miles northwest of the town of Hamburg, has much of historic value to recommend it. A few miles east of the park is the 940-acre Riverton Game Management Area. Lake Manawa State Park, created by a change in the course of the Missouri River, lies a mile south of Council Bluffs and includes 919 acres on Lake Manawa. Facilities for camping, swimming, boating, fishing, hunting, and picnicking are offered there. There are also several wildlife areas along the river, and other recreation areas are proposed for development by the State and the Corps of Engineers.
Within about 25 miles of the Lewis and Clark Trail in Iowa there are 27 existing and 30 proposed points of recreation interest. Fifteen areas provide water-based recreation and 25 additional water-based recreation areas are proposed for development by State and Federal agencies on the Missouri River. Combined, the recreation sites total about 27,500 acres. Facilities existing and to be developed provide opportunities for most forms of water-oriented recreation and also for camping, picnicking, hunting, hiking, horseback riding, sightseeing, and nature study.
A detailed list of existing and proposed points of recreation and historic interest along the Trail, and pertinent information concerning each, are found in the tables on pages 128 to 132.
In past years the uncontrolled Missouri River, stretching some 192 miles along the western border of Iowa, was fast-running and moved about freely, cutting new channels, abandoning old, and always adding to or subtracting from the shoreline. Channelization and construction of upstream reservoirs by the Corps of Engineers have improved the control of water levels and reduced the threat of floods. Channelization work is complete from De Soto Bend on the Harrison-Pottawattomie County line down to the Iowa-Missouri State line. Some work remains to be done up-river to Sioux City.
The Missouri River Valley, a major branch of the central flyway, is one of the most important routes for waterfowl in the United States. Many of the natural oxbow lakes are used as resting areas for migrating waterfowl. Although much of the original habitat for ducks and geese has been eliminated through farming practices, the ravages of floods, and the filling of shallow ponds by bulldozers, hundreds of thousands of ducks and geese follow the Missouri River each spring and fall and provide a major recreation resource. Probably the world's largest concentration of snow and blue geese congregate in the bottom lands below Council Bluffs each spring. Nationwide publicity has attracted thousands of people to view the great natural spectacle which these birds provide.
In this portion of the river valley thousands of acres of marsh, water, islands, and shoreline hold great potential for outdoor recreation development. Many of these areas are in private ownership. The sand dunes and many of the islands would lend themselves well to public-use recreation development if the State boundary problem were resolved.
Sport fishing on the Missouri River is confined largely to natural oxbow lakes, although there is increased interest in river angling. The principal species caught are bullheads, catfish, and carp in the river and bluegills, crappies, large-mouth bass, and sauger in the oxbow lakes.
The bottom lands along the Missouri provide good habitat for white-tailed deer and numerous small game species. On the State-owned islands, which are primarily covered with stands of softwood trees, an opportunity exists for multiple-use management of timber, recreation, and wildlife.
Iowa's outdoor recreation facilities are receiving increasing use from residents and tourists alike. Visits to Iowa State Parks in the vicinity of the Lewis and Clark Trail between Sioux City and the Iowa-Missouri State line have increased from 1,070,694 in 1958 to 1,182,657 in 1963. The parks surveyed include Waubonsie, Lake Manawa, Preparation Canyon, Lewis and Clark, Brown's Lake, and Stone.
The number of registered boats places Iowa near the top among the States in boating activity. Boating has been done mostly on lakes but it is being enjoyed increasingly on the Missouri River. Boating itself is the most popular way in which the river is used for pleasure; picnicking is second and water skiing is third. Sport fishing, too, is important.
People today are willing to travel a considerable distance to find recreation. For instance, a 1961 study revealed that 24 percent of Iowa hunters and fishermen drove as much as 100 miles in pursuit of their sport. Another 18 percent drove between 100 and 250 miles.
The 1964 population of Iowa was approximately 2,750,000. By 1976, it is estimated that the population will be 3,266,000 with 80 percent living in the urban areas. By the year 2000, a population of 4,514,000 is forecast by the Outdoor Recreation Resources Review Commission.
All of the existing recreation facilities along the Missouri River are receiving heavy use. With the anticipated growth in population, future recreation demands will be enormous.
Approximately 13 percent of Iowa's population lives within 50 miles of the Missouri River and 23 percent within 100 miles. On the Nebraska side, approximately 35 percent of the State population lives within the counties along the Missouri River. Therefore, development on the Iowa side may receive heavier use from Nebraska residents than from Iowa residents.
Two of Iowa's largest cities, Council Bluffs and Sioux City, are located on the Missouri River. At the time of the 1960 census, 674,656 people lived within a 50-mile radius of Council Bluffs and the total population within 50 miles of Sioux City was 292,127. In the near future, these cities will be connected by Interstate 29, which will generally parallel the river.
In June 1963 a completed section of Interstate 29 between Council Bluffs and U.S. 30 in Pottawattamie and Harrison counties carried a daily traffic flow of both pleasure and commercial traffic which averaged 4,650 vehicles. By 1975 this section is expected to carry an average of 15,352 vehicles per day, more than three times the 1963 figure. Use of the section of Interstate 29 between Onawa and Sioux City also is expected to triple, from 2,946 vehicles per day in 1963 to 9,504 by 1975. Eventually this highway will carry the greater portion of north- and south-bound traffic between the heavily populated areas of Kansas City, Kansas, and Kansas City, Missouri, to the south and Sioux Falls, South Dakota, to the north. This highway may become the major north-south traffic route for the upper Midwest.
With the completion of Interstate 80, a direct connection between the large urban centers of the Great Lakes region and the Lewis and Clark Trail will be established. Most west-bound motorists from the Chicago area will pass through Des Moines and cross the Trail in the vicinity of Council Bluffs. Highway travelers will be able to turn north or south at this point to follow the Trail.
Use of recreation areas along this route will, accordingly, grow rapidly if proper markers and directional information are provided the motorist. Properly developed water-based recreation areas within a few minutes' drive of Interstate 29 should receive especially heavy visitation.
The improving highway facilities and the increased use of modern camping vehicles and equipment will continue to boost the interstate travel volume. The completion of the Interstate Highway System in the Midwest will play a vital role in determining the effective supply of and demand for recreation facilities in Iowa. The demand for more campgrounds and facilities of all kinds can be expected. Development of the Missouri River Valley's recreation potential logically could meet this demand.
A plan for extending the recreation use of the Missouri River has been developed by the Iowa State Conservation Commission. A report on this plan, entitled "Part 1 of the Missouri River Planning Report," was published in January 1961. This comprehensive report presents the existing (1961) situation on channel development, problems of land and water ownership and State boundary disputes, and proposed development of 25 recreation areas. Four of the areas would be on the Nebraska side of the new channel and 21 on the Iowa side. These areas are shown on the accompanying maps as proposed recreation sites. Four additional areas were thought possible at the completion of channelization by the Corps of Engineers. One fish and wildlife area, the 7,800-acre De Soto National Wildlife Refuge, is already being developed by the Bureau of Sport Fisheries and Wildlife.
A second report is under preparation by the Omaha District Office of the Corps of Engineers. Entitled "A Preliminary Recreation Master Plan for the Missouri River, Rulo, Nebraska, to Sioux City, Iowa," it is due for completion in the near future. It will be a companion to a similar report for the Missouri River from Rulo, Nebraska, to St. Louis, Missouri. The latter report was published March 1964 and has been approved by the Corps of Engineers.
The Corps plan will call for the creation of numerous small public-use recreation areas along alternate banks of the Missouri River. The first report had proposed developing public-use recreation areas close to Lewis and Clark campsites of 1804. However, locating and marking the actual campsites has proved impossible in many cases. Consequently, the plan will suggest that markers be placed in the public-use areas to explain the Expedition and to indicate the probable location of the nearest campsite. Each area would have an improved access road, parking, campsites, water, and sanitary facilities. Boat ramps and group shelters also would be provided in most instances.
1. The State Boundary
A boundary dispute between Iowa and Nebraska is the principal problem hindering the development of the Missouri River's recreation potential in both States. Failure to agree on a new State boundary, based upon the new stabilized channel of the river, has left a legal and physical tangle that has severely retarded recreation expansion.
Since Iowa became a State, the boundary between Iowa and Nebraska has been the center of the channel of the Missouri River. Because of the natural changes in the channel since 1877 and further alterations caused by channel stabilization by the Corps of Engineers, the river in many places has left the old historic river bed. Redefinition of the States' boundary therefore has been necessary. In 1943 Iowa and Nebraska compromised on a new boundary and defined it as the center of the channel. The agreement subsequently was incorporated in the Code of Iowa in 1958.
Additional channel work has been undertaken between the two States in recent years. As a result, for some 40 miles the river now passes wholly within the State of Nebraska because the State boundary still follows the maps adopted in the 1943 compromise. Consequently, several thousand acres of land and water within the State of Nebraska now lie east of the new channel. Some Iowa lands and waters now lie west of the channel.
Many oxbows, cut off by the channel work, are east of the new channel and are made up of both Iowa and Nebraska lands. If Iowa were to develop these oxbows for recreation, the State would be expending funds for development that would be of benefit mainly to citizens of Nebraska. Without some form of cooperative program with Nebraska, Iowa may well be unwilling to spend State funds for access to island areas within Nebraska. Nebraska, however, has no law allowing a reciprocal agreement with Iowa on boundary waters. Moreover, the State of Nebraska has not authorized use of the power of eminent domain to acquire any of these areas through condemnation. As a result, even Iowa's cooperation in a Nebraska development project could be defeated by the refusal of one Nebraska landowner to sell.
The boundary issue also creates problems of wildlife law enforcement. A Nebraska resident wishing to hunt in an oxbow cut off from the river would have to enter the area over Iowa ground. Reciprocal fishing regulations have been established, however, that are satisfactory to both Iowa and Nebraska.
Members of the Iowa legislature have been appointed to a committee to meet with a similar committee in Nebraska to work out a mutually acceptable solution to the boundary dispute. A number of such committees have existed over the years and have made recommendations which have not been approved by either legislature. The committees have recommended to their respective legislatures 1) that the boundary between the States be established at the median line, or middle of the Missouri River, as it is now stabilized by the Corps of Engineers; and 2) that the boundary would remain the middle of the stabilized channel, as determined by any future changes through channelization work. The Governors of both Nebraska and Iowa have concurred in the committees' recommendations and have asked their legislatures for affirmative action.
If such action is taken by the State legislatures, Carter Lake legally would become a part of Nebraska. At the present time the entire economy of Carter Lake is dependent upon Omaha. From an economic standpoint, Iowa will receive two and one-half acres under this agreement for every acre that the State of Nebraska acquires. Iowa also will receive approximately two and a half dollars to three dollars greater value for every one dollar that Nebraska will get in reciprocity from Iowa, because of the land values involved in the interchange.
2. Land Ownership Laws
A difference in land-ownership laws in Iowa and Nebraska poses complex problems concerning public acquisition and recreation improvements on certain areas along the Missouri. Project development is further hampered by the cloudy title to lands on the Iowa side of the river believed to be owned by the State. Lack of knowledge on exact ownership boundaries prevents both Iowa and Nebraska from acquiring lands needed for access to water or for shoreline development. There is a difference in State law in Iowa and Nebraska affecting public ownership, and in Iowa there is the matter of "quieting title" to lands believed to be State owned.
In Nebraska the law provides that riparian owners have title to the bed of the river to the center of the channel or to the described boundary line, whichever the case may be. Thus all lands in a proposed project area lying west of the Iowa boundary but east of the new channel are in Nebraska and privately owned. Such lands, of course, must be purchased for project improvements. This brings forth the unique question, "Can the State of Iowa own lands in another State?"
In Iowa the law states that all lands below the mean high-water mark and the center of the channel, or a described boundary line, are property of the State of Iowa. Thus it is conceivable that Iowa could sell lands to Nebraska owners that lie west of the new channel. Islands in meandered streams are held to be the property of the State. The jurisdiction to establish and mark boundary lines between State property and privately-owned property in meandered streams is vested with the Iowa State Conservation Commission. Private individuals have contested the right of the State to own bottom lands under this law and have brought the State to court tests. In at least one decision the courts have declared islands to be State owned. Quieting title to such lands involves the slow but necessary actions of the courts.
A basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Trail was outlined in the Recommended Program, page 20. Recommended routing of a Lewis and Clark Trail Highway in Iowa is indicated on maps 4-6. Specific recommendations relating only to Iowa follow:
1. Public-use areas should be developed as recommended in the Iowa State Conservation Commission plan published in 1961 as "Part 1 of the Missouri River Planning Report."
2. Completion of the recreation development plans of the Corps of Engineers for the Missouri River from Rulo, Nebraska, to Sioux City, Iowa, should be expedited. Lewis and Clark Expedition campsites should be chosen for protection, interpretation, and development for public use and enjoyment, whenever possible. State and local agencies should participate in the development program and give prompt attention to the proposals.
3. Because future channel work will cut off many oxbows possessing high potential as public recreation areas, efforts should be made to protect these oxbows from sand-carrying river flows by strategic placement of impervious levees at their upper or lower ends, or both.
4. Until legal complications are resolved, the numerous islands in the Missouri River flood plain which possess possibilities for recreation development should be preserved and protected in their primeval state. Their permanent management in the public interest should be planned as soon as possible.
5. The long-time boundary dispute between Iowa and Nebraska should be resolved through the joint acceptance of the interstate committee's recommendations by the Iowa and Nebraska legislatures.
6. The Iowa and Nebraska interstate boundary committee should study the problems arising from the differences in state land ownership laws and present recommendations for legislative action to make the respective legal changes necessary to resolve these problems.
The Lewis and Clark Expedition on its outward journey spent nearly two months on the section of the Missouri River forming the northeastern border of present-day Nebraska. Using the river as its highway, the Expedition passed along the shore of what is now Nebraska, on the one side, and on the other saw lands which were to become Missouri, Iowa, and South Dakota. Campsites were selected on either shore as conditions warranted. Ascending the river, the Expedition continued with its assigned task; exploring along the shore, counseling with the local Indians and observing their ways, and making copious notes on the geography of the area and its flora and fauna. Many of Lewis and Clark's campsites and council locations still can be found and visited.
Nebraska was the scene of many of the activities connected with the opening up of the West. Major historic themes could be developed along the Lewis and Clark Trail, including the early exploration and fur trade, the overland migrations, the Indian wars, and the homestead movement. Many of the forts, fur posts, and other landmarks of the early days are still intact. Both the Oregon and Mormon Trails cross the Lewis and Clark Trail in Nebraska. Four Indian reservations, the Winnebago, Omaha, Ponca, and Santee, lie along the Trail.
An acute need for recreation facilities is evident in Nebraska, especially in the southeast, where 68 percent of the State's population lives. Because of the hot, humid summers, water-based recreation is particularly desirable, and the Missouri River, with its scenic bluffs, numerous islands, and wooded shorelines, could provide for such activities.
Many of the oxbow lakes formed by the river are in recreation use; excellent warm water fishing and waterfowl hunting can be had here. At Lewis and Clark Lake outdoor enthusiasts can enjoy fishing, boating, swimming, water skiing, picnicking, and overnight camping. Year-round fishing along the Missouri, good game bird hunting in the uplands, plus rodeos and Indian pageants, are also to be found.
Nebraska's two largest cities, Omaha with a 1960 population of 301,598 and Lincoln with 128,521, are within an hour's drive of the Trail. These urban centers have increased 26 percent in population during a 10-year period, and there is no reason to expect this trend to change. With the completion of the Federal Interstate Highway System in the Midwest, a substantial increase in tourist traffic is inevitable. The magnitude of future recreation demands will be considerable along the nearly 400 miles of the Trail in Nebraska.
There are 66 existing and proposed points of recreation interest within about 25 miles of the Lewis and Clark Trail in Nebraska. Nineteen areas provide water-based recreation and 24 additional water-based recreation areas are proposed for development by State and Federal agencies on the Missouri River.
The State boundary problem between Iowa and Nebraska has been a deterrent to the development of the Missouri River's recreation potential in this area. A difference in land ownership laws between these States also poses complex problems concerning public acquisition and recreation improvements in certain areas. When these issues are resolved, there will be numerous islands, sand dunes, and shoreline areas that can be developed to complement existing recreation sites.
Recommended routing of a Lewis and Clark Trail Highway in Nebraska is indicated on maps 4-7.
On July 11, 1804, the Lewis and Clark Expedition entered the portion of the Missouri River lying between the present States of Nebraska and Missouri. That night they camped on a large island, immediately opposite the Big Nemaha River, and remained camped there all the next day. Captain Clark ascended the river about two miles in a pirogue and reported finding several artificial mounds or graves. About one-fourth mile below the mouth of the river, a cliff of free stone was observed, with various inscriptions and marks made by Indians. The Expedition ran into a sudden and severe squall on July 14, and later made camp about the Nishnabotna River on the Missouri side. On July 15, the group camped on the Nebraska side, progressing approximately nine and three-fourths miles upstream. They passed a large island (probably Sonora Bend) on July 16 and a few miles further upstream they reported a cliff of sandstone that extended for two miles along the river and was frequented by birds. Some 20 miles past Bald Island the captains came to a large prairie and named it Baldpated Prairie. Camp was made on the Missouri side and the party remained there the next day. On July 18 they camped on the south, opposite the lower point of the Oven Islands, a little below present-day Nebraska City. This day's journey carried the Expedition past the boundary between Missouri and Iowa and into the section of the river separating Nebraska from Iowa.
On July 19 the expedition passed high cliffs of yellow earth on the south, near two "beautiful runs of water." The sand bars were becoming more numerous and troublesome. Camp was made on the western extremity of an island in the middle of the river, near the present boundary between Cass and Otoe Counties. Having traveled some 18 miles, on July 20 the group again camped on the Nebraska shore. A party walking along the shore found the plains rich but very parched from frequent fires and with practically no timber. On July 21, after covering some 14 miles in the rain, the party reached the Platte River which they estimated to be 600 yards wide at its mouth. Both captains ascended the Platte for about a mile, and reported the current very rapid and the river divided into a number of channels, none of which was deeper than five or six feet.
On July 22 the Expedition set sail from the mouth of the Platte, passed Papillion and Mosquito Creeks, and camped on the Iowa side near the present-day town of Bellevue, Nebraska. The party remained at this camp until July 27. During these five days they sent two of their party to the Oto or Pawnee villages with a present of tobacco and an invitation for the chiefs to visit their camp. The messengers returned unsuccessful after two days, reporting that the Indian villages were deserted.
From July 27 to July 30 the party moved upstream, camping first on one side of the river and then on the other. The captains decided to hold a council with the Oto and Missouri Indians and on July 29 sent couriers to bring them in. On July 30 they camped near Fort Calhoun, Nebraska, in a grove at the edge of a ridge which stood some 70 feet above a plain covered with grass five to eight feet high.
At sunset, August 2, about 14 Oto and Missouri Indians and a Frenchman named Fairfong arrived. The council was held the following morning and the captains announced the change in government from Spain to the United States, promised protection, and gave advice on how the Indians should conduct themselves in the future. Numerous presents, including medals, flags, and paint, were given to the Indians.
It was at this site that the name Council Bluffs was first used. Both captains thought the location was an exceedingly favorable spot for a fort and trading post, as the soil was well "calculated" for bricks and there was an abundance of wood in the neighborhood. The location was also central to the Oto, Pawnee, and Omaha, and within range of some of the Sioux Indians. Here, then, is the origin of the name Council Bluffs, although the city of that name lies well downstream from the exact spot where these incidents took place, and on the other side of the river.
From August 4 to 8 the Expedition continued northward, reaching an island where a number of pelicans were feeding. They named it Pelican Island and out of curiosity shot one of the birds and poured five gallons of water into its bag.
The burial place of Blackbird, one of the great chiefs of the Omahas, who had died several years before from smallpox, was visited on August 11. On August 13, camp was made at Omadi, Dakota County, Nebraska. At this camp the captains sent a party of men up the Omadi River to an Omaha Indian camp. The village at one time consisted of 300 cabins, but was burned several years before, after having been ravaged by smallpox. Still waiting for the Indians on August 16, the men made a seine of willows and bark, and their first drag in the river brought up 318 fish.
On the afternoon of August 18, a party of Oto Indians arrived at the camp, along with the French interpreter and one of the deserters, Reed. A trial was held for the deserter and he was sentenced to run the gauntlet four times.
On August 19, a council was held with the chiefs and warriors; presents were distributed, and the same speech and advice given at Council Bluffs were repeated. The next day the party set sail and landed about 13 miles north at the present site of Sioux City, Iowa. It was near here that Sergeant Charles Floyd died of what is believed to have been a ruptured appendix. Sergeant Floyd was the only fatality of the Expedition.
The Expedition passed the mouth of the Great Sioux River on August 21, and on August 22, Captain Lewis became ill after inhaling cobalt fumes from a cliff he was examining. On August 23, Captain Clark and one of the men killed their first buffalo near the camp which was in present-day Dixon County, Nebraska.
From August 28 to 31, they made camp at Calumet Bluff on the Nebraska shore near what is now the south edge of Gavins Point Dam. It was here that a lengthy and important council was held with a delegation of Sioux Indians comprising five chiefs and 70 men and boys. On September 1 the Expedition camped at the lower point of Bon Homme Island, between Bon Homme County, South Dakota, and Knox County, Nebraska. On September 3, camp was made near Plumb Creek on the Nebraska side. Beaver lodges were observed in great numbers on the river at this point. On September 4 the party camped just above the Niobrara River on the south side and on September 7 the Expedition entered what is now South Dakota.
Some of the Lewis and Clark Expedition's camp and council sites located along the Missouri River in Nebraska can be visited today. One of the most important sites, the Council with the Oto and Missouri Indians, is located approximately 13 miles north of present-day Council Bluffs, Iowa, near the town of Fort Calhoun, Nebraska. The military post of Fort Atkinson, 1819-1827, which later occupied the site of the original "Council Bluffs," is now a Nebraska State Park project. The burial place of the great Omaha chief, Blackbird, visited by Lewis and Clark in 1804, can be visited near the town of Macy.
Present-day explorers can also visit several fur trading posts and military forts in Nebraska along the route taken by the Expedition. The Lewis and Clark Trail is intersected by the Mormon Trail at Omaha and branches of the Platte River route to California and Colorado began at Plattsmouth and Nebraska City, also on the Missouri.
Much has been done to preserve some of the historic sites in Nebraska and much remains to be done. The Fontenelle Forest, a 1,300-acre virgin forest south of Omaha, has been designated a Natural History Registered Landmark. The Leary Site near Rulo and Walker Gilmore Site near Murray (both prehistoric Indian remains) have been approved for Registered National Historic Landmark status.
In addition to historic sites, Nebraska has much to offer the fisherman and the hunter. The terrain along the Missouri River shore in Nebraska is mostly flat, with some bluffs and numerous islands, marsh lands, and wooded areas. Many of the natural oxbow lakes, such as Carter Lake, are being used now as recreation areas and provide excellent warm-water fishing. This area also provides good habitat for white-tailed deer and some upland game bird hunting. Thousands of ducks and geese follow the Missouri River each spring and fall and provide a major recreation resource for Nebraskans. One fish and wildlife area, the 7,800-acre De Soto National Wildlife Refuge, is being developed by the Bureau of Sport Fisheries and Wildlife. New recreation regulations, designed to permit wider use of National refuges, have been promulgated by the Department of the Interior. The regulations permit increased public recreation use where it is compatible with the primary conservation purpose of an area.
Nebraska's hot, humid summers make water-based recreation especially desirable. Winters are usually too variable to provide for extensive winter sports, but the State has beautiful and enjoyable spring and autumn seasons. At this time of the year visitors to such areas as Lewis and Clark Lake make good use of the reservoir for fishing, boating, swimming, and water-skiing, and picnicking and overnight camping are enjoyed along the shore.
Although typically a plains State, Nebraska boasts two National forests. It has beautiful State Parks, two National monuments, a National historic site (Chimney Rock), year-round fishing, excellent upland game bird hunting, and a host of rodeos, frontier forts, and Indian pageants, as well as four Indian tribes. The Winnebago, Omaha, Ponca, and Santee reservations are located along the Lewis and Clark Trail.
Within 25 miles of the Trail there are 39 existing and 27 proposed points of recreation interest with a total of 18,659 acres. These include prehistoric Indian village sites, trading posts, State parks and recreation areas, a number of historic sites and a Federal dam and reservoir. Nineteen areas provide water-based recreation and 24 additional water-based recreation areas are proposed for development by State and Federal agencies. Existing facilities provide opportunities for camping, picnicking, hunting, hiking, horseback riding, sightseeing, and nature study.
A detailed list of existing and proposed points of recreation and historic interest along the Trail and pertinent information concerning each are found in the tables on pages 128 to 134. Maps accompanying the tables show the location of all the areas.
The rising demand for outdoor recreation in Nebraska is closely associated with the trend toward urbanization. Although the State's total population increased only 6.5 per cent between 1950 and 1960, both the cities of Omaha and Lincoln grew by more than 26 per cent.
The State's population is expected to continue to grow at about the same rate. In 1960 Nebraska's population was 1,411,330. By 1976 it is expected to rise to 1,719,000 and by the year 2000 to reach 2,368,000. Approximately 35 per cent of the people live in the counties bordering the Missouri River, and the cities of Omaha, population 301,598, and Lincoln, population 128,521, are less than an hour's drive from the Trail. Thus the Missouri River is a major recreation resource close at hand for two-thirds of the State's people.
Virtually all of the existing recreation facilities along the Missouri River are receiving heavy use. From 1958 to 1963 the total annual recreation visitor-days on reclamation projects in the state increased 50 per cent, from 479,914 to 719,000. Over the same period, visitor-days on Corps of Engineers projects increased 127 per cent, from 280,000 to 637,000.
Highway construction plans and average daily highway traffic flows also indicate that demand for outdoor recreation will continue to rise. Interstate 80, one of the major east-west highways, bisects Nebraska and will carry a great portion of the east-west traffic through the Midwest. Its completion will directly connect the large urban centers of the Great Lakes region and the Nebraska portion of the Lewis and Clark Trail. Interstate 29, although in Iowa, will follow the Iowa-Nebraska border and will channel considerable traffic from the large urban centers to the south into the Omaha-Council Bluffs area. In 1963 U.S. Highway 73, which parallels the Missouri River through Nebraska for nearly three-fourths of the river's length, had a 24-hour traffic flow averaging in excess of 2,000 vehicles between Omaha and the Nebraska-Kansas border. North of Omaha, U.S. Highway 73 carried a traffic load of between 1,500 and 2,000 vehicles per day. East-west Interstate 80, leaving Omaha, had a recorded traffic flow of between 5,000 and 7,000 vehicles.
Recreation development planning along the river in Nebraska must therefore take into account a large number of transient, nonresident users. The completion of the Federal Interstate Highway System will be an important factor in determining the effective supply of, and demand for, recreation facilities in Nebraska. Moreover, the completion of the Lewis and Clark Trail program and the adequate promotion of the Trail will, in themselves, increase recreation demands along the Missouri River. How much of this recreation demand will occur at historic, wildlife, and other recreation sites along the Lewis and Clark Trail will depend largely on the interest aroused in the Trail and on the quality of the effort made to identify, mark, and develop areas along the route.
The Omaha Indian Reservation offers an especially attractive opportunity to provide outdoor recreation for regional needs while also improving the tribal economy. The Omaha Tribe played an important role in the Lewis and Clark Expedition and a major role in the history of the region, a story which can be presented to the public in interpretative displays and programs.
The Omaha Reservation is ideally suited and situated for outdoor recreation use. It lies in one of the few areas along the Missouri River that contains rolling, wooded, scenic terrain close to the river which is untouched by man-made developments. Fishing, water sports, and historic and archeologic interpretation would be made available without impairing the natural beauty of the setting. With proper financing, planning, and management, the contribution of the reservation to the recreation resources of the Lewis and Clark Trail could be invaluable.
A Development Plan for the Omaha Reservation has been prepared with the assistance of the Aberdeen Regional Office of the Bureau of Indian Affairs. Portions of the plan have been initiated.
The Aberdeen office also has undertaken an ambitious long-range plan to guide the development of non-commercial and commercial recreation on several other reservations under its jurisdiction. Three reservations in Nebraska, the Winnebago, Ponca, and Santee, are included in the program. The plan is designed to establish policies, principles, and procedures which will develop sound recreation planning to meet the needs of the various tribes and the recreation demands of non-Indians.
In establishing the regional recreation plan, the Bureau of Indian Affairs is stressing the advantages of tying the recreation aspects of the reservations to a collective regional recreation unit. Such a collective arrangement would increase purchasing power and permit adequate advertisement, merchandising, promotion, and overall professional management of the various recreational complexes.
A recreation development plan for additional areas along the Missouri River is being prepared by the Omaha District Office of the Corps of Engineers. This report, to be entitled "A Preliminary Recreation Master Plan for the Missouri River, Rulo, Nebraska, to Sioux City, Iowa," is due for completion in the near future. It will be a companion to a similar report for the Missouri River from Rulo, Nebraska, to St. Louis, Missouri, published in March 1964 by the Corps of Engineers.
The Corps' plan will call for the creation of numerous small public-use recreation areas along alternate banks of the Missouri River. In the first report, these public-use recreation areas were selected in the vicinity of the Lewis and Clark campsites of 1804. In many cases in Nebraska, locating or marking actual campsites is impossible. In such instances, the plan calls for erecting appropriate markers in nearby public use areas. Each area would be provided with an access road, parking, camping spaces, water, and sanitary facilities. Boat ramps and group shelters also would be provided in most instances.
The city of Omaha has initiated a program to acquire and develop a 1,000-acre park which would include five to six miles of Missouri River frontage just east of the present levee surrounding Eppley Airfield. In the vicinity of the park is the Expedition campsite of July 27, 1804. The proposal is set forth by the Omaha Chamber of Commerce, Park and Recreation Committee, in a publication entitled "A Proposed River Park for Metropolitan Omaha."
1. The State Boundary
A long-standing boundary dispute between Nebraska and Iowa has been a serious handicap to the development of the Missouri River's recreation potential in both States. Failure to compromise on a new State boundary based upon the new stabilized channel of the river has resulted in a legal problem that has severely retarded recreation expansion.
Historically, the boundary between Nebraska and Iowa has been the center of the channel of the Missouri River. Because of the natural changes in the channel and further alterations occurring from channel stabilization by the Corps of Engineers, the river no longer follows the old river bed. Thus a redefinition of the States' boundary became necessary.
In 1943, Nebraska and Iowa compromised on a new boundary and defined it to be the center of the channel as shown on certain alluvial plain maps of the Missouri River. Since the Second World War, additional channel work has been required. As a result, some 40 miles of the river now lie wholly within the State of Nebraska because the State boundary did not change with the location of the new channel and the new channel does not follow the maps adopted in the 1943 compromise. Moreover, several thousand acres of land and water thus legally within the State of Nebraska lie east of the channel, while certain Iowa lands and waters lie west of the new channel.
Much of the affected land has high recreation potential. Some areas are oxbows, cut off by the channel work. Including both Iowa and Nebraska lands, they generally lie east of the channel. If Iowa were to develop these oxbows for recreation, the State's funds would be expended for development that mainly would benefit citizens of Nebraska. The Nebraska legislature, however, has not authorized use of the power of eminent domain to acquire any of these areas through condemnation. Moreover, where an oxbow is cut off from the river, a Nebraska resident would have to enter the area over Iowa ground, creating problems in wildlife law enforcement. Reciprocal fishing regulations have been established, however, that are satisfactory to both Nebraska and Iowa. To resolve the boundary dispute, members of the Nebraska legislature have been appointed to a committee to meet with a similar committee in Iowa to work out mutually acceptable solutions. Both committees have recommended to their respective legislatures that the boundary between the States be established at the median line, or middle of the Missouri River, as it is now stabilized by the Corps of Engineers, and that the boundary remain the middle of the stabilized channel, as determined by any future changes through channelization work.
The Governors of Nebraska and Iowa have concurred in the above recommendations and have asked their respective legislatures for affirmative action. Neither legislature has accepted the recommendations.
2. Land Ownership Laws
A difference in land ownership laws in Iowa and Nebraska poses complex problems concerning public acquisition and recreation improvements in certain areas. Lack of information on exact ownership boundaries prevents both Iowa and Nebraska from acquiring lands needed for access to water or for shoreline development.
In Nebraska the law provides that riparian owners have title to the bed of the river to the center of the channel, or to the described boundary line, whichever the case may be. Thus, all lands in a proposed project area lying west of the Iowa boundary but east of the new channel are in Nebraska and owned privately. If the State of Iowa were to purchase such land for project improvements, it would mean that Iowa would own lands in another State.
In Iowa all lands below the mean high-water mark and the center of the channel, or described boundary line, are State property. Thus it is conceivable that Iowa could sell lands to Nebraska owners that lie west of the new channel. Islands in meandered streams also are held to be the property of the State of Iowa. Private individuals, however, have contested the right of the State to own bottom lands under this law and have brought the State of Iowa to court tests. In at least one decision, the courts have declared islands to be State owned. Settlement of title to such lands involves the slow, but necessary, actions of the courts.
3. Pollution of the Missouri River
Pollution, an historic problem of the Missouri River, remains an important barrier to recreation use of this great waterway. The dumping of sewage wastes, especially paunch manure from the Omaha packing industry, continues, although Federal and State laws are attempting to curtail such actions.
A basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Trail has been outlined on page 20 in the Recommended Program. The recommended routing of a Lewis and Clark Trail Highway in Nebraska is indicated on maps 4-7. Specific recommendations relating only to Nebraska follow:
1. The more important sites along the Missouri River associated with the Lewis and Clark Expedition should be protected and interpreted for public use; appropriate sites should be accorded National Historic Landmark status.
2. Recreation features of the Omaha Reservation Development Plan should be completed as promptly as possible.
3. The regional recreation plan for the Indian reservations should include appropriate recreation, historic, and archeologic sites which can be identified with the Lewis and Clark Trail.
4. Completion of the recreation development plans of the Corps of Engineers for the Missouri River from Rulo, Nebraska, to Sioux City, Iowa, should be expedited. Insofar as possible, Lewis and Clark Expedition campsites should be chosen for protection, interpretation, and development for public use and enjoyment. State and local agencies able to participate in the development program should be prepared to give prompt attention to the proposals.
5. The long-time boundary dispute between Nebraska and Iowa should be resolved through the joint acceptance of the interstate commissions' recommendations by the Iowa and Nebraska legislatures.
6. The Nebraska and Iowa Interstate Boundary Commissions should study the problems arising from the differences in state land ownership laws and present recommendations for legislative action to make the respective legal changes necessary to resolve these problems.
7. Pending the resolution of legal complications, the numerous islands in the Missouri River flood plain which possess possibilities for recreation development should be preserved and protected in their primeval state. Their permanent management in the public interest should be planned as soon as possible.
8. Because future channel work will cut off many oxbows that possess high potential as public recreation areas, efforts should be made to protect these oxbows from sand-carrying river flows by the strategic placement of impervious levees at their upper or lower ends, or both.
9. The proposed river park for metropolitan Omaha should be given early and serious consideration.
10. Consideration should be given to the development of Dodge Park, along the Trail in Omaha.
The route of Lewis and Clark through South Dakota has been dramatically transformed into a string of giant reservoirs referred to as the Great Lakes of South Dakota. These huge impoundments have harnessed the river, eliminating the unpredictable floods that were long a menace to this section of the river valley. Water backed up by this series of dams already has inundated most of the Missouri River bottomlands of South Dakota, covering many historical locales and thousands of acres of prime wildlife habitat.
The Lewis and Clark Expedition travelled through the South Dakota area on its outward journey and on its return. Going west, the Expedition spent 53 days here. Exactly two years later, homeward bound, the Expedition reentered the area. Only 14 days were spent here as they hurried past the troublesome Teton Sioux on their way to St. Louis. The visitor to South Dakota can still visit some of the sites described in the Lewis and Clark journals.
Total visitors to the Great Lakes of South Dakota number nearly 2.5 million annually; completion of the road system is expected to raise the annual visitation figure to 10 million. The number of tourists in South Dakota now is estimated at approximately four million. By 1970 the tourist volume is expected to be between five and six million people annually and tourist expenditures are expected to reach $700 million.
South Dakota has a great deal to offer the vacationing tourist as well as the State resident. The most attractive area naturally is the Great Lakes of South Dakota and the intervening stretches of the Missouri, for water-based recreation is always appealing. Facilities now available are well patronized, and the trend is toward even greater use. Along the Lewis and Clark Expedition route in South Dakota are 135 recreation sites totaling 22,809 acres. Of this number, 90 sites are now available for use; the rest are proposed for development by various State and Federal agencies.
A 10-year program to provide road access to the Missouri River reservoir shoreline has been developed and construction has begun. This program will include scenic routes down both sides of the reservoirs, as well as access routes to the recreation areas. With the completion of this road network and the continued development of recreation facilities, the Great Lakes of South Dakota easily will constitute the most extensive water-based recreation portion of the entire Lewis and Clark Trail.
Although South Dakota has much to offer the visitor interested in historic and archeologic sites and other recreation attractions, more facilities are needed. Implementation of the plans prepared by several State and Federal agencies for the development of recreation areas along the reservoirs should go a long way toward meeting these needs.
The Lewis and Clark Expedition had been on the river exactly three months when they entered the South Dakota area. The first camp on August 21, 1804, was approximately four miles above the Big Sioux River but on the Nebraska side.
The next day, Captain Lewis became ill after closely examining the minerals in Nicollet's Dixon Bluffs. He "was near being poisoned" when he smelled the fumes of the minerals. He identified them as alum, copperas (ferrous sulfate), cobalt, and pyrite. From this experience came the theory that these minerals might have caused the stomach disorders the men had suffered since the party passed the Big Sioux River.
The next day the Expedition camped at Elkpoint, so named because of the elk found there. On August 23 the first buffalo was killed near camp and two barrels of meat were salted. The following day the discovery of a bluff "too hot for a man to bury his hand in the earth at any depth," and buffaloberries, which made "delightful tarts," was noted in the journals.
Lewis and Clark made a side trip, while in this vicinity, to see for themselves a mound which was regarded with awe by all the nearby Indian tribes. It is known as "Spirit Mound," located in present-day Clay County. The Indians believed the mound to be inhabited by little devils in human form about 18 inches high, with large heads, and armed with sharp arrows. After a tortuous walk in excessive heat, the exploring party reached the mound and climbed it. On it they found only a "multitude" of birds, to which they ascribed the Indians' superstition. Heat and thirst forced them from the hill about 1:00 p.m. The next day, August 26, they obtained several elk and deer, "jurked" the meat, and wove a new tow rope from the hides. Camp was made at Audubon's Point and the prairie set on fire as a signal for the Sioux to come to the river.
Upon passing the James River, the Expedition made its first contact with the Sioux. Sergeant Pryor went up the James to the Yankton Sioux village and returned with five chiefs and 70 men and boys. On August 28, they started a four-day stay at Calumet Bluff, where they erected a flag pole. The next day, under a large oak tree near the flag, a council was held at which Captain Lewis delivered a speech and gave presents to the Indians.
The chief received a richly laced uniform coat of the United States Artillery, with a cocked hat and red feather. The peace pipe was smoked and the chiefs retired to divide their presents. The following day, the grand Chief Shakehand spoke at some length, approving what Captain Lewis had said and promising to make peace with the Oto-Missouri Indians. The chief remarked that white men so far had given him only medals and very little clothing, and he desired something for his women and children, who had no clothing.
Many of the lesser chiefs also spoke, describing the distress of the Nation and begging for pity and for traders to be sent them. They wanted powder and ball and a supply of their great father's "milk," later to be called "firewater" by the red man. The journals for the days spent at Calumet Bluff (very possibly a part of Gavins Point Dam) contain many interesting comments. The Indians were described as "stout, bold-looking people. The young men hansom and well made, verry much deckerated with paints, porcupine quills and feathers, large leagins and mockersons, all with buffalo roabes of different colours; squars ware peticoats and a white buffalo roabe."
By September 1 the Expedition had arrived at Bon Homme Island. There they discovered and mapped some ancient works known today to be water-made sand dikes.
"The Tower," a famous landmark, was reached on September 7, and the day was spent in carrying water from the river to drown out a prairie dog for a "specimen."
On September 8 the Expedition passed the Trudeau or Pawnee House, built in 1795, and Fort Randall site, camping on Big Cedar Island. The next day, on a hill to the south, the 45-foot backbone of a fish was found in a perfect state of petrification. These petrified bones later were determined to be the tail of a plesiosaurus; the relic eventually was placed in the Smithsonian Institution in Washington, D.C.
On September 11 George Shannon rejoined the party after having been missing since August 28. He had been sent out in search of two horses, and had been following the bank of the river ahead of the party for 16 days.
On September 14 the southern shore was searched all day, in vain, to find an ancient volcano which the captains had heard at St. Charles was somewhere in the neighborhood. This "volcano" was in reality a burning bluff; it is located near Wheeler Bridge.
The mouth of the White River was reached on September 15. Here Captain Clark saw and shot his first antelope. On September 16 and 17 the Expedition paused in Pleasant Camp near American Crow Creek, to rest and recuperate.
On September 19 favorable winds carried the group past the "Three Rivers of the Sioux" (Crow, Elm, and Campbell Creeks) and on to the gorge at Big Bend. The following night a near disaster occurred for the Expedition when the bank beneath which the men were sleeping caved in and nearly swamped the boats. On September 21 the Expedition passed the trading post of Registre Loisel, the only white man for hundreds of miles around, and camped at Chapelle Creek.
On September 23 three boys of the Teton Sioux Nation swam the river and informed the party that two groups of Sioux were camped on the next river. The following day, camp was made on an island in the river, 70 yards out from the mouth of Bad River near the present site of Pierre. At noon the next day the party met on shore with the Indians. The captains gave the Indians one-fourth of a glass of whiskey apiece. They grew very insolent. The chief, Black Buffalo, ordered his men to hold the pirogue and one leaped on board and hugged the mast. The Indians jostled Clark and he drew his sword and signaled the larger boat to ready its small cannon. At this show of force, Black Buffalo called off his men. Clark then rejoined Lewis and the Expedition continued up the river. On succeeding days council was held with the Indians not far below the present Oahe Dam.
On October 8 the Expedition passed the Grand River and reached the Arikara villages, located about eight miles due east of present Wakpala. Here some time was spent with the Indians in council and gifts and food were exchanged. On October 13 a small creek on the south was named Tocasse (now Kunktapa) in honor of the chief of the second Arikara village.
John Newman was tried on October 14, found guilty of "mutinous expression," and sentenced to 75 lashes. A nine-man court of privates pronounced the sentence. The party halted on a sand bar and after dinner the sentence was executed. Newman was discharged as a member of the permanent party and sent back to St. Louis in the spring of 1805. This was the first judicial punishment carried out in what is now the State of South Dakota and perhaps the first and last legal flogging. The next day, October 15, the Expedition entered what is now the State of North Dakota.
On August 21, 1806, just two years after Lewis and Clark first entered South Dakota at the Big Sioux River, they again entered the State on their return journey. Two days were spent at the Arikara villages; food supplies were down, so corn was bought there. The men now depended almost entirely on hunting for their food.
Proceeding down river, the Expedition camped on sand bars and hurried by the troublesome Tetons. The captains found the flag pole at Calumet Bluff still up. They passed a trading post on the James River that one Robert McClellan had erected and abandoned during the time they had been in the West.
At 11:00 a.m. on September 4, 1806, the Expedition left what is now South Dakota. Fifty-four days had been spent there, outward bound, and 14 days on the return trip.
The Expedition had crossed South Dakota by the most obvious and practical route, the Missouri River, which meanders down the center of the State through rolling prairie land.
Long before Lewis and Clark, the Verendrye brothers, first known white men to visit South Dakota (1743), had envisioned that the river would lead them across the vast unknown to a western sea. After Lewis and Clark disproved this myth, the beaver trappers came by the hundreds in keel boats, pirogues, and dugout canoes and established trading posts along the river banks. South Dakota was much involved in the fur trade on the upper Missouri. The trade in wild furs, especially beaver and buffalo, lasted some 40 years.
Later came the steamboats that carried thousands of emigrants to the newly opened Northwest Territory. The "canoes that walked on the water" navigated the river as far as Fort Benton, Montana. Later the Missouri River became the highway for thousands of gold seekers headed for the gold fields of Montana, Idaho, and the Black Hills. Today the fabled route of Lewis and Clark in South Dakota has been transformed into the Nation's longest chain of lakes, formed by four giant dams across the Missouri River.
The route of Lewis and Clark through South Dakota via the Missouri River abounds with a rich array of historic sites and affords an outstanding potential for outdoor recreation. The visitor will find that several of the sites described by Lewis and Clark can still be observed. Nicollet's Dixon Bluffs, where Captain Lewis became ill, are just across the Nebraska-South Dakota border. Spirit Mound, mentioned in detail in the journals, is located in Clay County.
Other sites which can be visited have been altered considerably since 1804-06. Calumet Bluff, the scene of a four-day council with the Indians, is now a part of Gavins Point Dam. The city of Pierre now stands where a near fatal clash with the Teton Sioux almost caused the loss of the entire Expedition.
Although the famous Indian-woman guide of the Expedition, Sacagawea (also spelled Sacajawea and Sakakawea), did not join the explorers until they reached North Dakota, a suitable marker to the bird-woman has been erected on U.S. Highway 12, on a hilltop west of the Missouri River near Mobridge, South Dakota.
Just three miles west of Mobridge is the grave of Sitting Bull. The famous Sioux leader was killed near there in 1890. An appropriate marker interprets the burial site.
Tourists following the Lewis and Clark Expedition route through South Dakota may visit six Indian reservations: Yankton, Rosebud, Lower Brule, and Crow Creek, all lying south of Pierre; and Cheyenne River and Standing Rock, north of Pierre. Ancestors of the present Indians held councils with Lewis and Clark. The South Dakota Historical Society, with the assistance of private contributions, has erected many historic markers throughout the State in a graphic and appealing manner. Although a few historic and archeologic sites along the Missouri have been marked and developed, the Historical Society, lacking sufficient funds and authority, has been unable to develop many of the sites which have not been inundated by the reservoirs.
Some prehistoric Indian sites have been approved for Registered National Historic Landmark status by the Department of the Interior. These include: The Arzberger site, near Pierre; Fort Thompson Mounds, near Fort Thompson; Crow Creek site, on Fort Randall Reservoir; Langdeau site, near Big Bend; and Malstad Village near Mobridge.
Recreation opportunities are many and varied. Tourists and residents alike are drawn to the many excellent natural fishing lakes, the Great Lakes created by impoundment of the Missouri River, the National Monuments, and National and State parks and forests throughout the State. South Dakota is internationally famous as the pheasant capital of the Nation. Hunters from distant States and Canada harvest some three million ringnecks annually. Excellent upland game hunting is found in many counties along the river valley; deer are hunted along the river bottom and pronghorn antelope are harvested through restrictive management procedures along the plains adjacent to the river. Waterfowl hunting is a major fall outdoor pursuit.
There are two National Wildlife Refuges along the Trail in South Dakota. The Lake Andes Refuge is located at Lake Andes, six miles north of the Fort Randall Dam. Pocasse National Wildlife Refuge is located near Pollock, a few miles south of the North Dakota State line. These refuges provide opportunities for wildlife observation, photography, sightseeing, interpretive programs, fishing and hunting, picnicking, swimming, and boating. New recreation regulations have been promulgated by the Department of the Interior permitting wider use of National refuges and other Federal wildlife conservation areas where it is compatible with the primary conservation purpose of an area.
The "Great Lakes of South Dakota" form an important asset to this plains State. They provide water for electrical power, irrigation, and downstream navigation, and are creating one of the Midwest's greatest water sports areas. The lakes can be reached in less than four hours' driving time from the State's two largest citiesSioux Falls and Rapid Cityand from Sioux City, Iowa.
Beginning at the Nebraska-South Dakota State line is Gavins Point Dam at Yankton, completed in 1957, impounding a lake 37 miles long and appropriately called Lewis and Clark Lake. Just upstream is Fort Randall Dam at Pickstown, completed in 1956 and forming a 140-mile-long reservoir. At the upper extremity of this reservoir, Big Bend Dam is under construction at Fort Thompson; it is scheduled for completion in 1966. Big Bend will make a lake 80 miles long, reaching almost to Pierre. Just upstream from Pierre is Oahe Dam, completed in 1963, which creates a reservoir 250 miles long. These lakes provide a shoreline almost as long as our Pacific coast, and impound some 1,000 square miles of water.
Recreation use of these reservoirs includes motorboating, sailing, swimming, water skiing, skin diving, and excursion-boat tours to what remains of the historic sites. Thirty-three varieties of fish are caught in their waters, providing some of the finest lake fishing in America. Motels, campsites, and other recreation areas are within easy access of the lakes.
An inventory of historic, wildlife, and other recreation sites revealed a total of 135 sites within about 25 miles of the Lewis and Clark Trail in South Dakota. Ninety are existing and 45 are proposed. Forty-six existing areas provide water-based recreation and 29 additional water-based recreation areas are proposed for development by State and Federal agencies on the Missouri River. The total amount of land and water included within the recreation sites is 22,809 acres. All forms of water-oriented recreation, and camping, picnicking, hunting, hiking, horseback riding, sightseeing, and nature study are provided. A detailed list of existing and proposed points of recreation and historic interest along the Trail and pertinent information concerning each are found in the tables on pages 132 to 138.
Tourism is important in South Dakota. The State's tourist volume rose from 1.2 million in 1946 to 2.3 million in 1955, with tourist expenditures rising during this same period from $66 million to $90 million. The number of tourists now is estimated at approximately four million; each stays within the State an average of just over four days, and together they spend an estimated $150 million annually. By 1970 the South Dakota Industrial Development Expansion Agency predicts a tourist volume of 5-6 million people annually, with an average stay of 6-8 days and an expenditure of $700 million.
Recreation use of Bureau of Reclamation projects increased from 644,732 visitor days in 1958 to 1,031,000 visitor days in 1963. On Corps of Engineers projects, visitor days increased from 2,017,000 in 1958 to 2,417,000 in 1963. Visitor days at the State recreation areas climbed steadily from 3,806 in 1958 to 5,235 in 1962.
Transient users are playing a prominent role in South Dakota's outdoor recreation, far overshadowing the local users. In 1962 the Bureau of Reclamation reported that 79 percent of the recreation utilization on its projects in South Dakota came from other than local users.
The increasing population of South Dakota will place a greater demand on recreation facilities. The 1964 population of South Dakota was approximately 682,000. The projected population is 796,000 for 1976 and 1,083,000 for the year 2000.
South Dakota is traversed by considerable traffic bound for the Northwestern section of the United States and such population centers as Minneapolis, St. Paul, and Chicago. The State will shortly be crossed by two Interstate highways. Northbound traffic from Kansas City, Omaha, Council Bluffs, and Sioux City will enter the State on Interstate 29, and westbound traffic will enter near Sioux Falls on Interstate 90.
The annual average 24-hour daily traffic flows give some indication of the potential recreation demand. A major tourist traffic route spanning the State is the east-west U.S. 16, which will later be replaced by Interstate 90. From Sioux Falls to Rapid City, this highway carries an annual 24-hour average traffic flow of approximately 1,800-2,500 vehicles. Approximately 250-300 of these vehicles constitute commercial traffic. The next most heavily traveled east-west through highway is U.S. 14, with a total traffic volume between 1,300 and 1,400 and a commercial volume of less than 200. U.S. 12, which crosses the Missouri River near Mobridge, carries a total traffic flow between 1,000 and 1,100 vehicles.
An important north-south through highway, in the vicinity of the Missouri River, is U.S. 83, which carries a traffic volume averaging between 500 and 800 vehicles. State Highway 50 follows the Missouri River from Vermillion to Yankton and finally to Chamberlain. Its volume is quite heavy to Fort Randall Dam, where it drops off sharply, with the traffic flow continuing east on U.S. 18. Interstate 29, between Sioux City, Iowa, and Sioux Falls, South Dakota, carries a total traffic flow in excess of 2,000 vehicles per 24-hour period.
Important to the historic and recreation enhancement of the Trail are the six Indian reservations which lie along the route of the Expedition through South Dakota. These reservations and the various tribes have played a vital role in the history of this region and the Expedition itself. The Yankton Indian Reservation in Charles Mix County, for example, has many historic sites associated with the Lewis and Clark Expedition. The Expedition camped on what is now the Yankton Reservation on September 5, 6, and 10, 1804, and on August 30, 1806. Plans for development of recreation facilities on Indian lands bordering the Trail are under way.
The Aberdeen Regional Office of the Bureau of Indian Affairs has undertaken a long-range plan to guide noncommercial and commercial recreation development on the reservations under its jurisdiction. The objective of this plan is to establish policy, principles, and procedures that will develop sound recreation planning to meet both the needs of the various tribes and the recreation demands of the non-Indian. The plan will stress the advantages of tying the recreation aspects of the reservations to a collective regional recreation unit. Such collective arrangement has the advantage of added purchasing power, advertisement, merchandising, promotion, and overall professional management of the various recreation complexes.
The National Park Service, at the request of the Cheyenne River Sioux tribe, has developed a general recreation development plan for an area adjacent to the western termination of the bridge carrying U.S. Highway 212 across Oahe Reservoir. Detailed plans for the financing, development, and construction will be undertaken shortly. The area is already a popular fishing spot, despite a lack of basic recreation facilities.
A large recreation complex is proposed near Big Bend Dam. The plan, prepared by Harland and Bartholomew and Associates, proposes an investment of over $2 million in the Councilor Creek Bay Area. A motel, marina, restaurant, home sites, and several hunting areas on the Lower Brule and Crow Creek Reservations are included. Consideration was also given to preservation and development of historic and archeologic sites and an interpretive program for tourists within the reservations. Plans call for the two tribes to form a joint corporation or enterprise to carry out this project. This development could well become one of the major recreation attractions along the Lewis and Clark Trail in this area.
To meet the pressing problem of dwindling unspoiled recreation resources and the mounting need of residents and tourists alike, the State Game, Fish, and Parks Department has undertaken a 20-year land-acquisition program.
The whole composition and complexion of recreation use on the "Great Lakes" will be intensified by the completion of a perimeter road system. A 10-year, 1,000-mile road program to open the extensive shoreline of the lakes is now under construction. The system will include scenic routes down both sides of the reservoirs, plus access roads to all recreation areas. These perimeter highways and access roads will open up vast areas to recreation and business development and greatly increase the State's tourist volume.
The Corps of Engineers has developed extensive plans for public-use recreation areas along the reservoirs in their recreation master plans. Data concerning proposed and existing sites are included in the inventory and the locations are shown on the maps accompanying this report.
A development program for the historic, wildlife, and recreation resources along the Lewis and Clark Trail appears in the Recommended Program, page 20. The recommended routing of a Lewis and Clark Trail Highway in South Dakota is indicated on maps 6-9. Recommendations relating specifically to South Dakota follow:
1. Additional camping, boating, fishing, and swimming facilities should be developed below the Fort Randall Dam.
2. The regional recreation plan for the Indian reservations prepared by the Bureau of Indian Affairs should include recreation, historic, and archeologic sites which can be identified with the Lewis and Clark Expedition route.
3. Preparation of detailed plans for the financing, development, and construction of a recreation area on the Cheyenne River Reservation should be completed as soon as possible.
4. Plans for a recreation complex near Big Bend Dam on the Lower Brule and Crow Creek Reservations should include historic and archeologic sites which can be identified with the Lewis and Clark Expedition route.
5. The State Game, Fish, and Parks Department should make a concerted effort to develop recreation sites along the Lewis and Clark Trail.
6. The 10-year, 1,000-mile perimeter road system on the "Great Lakes" should be completed as promptly as possible.
7. Roadside parks and overnight facilities should be developed along the "Great Lakes" perimeter road system by the agencies involved.
8. Plans of the Corps of Engineers for public-use recreation areas around the reservoirs should be implemented as soon as possible, giving priority to those sites associated with the Lewis and Clark Expedition.
9. The already existing Bureau of Outdoor Recreation Technical Coordination Committee should form the nucleus of the State Lewis and Clark Trail Committee and should undertake the development of an educational program for the Lewis and Clark Trail in South Dakota.
North Dakota's history teems with events connected with the opening of the Northwest. Here Lewis and Clark spent the winter of 1804-05, their longest sojourn in any of the future States, and built Fort Mandan north of present-day Bismarck. Here Sacagawea and her husband, Charbonneau, joined the Expedition. Military forts later were set up along this section of the Missouri. The fur trade flourished here for over 40 years. Mandan Indian villages were located along this reach of river and its tributaries, providing Lewis and Clark with abundant opportunity to study the way of life of the American Indian, one of the specific tasks assigned to the Expedition.
Traces of North Dakota's historic past may still be discerned along the Missouri. Although the waters of Oahe Reservoir and of 200-mile-long Garrison Reservoir have destroyed many historic sites, interested people still may visit the site of Fort Mandan where the Expedition wintered and the ruins or sites of numerous Indian villages described by Lewis and Clark in their journals. Military forts and fur trading posts are also found along the Trail. The Indian tribes which played such a prominent role in the history of this area are represented by their descendants, now mostly on reservations in the State. Two of these, Standing Rock and Fort Berthold, lie along the Trail.
A study of North Dakota's recreation potential shows that it is great, but far from being fully developed. Water-based recreation particularly shows promise: Garrison Reservoir alone offers some 1,500 miles of shoreline. The section of the Missouri extending south of the reservoir as far as Bismarck presents the longest single section of Missouri River bottom land in the Dakotas which is still in its original natural state.
Many types of water-based recreation can be enjoyed in this State. Waterfowl hunting is second to none in the Nation, for North Dakota leads in waterfowl production. Fishing, too, is outstanding here. Boating and related activities on Garrison Reservoir are excellent. These possibilities, combined with the historic, geologic, and archeologic points of interest, make the recreation picture an attractive one.
Within about 25 miles of the Lewis and Clark Trail in North Dakota 77 recreation sites exist and 22 more are proposed. Many provide water-based recreation. Several of the sites proposed for development by State and Federal agencies are on the Missouri River and its reservoirs. The total area of land and water within the 99 sites is over 72,000 acres.
Existing recreation facilities in North Dakota are well used. Attendance at North Dakota's State parks has nearly doubled during the past decade, and all attendance records for State and Federal recreation areas show a steady and continual increase.
Even more use of the State's recreation facilities is expected when the new highways begin to function. Interstate Highway 94 will channel considerable out-of-State traffic to the vicinity of the Trail from the east (the Minneapolis-St. Paul region) and from the west (Billings and other urban areas of Montana). This new highway undoubtedly will receive heavy use both from the Fargo area and from traffic diverted westward from Interstate 29, funnelling it toward the Lewis and Clark area. Perimeter road networks at Garrison and Oahe Reservoirs will help to guide people to recreation areas along the Trail if proper markers and directional information are provided.
Considerable efforts are being made to establish additional recreation areas and to meet the increased future use of areas along the Missouri River. Spearheading these efforts are the North Dakota State Historical Society, North Dakota Game and Fish Department, Corps of Engineers, and Bureau of Indian Affairs.
The recommended routing of a Lewis and Clark Highway in North Dakota is indicated on maps 9-12. Future development of recreation areas should include the Corps of Engineers' comprehensive plans for the Garrison and Oahe Reservoirs, and the Bureau of Indian Affairs' regional recreation plan for the Indian reservations.
The Lewis and Clark Expedition spent more time in North Dakota than in any other State through which it passed. Going west it entered the State on October 14, 1804, and departed on April 27, 1805. Homeward bound in August 1806, the Expedition spent only 10 days there.
The outward bound Expedition encountered great numbers of antelope along the banks of the river on its second day in the area. On October 18 the group reached the mouth of Cannonball River. The next day, in what is now Burleigh County, some 52 herds of buffalo and three herds of elk were counted in a single view. The captains also met several traders, mostly Frenchmen, along the upper stretches of the river.
The men continued upstream, observing deserted Mandan villages and making increasingly frequent contacts with the Indians. On October 29 Clark wrote, "After brackfust, we were visited by the old cheaf of the Big Bellies; this man was old and had transfired his power to his sun." On the same day members of the crew narrowly escaped a prairie fire which killed and burned several Indians.
On October 30 Lewis and Clark began to search for a wintering site near the villages of the Mandans, who agreed to provide the Expedition with corn as long as the supply held out. The site was found on November 2; it was four miles below the villages and six miles below Knife River, and was well supplied with wood for building houses and the fort. The location was "situated in a point of low ground, on the north side of the Missouri, covered with tall and heavy cottonwood." The next day the building began.
During the winter at Fort Mandan the captains frequently counselled with the Indians to obtain information about the country before them. Although the mercury occasionally dropped to minus 40°, adding frost bite to the men's problems, the crew prepared equipment and bargained for corn and foodstuffs for their trip up the river in the spring. During late winter the Expedition members made several dugout canoes from the large cottonwoods along the river bottom.
Several interpreters were hired. Among them was a half-breed named Charbonneau who requested that Sacagawea, one of his young wives, accompany him. Since she was a Shoshone Indian, the captains felt she would be valuable as an interpreter and agreed to the proposal.
During the latter part of March, the ice began to break up on the river. On April 5 the party began loading the boats and preparing to continue their journey to the coast.
While preparations were being made to continue upstream, the keelboat was loaded with plant and animal specimens, skins, skeletons, articles of Indian apparel, tobacco seed, and several cages of live animals. On April 7 the keelboat with 10 men left Fort Mandan and headed downstream for St. Louis. Only the best men were retained for the remainder of the journey. The unreliable had been weeded out and returned with the keelboat.
That same day the 33-person Expedition set out upstream from Fort Mandan aboard six canoes and two pirogues. On the morning of April 8 the Expedition passed the Knife River and on April 26 it reached the junction of the Missouri and Yellowstone Rivers. The next day it left what is now North Dakota.
The Expedition's return trip through North Dakota in August 1806 was a hurried one. The two captains, who had separated west of the Continental Divide, joined forces at a point not far from Sanish (near New Town), now under the Garrison Reservoir. A stop was made at the Mandan villages north of Bismarck. Here the Mandan chief, Big White, joined the party to make a visit to the Great Father in the United States.
North Dakota possesses extensive natural resources which have great recreation potential but which are only partly developed. Contrasts in terrain abound. Bordering the State on the east is the Red River and the rich, rolling flatlands of its valley. The Missouri valley dominates the western half of the State. There the land has been deeply eroded to form the canyons and buttes of the Badlands.
Lewis and Clark followed the meandering Missouri River all the way across what is now the State of North Dakota, a distance of about 320 miles. Although much of the river has changed since 1804-06, the visitor to North Dakota may still view sites described by Lewis and Clark. The site of Fort Mandan, where the Expedition spent the winter of 1804-05, is located about 60 miles north of Bismarck and 14 miles west of Washburn. Fort Mandan was abandoned when the Expedition left for the west coast, and the buildings were destroyed by the Sioux in 1805. The fort site itself has been washed away; however, there is a marker on a 30-acre site showing the former location of the fort. The area is maintained by the State Historical Society. Indian village sites, such as Double Ditch, Big Hidatsa, and Big White and Black Cat, may also be visited.
Near the mouth of Knife River, north of Stanton, are the remains of several large Hidatsa villages associated with Lewis and Clark's Expedition. The so-called Big Hidatsa village has been declared eligible for Registered National Historic Landmark status by the Secretary of the Interior. It is believed that it was at this village that Sacagawea and her husband, Charbonneau, resided before joining the Expedition. Sketches of this village were made by the famous artists, Catlin and Bodmer, in 1832-1833. The Minoken Indian village site near Minoken has also been declared eligible for Registered National Historic Landmark status.
On the extreme western edge of the State, near the confluence of the Yellowstone and Missouri Rivers, lies Fort Union, one of the major historic sites in North Dakota. An important hub of the American fur trade from 1828 to 1866, the fort also was associated with Indian affairs, steamboat navigation, and military activity of that period. Fort Union Trading Post was approved by the Advisory Board on National Parks, Historic Sites, Buildings, and Monuments in October 1962 as having National significance and status as a National Historic Site. Development of this Post as a National Historic Site has been proposed by the National Park Service. Another nearby historic site, Fort Buford, already has been developed by the State Historical Society and is now a park.
Although one of North Dakota's nicknames is the "Sioux State," the first Indian tribes in the State were the Mandans, who have left clear evidences of their gradual retreat up the Missouri River for modern-day explorers to observe. Other Indian tribes in the State at the time of the Expedition were the Hidatsa, Absaroka, Cheyenne, Assiniboin, Sioux, Arikara, Amahami, Cree, Ojibwa, and Chippewa. The State was carved from the territory occupied by these tribes through a series of Indian wars and military expeditions. No less than 10 military posts were established in North Dakota.
Today, four Indian reservations remain in the State. Two reservations, Standing Rock and Fort Berthold, are found along the Lewis and Clark Trail.
Many Expedition campsites and significant historical locales are within the reservations. With proper financing, planning, and management, the contribution of these reservations to the development of the Lewis and Clark Trail could be invaluable.
For many years the Missouri was the main thoroughfare in this area. From time immemorial the Indians' canoes floated on it. For over 40 years after the Louisiana Purchase in 1803 the fur traders carried on a flourishing business along its waters, taking advantage of the seemingly endless herds of buffalo and the abundant beaver. Now the river has been widened by dams which have backed up its waters to form Garrison Reservoir and the northern portion of Oahe Reservoir.
Although many historic and archeologic sites have been inundated by the two reservoirs, a considerable number can still be found along the river, some of which pertain to the Lewis and Clark Expedition. The 60-mile stretch of river from just below Bismarck north to Garrison Dam is especially valuable because it will remain much as it was when the Expedition passed through the area in 1804 and 1806.
Other areas are drastically changed. Fort Berthold Indian Agency was relocated and three towns (Sanish, Van Hook, and Elbowoods) were destroyed by the construction of Garrison Dam. Archeologists and historians of the Smithsonian Institution, National Park Service, and the State Historical Society accelerated their program of excavating Indian villages, army forts, trading posts, and other historic sites before inundation was complete.
In their place, Garrison Reservoir has provided all forms of water-based recreation activities (fishing, swimming, water skiing, boating, and camping) in a region which had been almost devoid of such facilities. With 1,500 miles of shoreline and 390,000 acres of water surface, Garrison Dam and Reservoir attracted 340,800 visitors in 1960. Total annual attendance in 1963 topped a half million visitors.
North Dakota still produces an abundance of wild game and fish. It is the Nation's largest and most important waterfowl producing area, and duck and goose hunting is a popular fall sport. Excellent upland game-bird hunting, including pheasants, wild turkeys, and sharp-tailed grouse, is found along the rolling hills and plains of the Missouri River. White-tailed and mule deer also offer considerable sport in the river bottoms and the rough western portion of the State. Garrison Reservoir, the State's largest body of water, offers a wide variety of game fish, including northern pike, walleyes, sauger, rainbow trout, channel catfish, ling, perch, and crappies.
Both Federal and State agencies have active fish and wildlife programs along the Trail. The Federal Bureau of Sport Fisheries and Wildlife operates the 13,500-acre Snake Creek National Wildlife Refuge on an eastern arm of Garrison Reservoir and the Garrison Dam National Fish Hatchery. New regulations by the Department of the Interior will increase the recreation value of National wildlife areas. The regulations permit more public recreation use of wildlife refuges, fish hatcheries, and other Federal wildlife conservation areas when it is compatible with the primary purpose of the area. The extent to which various types of outdoor recreation can be enjoyed is defined in the new regulations, as well as limitations for the collection of scientific specimens and artifacts.
The North Dakota Game and Fish Department has conducted a continuous program of wildlife management and habitat development on numerous selected areas within the Garrison Reservoir takeline. Most of this work has been concentrated on the lower end of the reservoir. Additional wildlife management areas have been continually added in recent years, and are providing good habitat protection to increasing wildlife populations. These State wildlife management areas are helping to replace, in part, the tremendous wildlife habitat losses that occurred with inundation of the river bottom lands. The wildlife production areas serve as public hunting and fishing areas and their annual use, despite the early stage of development in most cases, continues to increase. The areas all are owned by the Corps of Engineers and are leased for wildlife management purposes to the Game and Fish Department.
The public lands along the Trail, under the administration of the Bureau of Land Management, are remnant parcels, small in size (approximately 40 acres) and isolated from the major travel routes. Their highest recreation use will be for wildlife habitat and management. The Bureau's policy is to transfer to other land-managing agencies, such as the Game and Fish Department, those tracts having high public values. North Dakota is especially interested in the acquisition of the wetland tracts, and transfers are presently being made to the State under the provisions of the Recreation and Public Purposes Act.
Other recreation opportunities are expanding. Picnicking, swimming, and horseback riding are favorites, and with the completion of Garrison Reservoir and other water development programs, water-based sports and camping increasingly are being enjoyed. Gem and mineral rock collectors can explore a wide variety of rock formations and deposits millions of years old.
The recreation facilities of the State are becoming a prime factor in the development of a healthy economy. More than 60 State parks and historic sites and numerous roadside parks offer day and overnight camping facilities.
An inventory of recreation, historic, and wildlife areas within about 25 miles of the Lewis and Clark Trail in North Dakota revealed some 77 existing and 22 proposed recreation sites. Forty sites provide water-based recreation and five additional water-based recreation areas are proposed for development by State and Federal agencies. The total area of land and water included within the 99 sites is over 72,000 acres. Facilities existing and to be developed provide opportunities for most forms of water-oriented recreation, and also for camping, picnicking, hunting, horseback riding, hiking, sightseeing, and nature study.
A detailed list of existing and proposed points of recreation and historic interest along the Trail, and pertinent information concerning each, are found in the tables on pages 140 to 144.
Use of outdoor recreation facilities has been expanding rapidly in North Dakota. Out-of-State tourist expenditures, already a prime factor in North Dakota's economy, have paralleled recreation visitation. From $34,800,000 in 1951 they increased to $40,000,000 in 1955, according to a report by the National Association of Travel Organizations. The 1964 estimate is $659,000,000.
The 1960 census showed 632,446 people living in North Dakota, of whom 409,738 were in rural areas. From 1950 to 1960, however, urbanized areas grew 35 percent. The 1964 population was estimated at 638,200. By 1976 the population is expected to reach 695,000 and by the year 2000, 890,000.
Attendance at North Dakota's State parks has increased from 405,000 in 1958 to 597,150 in 1963. From 1958 to 1963 the annual days of recreation use on Bureau of Reclamation projects in the State increased from 118,700 to 553,000 and on Corps of Engineers projects from 382,000 to 920,000. The National Park Service estimates that North Dakota's recreation areas will receive a minimum of 2,100,000 recreation days of use by 1980. Although the attendance figures for recreation sites along the Lewis and Clark Trail have not been separately identified in available reports, use at sites along the Missouri River is increasing even more rapidly than for the State as a whole because of the reservoirs. Future demand for recreation areas along the Lewis and Clark Trail will depend on the interest aroused in the Trail and on the quality of the effort made to identify, mark, and develop areas along the route.
The major east-west traffic route intersecting the Lewis and Clark Trail at Bismarck is U.S. Highway 10, later to become Interstate 94. Traffic flow at present is greatest on this highway, with a daily volume of approximately 2,500 vehicles. U.S. 83, entering North Dakota from Pierre, South Dakota, is the primary existing highway north and south along the Missouri River between the United States border and Garrison Reservoir. The daily traffic volume along this highway averages about 1,100-1,300 vehicles per 24-hour period. U.S. 85, bisecting the Trail in the far western section of the State, carries a traffic volume of less than 1,000 vehicles per day.
With the completion of Interstate 94, a direct connection will be established between two of North Dakota's largest cities, Bismarck, population 30,600, and Fargo, population 46,662. Interstate 94 will also channel west-bound traffic from the Minneapolis-St. Paul area and east-bound traffic from Billings and other cities in Montana to the Trail. Interstate 29, the main north-south highway in the Upper Midwest, will carry traffic between southern population centers and from Fargo and Grand Forks to the border, where it will tie in with a Canadian route to Winnipeg. Considerable traffic is expected to turn west off of Interstate 29 onto Interstate 94, and follow it to Bismarck and the Lewis and Clark Trail area. In addition, consideration is being given to a possible Prairie-lands Parkway from Oklahoma to North Dakota where it would tie in with the Lewis and Clark Trail.
Traffic volume will increase materially with the completion of Interstate 94 and the Garrison and Oahe Reservoir perimeter road networks. Recreation areas along this route will, in turn, receive greater use if proper markers and directional information are provided the motorists. Properly developed water-based recreation areas near Interstate 94 and U.S. Highways 83 and 85 will be increasingly in demand in the future. Development of the recreation potential of the Missouri River Valley and Garrison Reservoir logically could meet this demand.
Reports published by the Outdoor Recreation Resources Review Commission reveal that water is a prime factor in most outdoor recreation activities, and that 44 per cent of the population prefers water-based recreation. In addition, water also enhances recreation on land. Choice camping sites and picnicking areas are usually those adjacent to or within sight of water and the touch of variety added by water enriches the pleasures of hiking or nature study. During the past 15 years, boating and water skiing have skyrocketed in popularity all over the country.
A 1961 study by the North Dakota Economic Development Commission reporting on recreation opportunities for small businesses in the State's vacation and recreation industry pointed out a universal lack of boating, swimming, and camping facilities for vacation travelers. Many recreation facilities did not provide for the State's internal demands, let alone the tourist industry.
Considering North Dakota's newly developed opportunities for water-based recreation, the improving highway facilities, and other trends augmented by an increasing population with higher mobility, income, and leisure time, outdoor recreation demands can be expected to grow significantly.
Much has been accomplished in North Dakota in the fields of historic and archeologic investigation and preservation along the Trail, but considerable emphasis now should be placed upon the development of natural and historic recreation areas to satisfy existing and future needs. North Dakota's most outstanding recreation resources are not yet despoiled and the opportunity for their care and improvement still exists. Preparation of these resources for public enjoyment will give a significant boost to the entire State's economy. Plans are underway in several areas.
Facilities at Garrison State Park at Riverdale soon will be expanded to include a modern motel. The area already has a boat dock concession, swimming beach, and numerous well-developed picnic, day, and weekend recreation facilities maintained by a permanent caretaker. The North Dakota Park Service also is undertaking planning and development along the Missouri River between Bismarck and Garrison Reservoir. On the shore of Oahe Reservoir in the Fort Rice area a sizable park is being planned to include the Huff Mandan Indian Village, a natural swimming beach, and the revamping of a railroad trestle to form a walking bridge.
Fort Abraham Lincoln, from which General Custer commenced his expeditions to the Black Hills and to the Battle of the Little Big Horn, is to be gradually reconstructed. A luxury motel will be built there by private business. The North Dakota Park Service is also negotiating to lease a big island about three miles below the fort. Development here will provide a very extensive recreation area just south of Bismarck and Mandan. Long-range plans have been made for a large park north of Stanton which would embrace the sites of three Hidatsa villages.
A large potential recreation site, on the east side of Oahe Reservoir, is the Beaver Creek area. No attempt has yet been made to develop this area because road expansion is needed on the east side of Oahe Reservoir to provide adequate access for recreation.
The Aberdeen Regional Office of the Bureau of Indian Affairs has undertaken an ambitious long-range plan to guide the development of non-commercial and commercial recreation on the various reservations under its jurisdiction. The objective of this plan is to establish policy, principles, and procedures that will develop sound recreation planning to meet the needs of the various tribes and the recreation demands of the non-Indian. In establishing the regional recreation plan, the Bureau of Indian Affairs is stressing the advantages of tying the recreation aspects of the reservations to a collective regional recreation unit. The advantages of such a collective arrangement involve added purchasing power, advertisement, merchandising, promotion, and overall professional management of the various recreation complexes.
The Four Bears Park Development, on the Fort Berthold Indian Reservation, is well on its way to becoming an important recreation development on Garrison Reservoir. Several years ago the three affiliated tribes of the Fort Berthold Reservation obtained a license from the Corps of Engineers to operate Four Bears Park. In the spring of 1964 a private firm, under contract with the Bureau of Indian Affairs, completed an intensive survey of the Four Bears area and recommended a plan for the development of commercial and associated outdoor commercial recreation facilities. The report clearly indicated that the recreation market for the park was largely potential and that facilities must precede development of a market. Further, it will be many years before such facilities can become self-sustaining.
Development of Four Bears Park has accordingly been very slow. In the past 18 months a museum has been completed. Accelerated Public Works Projects also have assisted in certain physical improvements, and the recent Dakota Cup hydroplane races have focused attention on the park. The park now offers tourists a marina, boat and motor rentals, fishing equipment, complete overnight camping facilities, summer rodeos, three major Indian celebrations, and reconstructed Indian lodges. However, financial assistance will be needed for capital improvements and operating costs at whatever scale the park is operated.
The basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Expedition route is in the Recommended Program, page 20. Recommended routing of a Lewis and Clark Highway in North Dakota is indicated on maps 9-12. Specific recommendations relating only to North Dakota follow:
1. Plans of the Corps of Engineers for public-use recreation areas around the Oahe and Garrison Reservoirs should be implemented as soon as possible, with priority given to those sites associated with the Lewis and Clark Expedition.
2. The regional recreation plan for the Standing Rock and Fort Berthold Indian Reservations prepared by the Bureau of Indian Affairs should include recreation, historic, and archeologic sites which can be identified with the Lewis and Clark Expedition route.
3. The State of North Dakota should implement the recommendations contained in the report on the vacation and recreation industry prepared by the North Dakota State University.
4. The Four Bears Park development should be continued and maintained.
5. The already existing Lewis and Clark Trail Advisory Committee should form the nucleus of the State Lewis and Clark Trail Committee and should undertake the development of an educational program for the Lewis and Clark Trail in North Dakota.
The Lewis and Clark Expedition explored more of what is now the State of Montana than of any other State along the route. Consequently, Montana has the most trail routes, campsites, and other sites of historic significance related to the Expedition. Including both water and overland travel, the Expedition explorations covered approximately 1,940 miles within the State.
Visitors to Montana can see a number of sites directly associated with the Lewis and Clark Expedition. These include the Great Falls of the Missouri River, Gates of the Mountains, Three Forks of the Missouri, Beaverhead Rock, Rattlesnake Rock, Fortunate Camp, Lemhi Pass, Lost Trail Pass, Travellers Rest, Lolo Hot Springs, Lolo Pass, and Pompeys Pillar.
Montana is already well known as a vacation State. In addition to the major attractions of Glacier and Yellowstone National Parks, there are an excellent State Park system and many national forests, Indian reservations, historic sites, ghost mining camps, large reservoirs on the Missouri River, and good hunting and fishing, all of which provide recreation opportunities.
The Lewis and Clark Trail in Montana passes through or very near the cities of Great Falls, Helena, Missoula, Bozeman, and Billings. However, because of the rugged nature of the terrain along the various routes used by the Expedition in this State, portions of these routes are well removed from today's major highways.
A considerable section of the Expedition route along the Missouri River has been flooded by Federal and private power company water control developments. Most of the existing and many of the proposed recreation areas associated with the Trail are found along the shorelines of these reservoirs.
The Federal and State agencies with administrative responsibilities for land along the Trail have developed some recreation areas, but much remains to be done, especially at the local level, in order to prepare for the demand which is expected to develop along the Lewis and Clark Trail.
There are 66 existing and 87 proposed points of recreation interest within about 25 miles of the Lewis and Clark Trail in Montana. Some 39 areas provide water-based recreation and 66 additional water-based recreation areas are proposed for development by State and Federal agencies along the river. The total area of land and water included within the 153 sites is over 554,000 acres. Facilities presently existing and proposed provide all forms of water-oriented recreation as well as camping, picnicking, hunting, hiking, horseback riding, sightseeing, nature study, winter sports, and other activities.
One of the most promising possibilities along the Trail is the Lewis and Clark National Wilderness Waterway as proposed by the National Park Service. The area to be included would extend approximately 180 miles from Fort Benton downstream along the Missouri River to the upper end of the Fort Peck Reservoir. This part of the Missouri is the only large section which remains much the same as it was when the Lewis and Clark Expedition passed through.
The National Park Service proposal was included in a joint study by the Department of the Interior and the Corps of Engineers of the reach of the Missouri River between Fort Peck Reservoir and Morony Dam, located about 30 miles upstream from Fort Benton. The joint study produced 11 alternative plans for use or development of the river. Only three of these plans include the Wilderness Waterway proposal in whole or in part.
To memorialize the Lewis and Clark Trail and to aid in meeting Montana's future recreation requirements the State should follow the basic development program as outlined in the Recommended Program. In addition, every effort should be made to establish the Lewis and Clark Wilderness Waterway as proposed by the National Park Service.
The recommended routing of a Lewis and Clark Highway in Montana is indicated on maps 12-19 and 24-25.
Montana was the scene of many of the most significant incidents concerned with the Lewis and Clark Expedition's eventual success as well as the setting for many of its more interesting adventures. It was in Montana that tragedy threatened more often than anywhere else along the Trail. This State was the real beginning of the great unknown as far as white man was concerned. The Expedition explored unknown routes of travel and discovered strange new plants and animals. The experiences and hardships endured were a constant challenge to the ingenuity and resourcefulness of the two leaders and to the courage and hardiness of all members.
It was April 27, 1805, when the Expedition left the junction of the Yellowstone and Missouri Rivers to cross into present-day Montana. Near the Judith River Captain Lewis climbed a bluff and had the first view of the Rocky Mountains. They also had a narrow escape from a buffalo bull stampeding through their camp. When they reached the mouth of the Marias River the party camped for a week while deciding which river was the Missouri.
Above the Marias River the Expedition encountered the Great Falls of the Missouri River, which caused the longest, most arduous, and most time-consuming portage of the entire journey. It was near the Great Falls that Clark, York, Charbonneau, Sacagawea, and the baby "Pomp" were almost drowned in a small ravine by a sudden flash flood.
Late in July the party reached the area where the three forks of the Missouri unite to form the main stem. They named these forks the Jefferson, Madison, and Gallatin Rivers after the President and the Secretaries of State and the Treasury.
They followed first the Jefferson and then its tributary, today's Beaverhead River, upstream to where the latter river forks. Here, at what is now Clark Canyon Dam, they remained for a week while trading with the Shoshone Indians for the horses needed to haul their equipment across the mountains.
Leaving the boats cached at this camp, the Expedition struck out for the first time on land. The main party, under Captain Lewis, left Montana behind as they crossed the Continental Divide at Lemhi Pass on August 26. Meanwhile, Captain Clark, who had gone ahead to explore a route down the Salmon River, discovered that the Salmon was a river of no return, just as it is today.
Since a water route to the Columbia River was not practical from this point the leaders decided to continue traveling overland by following a route used by the Indians. Hiring an Indian for a guide, the Expedition turned north, back into Montana again, as they headed for Lolo Pass and the Lolo Trail.
The Expedition reentered Montana on September 4 near Lost Trail Pass. That day they met and camped with a band of Flathead Indians at an area now known as Ross' Hole. They traded some horses with these Indians who were heading for the buffalo country along the Missouri. This meeting with the Flatheads is the subject of the largest painting by Charles M. Russell, which hangs in the State Capitol in Helena.
The Expedition traveled down the Bitterroot Valley to a camp that they called Travellers Rest, located at the confluence of the Bitterroot River and Lolo Creek. Here they turned west up Lolo Creek and crossed Lolo Pass into Idaho on September 13, 1805. The Expedition's Indian guide led them through this area without too much trouble, although he missed the route for a while near Lolo Hot Springs.
On the return trip the following year, the Expedition crossed Lolo Pass and entered Montana again on June 29, 1806. At Lolo the party split into two groups. The purpose was to permit exploration of two different routes used by the Indians to cross the Continental Divide. They hoped to find a more direct route between the Missouri and Columbia River drainages than the one they had followed going west.
Captain Lewis led a small party down the Bitterroot River to its confluence with the Clark Fork River. Turning upstream, he followed first Clark Fork and then the Blackfoot River to a pass, now called Lewis and Clark Pass, on the Continental Divide. From here the route led north to the Sun River and then down that river to the upper end of the Great Falls of the Missouri. At this point Lewis left six men to wait for Clark's river party and help them haul the Expedition's boats across the portage at the falls. This group would then proceed down the Missouri as far as the mouth of the Marias River where they were to wait for Lewis.
Taking three men, Lewis explored the upper reaches of the Marias River nearly to present-day Browning. The reason for this side trip, as stated by Lewis in his journal, was to "ascertain whether any branch of that river (Marias) lies as far north as latitude 50." Actually, Lewis was trying to determine if the upper reaches of the Marias River approached closely enough to the south branch of the Saskatchewan River to permit an easy portage between the Saskatchewan and Missouri River drainages. An encounter with a wandering party of eight Blackfeet Indians, which almost ended disastrously for Lewis, caused him to return hastily to the Missouri. There he met the men bringing the boats down the Missouri toward the Marias River mouth. As the reunited parties traveled on down the Missouri River to a rendezvous with Clark and the remainder of the Expedition, they left Montana on August 7, 1806.
Captain Clark, meanwhile, had gone south, up the Bitterroot Valley, along the same general route followed the year before on the trip west. At Sula he turned off the outbound route to follow an Indian trail across the Continental Divide at Gibbons Pass. After crossing the pass, Clark's party entered the Big Hole River valley and then turned south again to the upper end of the valley. From there it was only a short distance across to Fortunate Camp where the Expedition's boats were cached.
Clark accompanied the boats downstream from Fortunate Camp as far as the Three Forks of the Missouri. There he divided his party, sending some of the men with the boats down the Missouri under the command of Sergeant John Ordway, while he took eight men, the Charbonneau family, and all the horses and went up the Gallatin Valley and across to the Yellowstone River. From here their route followed the river downstream toward the scheduled rendezvous with the remainder of the Expedition at the confluence of the Yellowstone and Missouri Rivers.
Clark's party traveled overland along the Yellowstone on horseback for several days before finding trees large enough to make dugout canoes. It then required four days to construct two canoes. During this period Indians stole half of their horses. When the canoes were finished, Clark divided his group once more, detailing Sergeant Pryor and three men to take the remaining horses on to the Mandan Indian Village on the Missouri, while he and the other members of the party proceeded down the Yellowstone in the new boats. Clark and his group stopped to explore a large rock near the river east of present-day Billings. Clark carved his name on the rock and named it Pompeys Tower (now Pompeys Pillar) after Sacagawea's son, whom he called Pomp. His name on the rock is still visible today.
Traveling effortlessly down the Yellowstone, Clark's small group left Montana on August 2, 1806, the day before they reached the Missouri River once again. Meanwhile, Sergeant Pryor and his three horse wranglers were having trouble. The second night out the Indians stole the remainder of the horses. Not wanting to walk all the way, the four men built two boats out of buffalo hides like the ones the Mandan Indians used. With these impromptu craft they caught up with Clark after he reached the Missouri. Finally, on August 12, the entire Expedition was reunited. Their travels through what is now Montana had consumed almost six months of westbound and eastbound explorations.
The Expedition came closest to disaster and failure in Montana. On the outward trip a sudden squall on the Missouri River tipped their large boat on its side. This occurred in the area now inundated by Fort Peck Reservoir. Had it not been quickly righted, the boat would have been lost, together with most of the Expedition's important instruments and equipment. Both leaders were safely on shore, but Charbonneau and Sacagawea were on board. The loss of the boat, instruments, and interpreters could have been a disaster great enough to turn back the Expedition.
There were narrow escapes from grizzly bears, rattlesnakes, flash floods, falling trees, hail storms, and a stampeding buffalo. It was soon after entering Montana, downriver from the present Fort Peck Dam, that the party met its first grizzly bear. These ferocious animals became a constant threat from which the leaders and their men had many narrow escapes.
The Missouri up to this time had been easy to follow. However, in Montana there were at least two places, one at the Marias River and the other at the Three Forks of the Missouri, where an incorrect decision as to which river to follow could have resulted in failure for the Expedition. At Three Forks the party was confronted with the problem of determining which river (Jefferson, Madison, or Gallatin) would lead them to the divide separating the Columbia and Missouri drainages. The leaders made the proper decision, selecting the southwest fork, which was the Jefferson.
The contributions of the Expedition to the State of Montana were many and varied. Rivers and major streams that they passed, as well as other features of the terrain, often were named after members of the Expedition. Some of these names still survive. Judith River and Marias River, named by Clark and Lewis respectively for girls they had left behind, and the three forks of the Missouri, named for President Jefferson, James Madison, and Albert Gallatin, are still called by the same names.
But the Expedition's effect on the State went far beyond the naming of topographic features. The publicity given to this area by the Expedition was responsible for some of the early migration to the State. Montana's first industry, fur trapping, was encouraged by the report brought back by the Expedition. In fact, one member of the group, Colter, turned back to go trapping before the main group returned to St. Louis.
Because the parties separated on the return trip to explore the Marias and Yellowstone Rivers, the Expedition explored more of what is now the State of Montana than any of the other States along the route. Including both water and overland travel, the Expedition covered approximately 1,940 miles within the State.
Because of the Trail mileage involved, there are more existing and proposed recreation areas than in any of the other nine States: 153 sites, 66 existing and 87 proposed. Some 39 areas provide water-based recreation and 66 additional water-based recreation areas are proposed for development by State and Federal agencies near the Trail. The total area of land and water included within the 153 sites is over 554,000 acres. The facilities, existing and to be developed, are to provide opportunities for all forms of water-oriented recreation, as well as for camping, picnicking, hunting, hiking, horseback riding, sightseeing, nature study, winter sports, and many other activities. The tables on pages 144 to 158 and 168 to 170 list all the above sites and pertinent data concerning each area.
It is possible to visit several sites in Montana which relate directly to the Lewis and Clark Expedition. These include the Great Falls of the Missouri River, Gates of the Mountains, Three Forks of the Missouri, Beaverhead Rock, Rattlesnake Rock, Fortunate Camp, Lemhi Pass, Lost Trail Pass, Travellers Rest, Lolo Hot Springs, Lolo Pass, and Pompeys Pillar. Some of these sites, such as the Great Falls of the Missouri, Gates of the Mountains, Rattlesnake Rock, and Fortunate Camp, have been altered by the construction of dams and reservoirs. However, all are still easily recognized, and signs or monuments mark the location of most of them. The Three Forks of the Missouri are now commemorated by being included in a State Park area. Others, such as the spring at Lemhi Pass, which are located on National Forest lands, have been set aside by the Forest Service and suitable signs erected. Two of the sites, Travellers Rest and Pompeys Pillar, should be acquired and dedicated to recreation and historic use, while another, Beaverhead Rock, which is on public domain land, should be set aside and suitably marked.
The Federal agencies having control of lands over which the Expedition traveled include the Bureau of Land Management, Bureau of Indian Affairs, Bureau of Reclamation, Bureau of Sport Fisheries and Wildlife, Corps of Engineers, and the Forest Service. In addition, the National Park Service has proposed the establishment of the Missouri River between Fort Peck and Fort Benton as the Lewis and Clark National Wilderness Waterway. The Waterway would preserve 180 miles of the river which remain much as they were when first explored by Lewis and Clark.
The Secretary of the Interior has significantly honored the Lewis and Clark Expedition in Montana by certifying four associated historic features as Registered National Historic Landmarks. These are Lolo Trail and Lemhi Pass (U.S. Forest Service), Travellers Rest (privately owned), and Three Forks of the Missouri State Park. In addition, Pompeys Pillar (privately owned), has been declared eligible for Registered National Historic Landmark status.
Montana's State Park Division and Fish and Game Department administer several areas along the Trail. The State Highway Commission has erected signs at important historic sites all across the State, some of which relate to the Lewis and Clark Expedition.
There is only one county park area along the Trail in Montana. Two cities report parks along the Expedition route and, near another city, service clubs have cooperated in the development of two roadside parks. The Montana Power Company has developed recreation areas on its reservoirs on the Missouri River.
The Smithsonian Institution conducted an archeologic appraisal of the Missouri Breaks region of Montana in 1962, including a strip extending 160 miles downstream from Fort Benton to Armell Creek. This appraisal indicated the need for more detailed studies of the several sites that were discovered to increase the knowledge of prehistoric inhabitants of the area. Although archeologic investigations of the Fort Peck Reservoir area were not made before it was flooded, an archeologic shoreline survey by Montana State University recently was completed, as well as surveys at other sites along the Missouri River.
Although the Trail passes through or very near the present cities of Great Falls, Helena, Missoula, Bozeman, and Billings, through much of the State the Expedition route itself is well removed from existing major highways and large urban areas.
To facilitate the discussion of recreation resources and needs along the Lewis and Clark Trail, with particular emphasis on existing and proposed recreation sites, the westward Trail across Montana is treated in 11 sections, which follow:
NORTH DAKOTA BORDER TO FORT PECK DAM
From the North Dakota border to Fort Peck Dam the Missouri River is paralleled by U.S. Highway 2. Access to and across the river is provided in this 125-mile stretch at five places: one by ferry, three by bridges, and one by Fort Peck Dam.
The area along the north side of the Missouri River between Big Muddy River and Milk River, a distance of about 75 miles, is within the Fort Peck Indian Reservation. Land south of the river is administered by the Bureau of Land Management. The Bureau has identified one potential recreation site north of the river and east of the reservation.
One city park, the Lewis and Clark Memorial Park, is located on the north side of the river near Wolf Point. A recreation area administered by the Corps of Engineers is just below Fort Peck Dam. No Lewis and Clark campsites or areas of historic interest in connection with the Expedition have been marked.
FORT PECK DAM TO UPPER END OF FORT PECK RESERVOIR
The Expedition route from Fort Peck Dam to the upper end of Fort Peck Reservoir is now under the waters of that Corps of Engineers project. Fort Peck Dam, completed in 1940, forms a 245,000-acre reservoir 189 miles long with 1,600 miles of shoreline. The total project area, which includes the shore lands acquired for the project as well as the reservoir, is 590,084 acres, nearly all of which is located within the 951,000-acre Charles M. Russell National Wildlife Range, administered by the Bureau of Sport Fisheries and Wildlife.
This section includes three State parks, four Corps of Engineers recreation areas, seven potential Corps recreation areas, and four potential Bureau of Sport Fisheries and Wildlife recreation areas, all located on the reservoir. In addition, there are five existing and proposed wildlife areas and range conservation areas.
Road access to the reservoir is not adequate at the present time. State Highway 24 crosses the dam and U.S. Highway 191 crosses on Robinson Bridge at the upper end of the reservoir. In between, for about 190 miles, there are no crossings. A few roads provide access to the reservoir on both sides and a few more are planned. However, there is no road closely paralleling the reservoir on either side. U.S. Highway 2 is far to the north.
UPPER END OF FORT PECK RESERVOIR TO FORT BENTON
Much of this 140-mile stretch of the Missouri from the upper end of the Fort Peck Reservoir to Fort Benton is still accessible only by boat. In the 100-mile stretch of river between the Charles M. Russell National Wildlife Range and Virgelle, there are no bridges and only three ferries. A few unimproved roads give access to the river or to the river bluffs. From Virgelle upstream to Fort Benton, a distance of 40 miles, U.S. Highway 87 passes some distance to the northwest. Ferries at Virgelle and Loma and a bridge at Fort Benton provide access to and across the river.
There are extensive tracts of public domain lands along this section of the Expedition route, especially in the 50-mile stretch from the Judith River downstream to the wildlife range. No recreation areas exist near the river in this section.
A major potential recreation development is the National Park Service's proposed Lewis and Clark National Wilderness Waterway. The heart of the Waterway would extend downstream about 100 miles from the vicinity of Virgelle to the Charles M. Russell National Wildlife Range. Also proposed for inclusion in the Waterway, but remaining under present ownership and administration, would be a 39-mile river section of the wildlife range extending from its western boundary downstream to the upper end of the Fort Peck Reservoir, and a 42-mile stretch of river extending upstream from Virgelle to Fort Benton. The lands bordering this latter section are almost entirely privately owned.
The proposed Wilderness Waterway not only remains much the same as it was when Lewis and Clark were there, but is the most scenic section of the river and contains many important historic, archeologic, and geologic sites and areas as well.
The proposal was included in a joint study by the Department of the Interior and the Corps of Engineers of the reach of the Missouri River between the upper end of Fort Peck Reservoir and Morony Dam, located about 30 miles upstream from Fort Benton. The study produced 11 alternative plans for use or development of the river. Plan No. 6 would include the Wilderness Waterway proposal, a public land management program proposed by the Bureau of Land Management, the fish and wildlife program proposed by the Bureau of Sport Fisheries and Wildlife, and the Fort Benton Dam and Reservoir proposed by the Corps of Engineers.
Between U.S. Highway 2 and the Missouri River there are three areas: Chief Joseph Battleground State Monument; Bearpaw Lake Fishing Access Site, administered by the State Fish and Game Department; and Beaver Creek County Park, administered by Hill County.
FORT BENTON TO HOLTER DAM
The Great Falls of the Missouri River that cost the Expedition so much time and effort are the central feature of the Trail between Fort Benton and Holter Dam. The river is paralleled by 41 miles of U.S. Highway 87 from Fort Benton to Great Falls, and by 58 miles of U.S. Highway 91 from Great Falls to Holter Dam. U.S. Highway 87 stays well to the northwest of the river until it crosses at Great Falls. Access to the river in this stretch is quite limited. A ferry crosses about 15 miles up the river from Fort Benton and a few unimproved roads exist down to the river, but the first bridge is at Great Falls. In the 12-mile stretch of river below the city of Great Falls there are now four power dams operated by the Montana Power Company.
Between Great Falls and Holter Dam, U.S. Highway 91 stays away from the river as far as the town of Ulm, 11 miles upstream. From there, it follows the river and the Trail quite closely as far as Holter Dam.
The Montana Power Company has established a popular recreation area on an island below Ryan Dam on the Missouri. The city of Great Falls has developed a picnic area at Giant Springs, along the river downstream from the city. The State Fish and Game Department administers a fish hatchery also located at Giant Springs.
Upstream from Great Falls, near Craig, the Bureau of Land Management has identified a small piece of public domain land as a potential recreation area.
HOLTER DAM TO THREE FORKS
The lower two-thirds of the river from Holter Dam to Three Forks has been inundated by Holter, Hauser, and Canyon Ferry reservoirs. Of these, Holter and Hauser were constructed by the Montana Power Company, while Canyon Ferry, the one farthest upstream, is a Bureau of Reclamation facility.
Holter Dam backs water 26-1/2 miles up the Missouri to Hauser Dam, forming a reservoir with 4,800 surface acres. Hauser Dam backs water up the Missouri 16-1/2 miles to Canyon Ferry Dam and also forms Lake Helena in Helena Valley. Hauser Reservoir and Lake Helena together have a surface area of 6,000 acres. The 35,200-acre Canyon Ferry Reservoir stretches 25 miles to Townsend.
Helena, the capital of the State, is about 18 miles west by road from Canyon Ferry Dam. U.S. Highway 91 (Interstate 15) crosses the river below Holter Dam, then stays well to the west of the river as it continues south 34 miles to Helena. U.S. Highway 12 and State Highway 287 go 32 miles east and south from Helena to cross the Missouri just above the upper end of Canyon Ferry Reservoir. At Townsend, U.S. 12 heads east while State Highway 287 goes south along the east side of the Missouri for 11 miles, crossing at Toston; State Highway 287 then continues south, staying well west of the river, for 21 miles to a junction with U.S. 10 (Interstate 90) near Three Forks. Except for the short stretch between Townsend and Toston, the major highways in this section are distant from the Expedition route; however, there is access to and across Hauser Lake and Canyon Ferry Dam.
The Montana State Park Department administers the Missouri River Headwaters State Monument at Three Forks, and the Canyon Ferry Recreation Area, comprised of the water surface and shorelands of Canyon Ferry Reservoir. Six recreation sites have been developed by the State along the reservoir.
The Bureau of Land Management administers scattered tracts of public domain lands along the river and around the reservoirs. The Bureau identified one potential recreation site on Holter Lake, four on Canyon Ferry Reservoir, and 10 along the river between Townsend and Three Forks.
The Montana Power Company has developed a picnic area on the shore of Holter Reservoir near the dam. The Forest Service has developed Meriwether Picnic area on this reservoir at Gates of the Mountains.
A portion of Helena National Forest borders the east shore of Holter Reservoir from Gates of the Mountains upstream to Hauser Dam. The National Forest also borders two small stretches of the east shoreline of Hauser Reservoir, one near Hauser Dam and the other just below Canyon Ferry Dam. The Gates of the Mountains Wilderness is located in Helena National Forest not far from the Expedition route.
THREE FORKS TO LEMHI PASS
The route from Three Forks to Lemhi Pass includes the location of Fortunate Camp and the Montana side of Lemhi Pass. Major highways parallel the water route along this section of the Trail. From Clark Canyon Dam the Expedition route is paralleled by State Secondary Highway 324 to the top of Lemhi Pass.
From Three Forks to Whitehall, a distance of 30 miles, U.S. Highway 10 (Interstate 90) is never far from the Jefferson River. Near Whitehall, the river comes in from the southwest, and U.S. 10 is left behind. State Highway 287 next follows the river upstream for 12 miles to a junction with State Highway 41. State 41 takes over for 14 miles along the Jefferson, then 28 miles along the Beaverhead River to a junction with U.S. Highway 91 (Interstate 15) at Dillon. U.S. 91 then follows the Beaverhead River as far up as the Bureau of Reclamation's new Clark Canyon Dam.
The Bureau of Land Management has identified seven potential recreation areas along the lower 40 miles of the Jefferson, a potential historic site on Beaverhead Rock, and a potential recreation area along the county road leading to Lemhi Pass.
Lewis and Clark Caverns State Park, 10 miles from Three Forks, takes its name from the Expedition's leaders, even though the cave was discovered about 100 years later. A new State park has been established on the shores of the recently completed Clark Canyon Reservoir. Just before the Expedition route reaches Lemhi Pass, at an elevation of 8,000 feet, it enters Beaverhead National Forest. The Forest Service has developed the Sacagawea Memorial Area on the Montana side of the pass, including the spring that Lewis and Clark believed to be the source of the Missouri River. A small picnic area and a wildflower trail are to be found there.
LOST TRAIL PASS TO LOLO PASS
Paved highways now follow the route of the Expedition's reentry into Montana near Lost Trail Pass, its travel down the Bitterroot Valley to Lolo, and its route up Lolo Creek to Lolo Pass. U.S. Highway 93 crosses Lost Trail Pass into Montana, and then follows the Bitterroot Valley downstream to Missoula, crossing Lolo Creek near Lolo. At Lolo, U.S. Highway 12 leads west to Lolo Pass.
The first portion of the Trail from Lost Trail Pass down to the East Fork of the Bitterroot River crosses Bitterroot National Forest lands. The Forest Service has built a new visitor center at Lost Trail Pass and has plans for a campground near there. There is a Forest Service ski area at Lost Trail Pass; another is planned for the area. There are five campgrounds along or near the Expedition route within the forest boundary. One of these campgrounds is located on Lake Como, a Bureau of Reclamation reservoir situated in the foothills of the Bitterroot Mountains south and west of Hamilton.
From the East Fork of the Bitterroot River down the valley to Lolo, the route is on private land. There are no nearby public domain lands in the Bitterroot Valley. Going west from Lolo, the Expedition route crossed what is now a piece of public domain land about four miles up Lolo Creek. Here the Bureau of Land Management has identified a potential recreation area.
Continuing west, the Expedition route lies in Lolo National Forest to the top of the pass. Lolo National Forest has developed two campgrounds near Lolo Hot Springs, and has plans for another near the Expedition route.
Fort Owen State Monument, near Stevensville, is the only State park facility along this section of the Trail. It commemorates a trading post and fort, built in 1850.
The Lions Club of Hamilton has created a small roadside park, Durland Park, about 10 miles south of the town. The Lions Club and the Chamber of Commerce in Hamilton cooperated in the development of a similar area, Blodgett Park, which is about four miles north of town.
LOLO TO GREAT FALLS
The first portion of Lewis' separate return route, from Lolo to Great Falls, is now served by a good system of Federal, State, and county roads, except for the portion across Lewis and Clark Pass. The pass, located about five miles north of the point at which State Highway 20 crosses the Continental Divide at Rogers Pass, can be reached by 10 miles of dirt road and two miles of jeep trail.
At the junction of the Bitterroot and Clark Fork Rivers, the return route passes through Missoula, the third largest city in Montana. The eastern end of this section of the route is near Great Falls, the State's largest city.
The Trail through this section goes for the most part through private land. Along the Blackfoot River west of the Continental Divide, however, the Trail crosses several tracts of public domain lands and some small parcels of Helena National Forest lands. As the Trail approaches Lewis and Clark Pass it crosses about five or six miles of Helena National Forest lands just before reaching the summit. East of the Divide the Trail does not cross public domain or national forest lands.
Of the 16 existing or potential recreation, historic, or wildlife areas located along or near the Trail between Lolo and Great Falls, only five are east of the Divide. These include the Sun River Game Range, the Freezeout Lake Area, and the Bean Lake Fishing Access Area, all administered by the State Fish and Game Department, and the Bureau of Sport Fisheries and Wildlife's Pishkun and Willow Creek National Wildlife Refuges.
The Forest Service has earmarked a potential historic site at Lewis and Clark Pass. At the present time, there is nothing at this historic spot but a rustic sign identifying the pass. West of the pass, near Lincoln, there are two Forest Service campgrounds and a State park. In this same general vicinity, five potential recreation sites have been recognized by the Bureau of Land Management on lands under its jurisdiction. A few miles farther west are two areas administered by the State Fish and Game Department. One of these is the 52,000-acre Blackfoot-Clearwater Game Range while the other is a small site providing fishing access to Upsata Lake.
MARIAS RIVER LOOP
Lewis' side trip to explore the upper reaches of the Marias River is a relatively undeveloped section of the Trail. No major highways parallel the Expedition route. Between Great Falls and the Marias it is possible to follow the general direction of the route on county roads, but none of them closely follows it upstream along the Marias River and Cut Bank Creek.
Access has been provided to the Bureau of Reclamation's Tiber Dam and Reservoir located on the Marias River. Above the reservoir the route cuts across Interstate Highway 15 and U.S. Highway 2; the return portion of the route also intersects these highways. The principal towns along or near the route are Shelby, Cut Bank, and Browning.
The majority of the lands along this section of the Trail are in private ownership. The only Federal lands involved are those which border Tiber Reservoir and a scattering of public domain lands along the Marias River upstream from the reservoir.
There is only one recreation area in this section: the Tiber Reservoir Recreation Area, administered by the State Park Department. On public domain lands along the Marias upstream from the reservoir, the Bureau of Land Management has identified 28 small sites to give access to the river for fishing, boating, and camping. These sites have been shown on the map as one area. Farther upstream are two other Bureau of Land Management potential recreation sites.
West of Cut Bank the Great Northern Railroad has erected a monument marking the farthest point north reached by members of the Expedition.
SULA TO CLARK CANYON DAM
Clark's separate return route, from the point where he turned off the outbound route at Sula until he returned to it at Fortunate Camp, can be followed on State and county roads varying in type from paved to gravel to dirt. Most of the lands along this section of the Trail are in private ownership. On both sides of the Continental Divide, however, the Expedition route is on lands of the Bitterroot and Beaverhead National Forests, and on the Divide between Grasshopper and Horse Prairie Creeks it crosses a large tract of public domain land.
The State Park Department has developed Bannack State Monument to commemorate the site of the first major gold discovery in Montana and the establishment, in 1862, of Bannack as the first capital of the Montana Territory. The Big Hole National Battlefield, administered by the National Park Service, is located 12 miles west of the town of Wisdom. There are no other parks or recreation areas along this section of the Trail. The Forest Service has located two potential campground areas along Trail Creek on the east side of the Divide in Beaverhead National Forest.
THREE FORKS TO NORTH DAKOTA BORDER
A major portion of Clark's separate return route, from the point at Three Forks where he turned off the outbound route to explore the Yellowstone River, is paralleled by Interstate Highways. From Three Forks east to Billings, a distance of about 175 miles, Interstate 90 follows the Expedition route. Interstate 94 parallels the 220-mile portion from Billings east to Glendive. Along the remaining 65 miles between Glendive and the North Dakota border, State Highway 16 and then State Highway 20 follow the river route.
Several of Montana's larger cities are located along this section of the Trail. They include Billings, the State's second largest city, with a population in 1960 of 52,851; and Bozeman, Miles City, Livingston, and Glendive, which were in the 7,000-9,000 population range that same year.
Except for a sprinkling of public domain lands along some stretches of the Yellowstone River, almost all of the lands in this section of the Trail are privately owned. However, the Bureau of Land Management has identified 14 potential recreation sites on the public domain lands under its administration. Forest Service lands border the Trail quite closely near Bozeman and Livingston. Bridger Bowl Ski Area, located in the Gallatin National Forest north of Bozeman, is a popular winter sports area.
At Bozeman, the Bureau of Sport Fisheries and Wildlife administers a national fish hatchery. Southwest of Billings within the Crow Indian Reservation the National Park Service administers the Custer Battlefield National Monument.
Makoshika State Park near Glendive is the only State park in this section. The Montana Fish and Game Department administers four small areas along the Trail which provide opportunities for fishing and picnicking.
Specific data necessary to make accurate projections of demands for facilities along the Lewis and Clark Trail are not yet available. Demand for areas along such a trail can be expected to reflect the interest aroused in the project and the quality of effort made to identify, mark, and develop visitor use areas.
Some indication of potential demand can be derived from population and travel trends. The 1960 census listed 674,767 inhabitants for the State of Montana, representing an increase of 14.2 percent over the 1950 census. Population projections indicate that by the year 2000, the population of Montana will increase to 1,397,000, or a little more than twice the present number. Undoubtedly such an increase will more than double the present demand for recreation areas and facilities in the State. To this should be added the pressure of the expected increase in out-of-State travel.
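The projection quoted above can be checked directly from the two census figures given in the text. The short sketch below recomputes the growth multiple; the implied average annual growth rate is an illustrative derivation added here, not a figure taken from the report.

```python
# Quick check of the Montana population figures quoted above.
# Only the 1960 census count and the year-2000 projection come from the text;
# the annual-growth figure is an illustrative derivation, not from the report.
pop_1960 = 674_767              # 1960 census count for Montana (from the text)
pop_2000_projected = 1_397_000  # year-2000 projection (from the text)

growth_multiple = pop_2000_projected / pop_1960
implied_annual_rate = growth_multiple ** (1 / 40) - 1  # over the 40 years 1960 to 2000

print(f"Growth multiple: {growth_multiple:.2f}")                    # about 2.07, a little more than twice
print(f"Implied average annual growth: {implied_annual_rate:.1%}")  # roughly 1.8 percent per year
```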
Montana is justly famous as a vacation State. Glacier and Yellowstone National Parks are prime vacation targets. The many national forests with their extensive recreation developments, the excellent State Park system, and the major reservoirs such as Fort Peck and Canyon Ferry on the Missouri River, all combine to attract tourists to the State.
Many of these people will travel highways which now parallel portions of Lewis and Clark's Expedition route. If the route is marked in a suitable manner, and if a Lewis and Clark Trail is established by using existing highways and later adding new roads where needed, there is no question that such a highway would be a popular route in Montana. This would result in a greatly increased demand for recreation sites along the route.
The proposed Lewis and Clark Trail in Montana would be made up of a combination of several Federal, State, county, and Forest Service roads. In most instances, these roads follow the Lewis and Clark Expedition route quite closely. However, there is a 300-mile section of the Expedition route in eastern and central Montana, from Fort Peck Dam upstream to Virgelle, where the proposed Lewis and Clark Trail route is 75-90 miles north of the Expedition route along the river. One State highway and a few county roads provide access south to the river.
Interstate Highway System plans for Montana involve two east-west routes and one north-south route. Interstates 90 and 94 from the east join at Billings. Interstate 94 ends there but Interstate 90 continues on west, through Bozeman and Missoula, to Spokane and Seattle. Interstate 94 parallels most of Clark's return route on the Yellowstone downstream from Billings. Interstate 90 parallels Clark's route between Billings and Three Forks, a short stretch of the main Expedition route west of Three Forks, and a small portion of Lewis' return route near Missoula. Interstate 15, a north-south route from southern California to Canada, follows sections of the Trail along the Beaverhead and Missouri Rivers.
When these routes are completed, Interstate Highway traffic to and through Montana will undoubtedly increase markedly. The Montana State Highway Commission, for highway planning purposes, estimates that traffic flow on its primary highways will double in the next 20 years and that traffic on the Interstate Highways will triple.
On April 20, 1965, the Montana Fish and Game Commission designated the Yellowstone River from the Yellowstone National Park boundary to Pompeys Pillar as the Yellowstone State Waterway. Special emphasis will be given to the recreational development of this reach of the river.
The National Park Service has proposed the establishment of the Lewis and Clark National Wilderness Waterway under its administration along the Missouri River. The area relates directly to the Lewis and Clark Expedition and would extend from the upper end of the Fort Peck Reservoir to Fort Benton. This is not only one of the most scenic sections of the Missouri River but the only sizable portion which remains much the same as it was when the Lewis and Clark Expedition passed through. Alternate Plan No. 6, of the joint Army-Interior Study of the upper Missouri, provides for river access points, campgrounds, overlooks, interpretive points, and for the development of part of the hydroelectric resources available in this region by construction of the Fort Benton Dam and Reservoir.
The basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Trail is outlined in the Recommended Program, page 20. The suggested highways to be designated and marked as the Lewis and Clark Trail are shown on maps 12-19 and 24-25. Specific recommendations relating only to Montana follow:
1. Alternate Plan No. 6 of the joint Army-Interior study of the upper Missouri, which provides for the establishment of the Lewis and Clark National Wilderness Waterway, should be implemented.
2. The Corps of Engineers and the Bureau of Sport Fisheries and Wildlife should develop the proposed recreation areas and wildlife conservation areas along the shoreline of Fort Peck Reservoir and within the boundaries of the Charles M. Russell National Wildlife Range. Access to existing areas on Fort Peck Reservoir should be improved.
3. The Bureau of Land Management should continue its program of identifying potential recreation and wildlife conservation areas on public domain land along the Expedition route and should arrange for the development and administration of the more important ones, either by the Bureau itself or by other public agencies, particularly at the State and local level.
4. The Forest Service should continue its program of identifying potential recreation areas on National Forest lands with particular emphasis on developing existing areas along the routes of the Expedition.
5. The Forest Service should improve its road across Lemhi Pass to the extent possible without destroying the natural setting of that important historic site.
6. The State of Montana should take the initiative in identifying and marking the Lewis and Clark campsites and other sites of historic importance in connection with the Expedition along the several routes followed in the State.
7. The Montana State Parks Division should develop a program for the acquisition, development, and administration of additional State park areas along the route and should work closely with the Bureau of Land Management, Corps of Engineers, Bureau of Reclamation, and the Bureau of Sport Fisheries and Wildlife in such a program in order to take advantage of opportunities for acquiring potential sites available from these Federal agencies.
8. The Montana Power Company should continue its reservoir recreation program and develop additional recreation areas on its existing reservoirs along the Lewis and Clark Trail.
IDAHO
The Lewis and Clark Expedition's route in Idaho was primarily overland. Most of the route is on Federal land within the National Forests and is still much the same today as when the Expedition passed through. Several sites which relate directly to the Lewis and Clark Expedition can be visited. These include Lolo Trail, Canoe Camp, Weippe Prairie, Long Campsite, Lolo Pass, Lost Trail Pass, and Lemhi Pass.
U.S. Highway 12, which parallels the route of the Expedition from Lewiston east to the Montana border at Lolo Pass, has been designated by the State as the Lewis and Clark Highway.
Along the 210 miles of Expedition route in Idaho there are 21 existing and 30 proposed points of recreation interest. The total area of land and water included within these 51 sites is about 1,242,000 acres. Nearly 1,240,000 acres are included within the Selway-Bitterroot Wilderness.
A bill passed by the 88th Congress authorizes the Secretary of the Interior to designate certain lands in Idaho as the Nez Perce National Historical Park. This park will include several sites related to the Lewis and Clark Expedition.
In contrast to the situation which exists elsewhere along the Expedition route, only a very small section of the route has been flooded by a reservoir formed by a small Washington Water Power Company dam on the Clearwater River near Lewiston. The Corps of Engineers has begun the construction of a large dam and reservoir on the North Fork Clearwater River near the Expedition route.
The State of Idaho has developed two State parks along the route, one of which commemorates the site of the Expedition's Canoe Camp. An opportunity exists for the local agencies to develop recreation sites along the Lewis and Clark Trail, since there are no county or municipal recreation sites. The State of Idaho, however, should take the responsibility for identifying and marking the location of the route, campsites, and other historic sites outside the National Forest boundaries.
To assist in meeting Idaho's future recreation requirements and to develop a coordinated program to memorialize the route of the Lewis and Clark Expedition, the State should follow the recommended program as outlined in this report.
The recommended routing of a Lewis and Clark Trail Highway in Idaho is indicated on maps 18-20. This routing includes the highway from Lewiston to Lolo Pass.
State and Federal agencies in Idaho responsible for the development of recreation resources must continue to expand existing recreation facilities. One of the more important developments is the proposal to give National Wild River status to the Middle Fork Clearwater River and its tributaries, the Lochsa and the Selway, and to the Salmon River including the Middle Fork Salmon River.
The Expedition's overland trek through Idaho represented a connecting link between the Missouri and Columbia Rivers. The means of transporting supplies changed here from boats to horses. In no other State did they travel such a great proportion of the route on land as in Idaho. And perhaps in no other State did they encounter such a combination of difficult travel and lack of food as they did here. What appeared at first to be just a simple matter of dropping over the Divide and floating down a tributary to the Columbia turned into a long and arduous detour across two additional mountain passes and the Lolo Trail in order to reach a navigable stream. The information available to them concerning the route to follow was meager. Without the qualities of leadership, perseverance, and endurance demonstrated by Captains Lewis and Clark, the Expedition would not have been able to accomplish its objective under such difficult conditions.
The first members of the Expedition to enter what is now Idaho were Captain Lewis and three men who, on August 12, 1805, crossed the Continental Divide at Lemhi Pass. They were looking for the Shoshone Indians from whom they hoped to obtain the horses so essential to their plans. They found the Indians on the following day near the Lemhi River. Lewis was able to persuade them to accompany him back across the Divide into Montana to meet the rest of the party. There they bargained successfully for the horses with which to cross the mountains.
The rest of the Expedition entered Idaho on August 26, preceded by Clark and 11 men who went ahead in order to determine if the Salmon River were navigable. Clark soon found out that the Salmon River Canyon was an impassable route. This meant that the Expedition would have to detour by way of the Lolo Trail, a route which they had heard about previously from the Indians. After hiring an old Indian and his son as guides, the entire party headed north into Montana, crossing near Lost Trail Pass on September 4.
Lewis and Clark travelled down the Bitterroot Valley and then up Lolo Creek to Lolo Pass where they entered what is now Idaho for the second time on September 13. For the next week they struggled along the Lolo Trail. This, undoubtedly, was the most strenuous part of the whole trip. Snow and freezing temperatures added to their misery. They had difficulty finding their way, and their food supplies were exhausted, except for some "portable soup" and a little bear grease. Because the game on which they depended for food had moved down to lower elevations, they were forced to kill some of their horses in order to survive.
Upon reaching the Clearwater Valley, they established friendly relations with the Nez Perce Indians living there. A camp, which they named Canoe Camp, was established on the Clearwater River across from the mouth of the North Fork Clearwater. There they recuperated from their arduous trip while building canoes for the run down to the Pacific Ocean. Arrangements were made to leave their horses, saddles, and other equipment with the Nez Perce Indians until their return.
Glad to be on water once more, they resumed their journey in the new canoes on October 7. They reached the junction of the Clearwater and Snake Rivers in three days and on the following day, October 11, 1805, they left the present State of Idaho as they floated down the Snake toward the Columbia River.
When the Expedition returned the following spring, they were on horseback again, having abandoned their canoes in favor of horses while still on the Columbia River. They crossed the Snake River to its north bank a few miles downstream from the mouth of the Clearwater. On the following day, May 5, 1806, they entered Idaho land once more as they approached the confluence of the Snake and Clearwater Rivers. From there the route followed the Clearwater Valley upstream to where the Expedition's horses and equipment had been left with the Indians the preceding fall.
While preparing for the trip up the Lolo Trail and waiting at Camp Chopunnish near present-day Kamiah for the snow in the hills to melt, the Expedition's leaders made a lasting impression on the local Indians with their medical skill. Using part of their dwindling stock of medicine, they treated the sick and lame Indians who came from miles around as the reputation of the two "practicing doctors" spread.
Anxious to be on their way home, they started out too early and were turned back by deep snow on the ridges in the middle of June. After a short wait they started again on June 24 and this time made a successful, though still difficult, crossing. Five days later they left Idaho behind as they crossed over Lolo Pass into Montana.
The Expedition was in Idaho a total of 96 days; only about a third of this time was used for travel. The remainder was spent in building canoes on the westward trip, and in waiting for the snow to melt on the Lolo Trail the following spring.
For much of its length, the Lewis and Clark Trail in Idaho remains almost the same today as when the Expedition passed by. This is especially true along the Lolo Trail and across the three mountain passes. Even the Clearwater River, where they took to the water again, has changed but little. A small dam operated by the Washington Water Power Company on the Clearwater near Lewiston is the only existing water control facility along the Trail in Idaho. The Dworshak Dam, being constructed by the Corps of Engineers, will be located on the North Fork, a few miles off the actual Expedition route.
The Lolo Trail, scene of so much suffering and hardship on both the outbound and return trips, is one of the most history-laden sites along the Expedition's route in Idaho. Known to the Indians for countless years before the Expedition's arrival, the Lolo Trail also was used by Chief Joseph, leader of the Nez Perce Indians, during the Nez Perce Indian War of 1877. He led his tribe across the trail into Montana in an attempt to escape pursuing army troops. A Forest Service road, passable only during the summer months, follows the Lolo Trail quite closely. Other historic events that took place here after the Expedition passed include the establishment in 1812 of Mackenzie Post, a fur trading post at the mouth of the Clearwater River which was abandoned the following year; and the establishment of Spalding Mission in 1836 by an associate of missionary Marcus Whitman.
The Lolo Trail has been approved for Registered National Historic Landmark status.
The Salmon River, which Clark found to be impassable, is best known today as the "River of No Return." A road goes down the river a few miles past the spot where Clark turned back. Beyond the end of this road expert rivermen run boats down the rapids, carrying passengers through a Forest Service wilderness area.
The routes into Idaho pioneered by Lewis and Clark never proved popular with the road builders that followed. Lemhi Pass has changed the least. It is crossed by an unimproved Forest Service road but otherwise remains much the same. There has been a highway across Lolo Pass for many years but until very recently it did not penetrate far into Idaho. The highway has now been extended down the Lochsa River to meet the road coming from the west. The dedication of this completed highway between Missoula, Montana, and Lewiston, Idaho, was held at Packer's Meadow near Lolo Pass in August 1962. The Expedition had camped in this same meadow 157 years earlier after crossing Lolo Pass on the way west.
Most of the names given by the explorers to the streams, rivers, and other terrain features have disappeared. Lewis' River and the North Fork of Lewis River became the Snake and Salmon Rivers respectively. Colter's River is now the Potlatch River and Colt-Killed Creek has become White Sand Creek. But the explorers are remembered in other ways. Lewiston, Idaho, and Clarkston, Washington, are on opposite banks of the Snake at the mouth of the Clearwater.
The contributions of the Expedition to the State of Idaho are difficult to assess. Perhaps the greatest contribution was the establishment of a claim by the United States to the land that is now Idaho as a result of the Expedition's passing through.
The explorers established friendly relations with the Nez Perce Indians which smoothed the way for trappers and settlers who came later. Legends about Lewis and Clark persisted for many generations with these Indians, whose descendants still live in the Clearwater Valley.
In Idaho, it is possible to visit several sites which relate directly to the Lewis and Clark Expedition. These sites include Lolo Trail, Canoe Camp, Weippe Prairie, Long Campsite, Lolo Pass, Lost Trail Pass, and Lemhi Pass. Three of these (Lolo Trail on the Clearwater National Forest; Canoe Camp, a State Park facility; and Lolo Pass) are presently marked with interpretive signs. Lost Trail Pass and Lemhi Pass can be easily reached but interpretive signs are lacking. The two remaining sites, Weippe Prairie and Long Campsite, are scheduled for development as part of the Nez Perce National Historical Park. Through a joint effort of the State Highway Department, Historical Society, and Department of Commerce and Development, an excellent historical sign program has been established in Idaho. Signs have been installed at Canoe Camp, Lolo Pass, and on the Salmon River where Clark made his exploration of the canyon.
The Forest Service, Bureau of Land Management, Bureau of Indian Affairs, and Corps of Engineers are Federal agencies that presently have administrative responsibility for lands on or near the route. When the recently authorized Nez Perce National Historical Park has been established, the National Park Service also will have administrative responsibility along the Trail.
The State Department of Parks and the State Department of Fish and Game administer recreation and conservation areas along the Expedition route. There are no county, city, or private recreation areas along the Lewis and Clark Trail in Idaho.
Within about 25 miles of the Lewis and Clark Trail in Idaho there are 21 existing and 30 proposed points of recreation interest. Ten areas provide water-based recreation and five additional water-based recreation areas are proposed for development by Federal agencies.
The total area of land and water included within the 51 existing and proposed recreation sites is about 1,242,000 acres. Of this amount, nearly 1,240,000 acres of land and water are included within the Selway-Bitterroot Wilderness. The facilities existing and to be developed provide opportunities for camping, picnicking, hunting, hiking, horseback riding, sightseeing, and nature study. With the exception of two areas where swimming is permitted, water-based recreation at existing sites is limited to fishing.
The tables on pages 156 to 160 list data pertaining to the existing and potential recreation areas, historic sites, and conservation areas along the route of the Lewis and Clark Expedition through Idaho.
To discuss in more detail the recreation resources and attractions along the route taken by the Expedition in Idaho, the Trail has been treated in two sections, which follow:
LEMHI PASS TO LOST TRAIL PASS
Access is good from Lemhi Pass to Lost Trail Pass along the Expedition route. A Forest Service road good only for summer travel leads west from the summit of Lemhi Pass two miles to the Boundary of the Salmon National Forest. From there it is about nine miles by county road to the town of Tendoy on the Lemhi River. At Tendoy, State Highway 28 runs northwest for 21 miles down the Lemhi Valley to Salmon, situated at the confluence of the Lemhi and Salmon Rivers. There U.S. Highway 93 comes in from the south along the Salmon, and continues 21 miles north to where the Salmon meets the North Fork Salmon River. Here the main river turns due west. The Salmon River, including the Middle Fork Salmon, has been proposed by the Administration for National Wild River status. An improved road follows the north bank of the Salmon River downstream for a few miles past the point where Clark turned back. U.S. 93, however, continues north up the North Fork 26 miles to Lost Trail Pass.
The actual route used by the Lewis and Clark Expedition is difficult to determine in this section because they travelled overland. According to most authorities who have studied the route, it closely follows the road system. After leaving the Salmon National Forest just west of Lemhi Pass, the route crosses a sizable area of public domain land between the forest and the Lemhi Valley. Down this valley and the Salmon River Valley beyond, most of the lands are privately owned. At Tower Creek, a few miles short of the North Fork, the route turns northeast up the creek and then cuts across to the North Fork about three or four miles above its mouth. The Expedition route reenters Salmon National Forest a few miles up Tower Creek and remains in the forest all the way to the pass. Approximately two-thirds of this entire section is on public land.
This section of the route is significant because it was here that white men first entered what is now Idaho and that U.S. citizens first crossed the Continental Divide, at Lemhi Pass. Captain Lewis finally established contact with the Shoshone Indians on the Idaho side of Lemhi Pass, and Captain Clark's exploration of the Salmon River established the fact that the route they had just discovered across the Divide was not a suitable connection between the Missouri and Columbia Rivers.
One historic site, three recreation sites, and several potential historic and recreation sites are located in Salmon National Forest. The Bureau of Land Management has identified four potential recreation areas on lands under its control. There are no State, county, or private recreation areas in this section.
The existing Forest Service historic and recreation sites are not sufficient to provide for the anticipated demand. However, if the potential sites identified by that agency and those sites identified by the Bureau of Land Management are developed, either by the Bureau or by State or local authorities, much of the anticipated demand will be accommodated.
LOLO PASS TO THE SNAKE RIVER
There also is good access to the Expedition route from Lolo Pass to the Snake River. U.S. Highway 12, Idaho's Lewis and Clark Highway, enters the State from the east at Lolo Pass. The highway descends for nine miles to the Lochsa River, and then follows the right bank of that river downstream for 69 miles in a southwesterly direction to the junction with the Selway River. These two rivers combine there to form the Middle Fork Clearwater River. This river and its two tributaries, the Lochsa and the Selway, have been proposed by the Administration for National Wild River status.
From this point U.S. 12 goes west 23 miles down the right bank of the Middle Fork to Kooskia. There the Middle Fork meets the South Fork Clearwater to form the main Clearwater River. The highway turns north at Kooskia, follows the right bank of the Clearwater downstream for eight miles, then crosses over to Kamiah on the left bank. From Kamiah the highway runs northwest, then west, for 55 miles along the river to Spalding State Park. There the road crosses back to the right bank and goes west nine miles to Lewiston. In Lewiston, U.S. 12 crosses the Clearwater again and then, at the west edge of town, goes across the Snake River Bridge to Clarkston, Washington.
The Expedition's route was similar to the route of the highway from Lolo Pass down to the Lochsa River and for a few miles down the river. However, Lewis and Clark soon discovered that they had missed the Lolo Trail. Climbing out of the Lochsa Valley, they found the trail along the ridge which parallels the river on the north. A Forest Service road, passable only during the summer months, follows the Lolo Trail almost all the way to the Clearwater Valley. From Canoe Camp, just west of present-day Orofino, downstream to Lewiston, the highway and the Expedition route down the river are very close to each other.
From Lolo Pass west to the edge of the Clearwater Valley the route is on lands of the Clearwater National Forest. The location of the Expedition route across this forest has been determined very accurately by Forest Service personnel. Between the National Forest boundary and Lewiston the route crosses an area composed of intermingled Nez Perce Indian tribal lands and public domain and private lands. Since the location of the route outside the forest has not been determined with the same detailed study and analysis as that used within the forest, the relationship of the route to these lands is not certain.
The Forest Service administers 12 existing recreation sites and has identified two potential ones along or near this section of the Trail. In addition, the Service has marked 10 historic sites related to the Expedition, identified nine other sites, and proposed the establishment of its road along the Lolo Trail as the Lewis and Clark Scenic Highway.
The Lochsa River forms the northern boundary of the Selway-Bitterroot Wilderness, administered by the Forest Service. There are several places along U.S. 12 where foot bridges provide access across the Lochsa River to the trails which penetrate the wilderness area.
The Bureau of Land Management has located four potential recreation areas on lands under its administration in this section. All of these sites are along the main Clearwater River.
The Corps of Engineers has started construction of the Dworshak Dam and Reservoir on the North Fork Clearwater River. The dam site is about two miles upstream from the mouth of the North Fork. When completed, the reservoir will have recreation areas along its shoreline.
It is along this section of the Trail that the Nez Perce National Historical Park will be established. A bill passed by the 88th Congress authorizes the Secretary of the Interior to designate certain lands in Idaho as the Nez Perce National Historical Park. The purpose of such a park will be to facilitate protection and provide interpretation of sites in the Nez Perce country of Idaho that have historic value. The park will include three main interpretive centers on lands to be acquired and administered by the National Park Service and 19 other historic sites owned and administered by various Federal, State, and local agencies, the Nez Perce Tribe, and various private individuals and corporations. These 19 sites will be marked and interpreted under cooperative agreements with the National Park Service to form an integrated series of sites illustrating the entire Nez Perce country story. Five sites pertaining to the Lewis and Clark Expedition will be included in the proposal. These are Lolo Pass, Lolo Trail, Canoe Camp, Weippe Prairie, and Long Camp. Weippe Prairie is where the Expedition first met the Nez Perce Indians, while Long Camp is another name for Camp Chopunnish, the site near Kamiah where they camped for about a month in the spring of 1806.
The State Park Department administers two developed park areas along the Expedition route in this section, one of which relates directly to the Expedition. This is the Lewis and Clark Canoe Camp State Monument, located at the site of the original camp. It is a small park, crowded between the highway and the river. Development consists of a replica of a canoe and a sign explaining the significance of the site. The other park is Spalding State Park, on the south bank of the Clearwater River near the old Spalding Mission site. Although the park is intended primarily for picnicking and swimming, overnight camping is permitted.
It is difficult to determine the demand for recreation areas along the Lewis and Clark Trail in Idaho. To some extent it will depend on the publicity given to the Lewis and Clark Trail Study and to the interest generated by the study in following the Trail or portions of it. Other measures of demand are the growth of population and the anticipated increase in travel along the highways.
The 1960 Bureau of Census report lists 667,191 inhabitants for the State of Idaho. This represented an increase of 13.4 percent over 1950. Population forecasts indicate that by the year 2000 there will be 1,254,000 inhabitants. The route taken by Lewis and Clark across Idaho passes through two very sparsely settled sections of the State. Lewiston, located at the western end of the Trail across Idaho, had a population in 1960 of 12,691. It is the largest city along the route in this State. Salmon, near Lemhi Pass at the eastern end of the Idaho section, listed only 2,944 inhabitants that same year.
No Interstate Highways are proposed along the Lewis and Clark Trail in Idaho. There are Federal, State, county, and Forest Service highways and roads, however, which follow the Expedition route quite closely.
U.S. Highway 12, dedicated as the Lewis and Clark Highway, runs from Lolo Pass down the Lochsa River to Lewiston. It carried an average of 700 cars per day in 1963. The State Highway Department estimates that in 20 years this travel will have increased to an average of 1,400 vehicles per day.
If the Lewis and Clark Trail is developed along the lines recommended in this report, then travel for recreation and historic purposes along U.S. 12 in Idaho can be expected to increase considerably more than that now anticipated. Similar increases can be expected for U.S. Highway 93 and State Highway 28 which follow the Trail closely down the Lemhi and Salmon Rivers and up the North Fork Salmon River to Lost Trail Pass.
The basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Expedition route was outlined in the Recommended Program, page 20. The recommended routing of a Lewis and Clark Trail Highway in Idaho is indicated on maps 18-20. Specific recommendations relating only to Idaho follow:
1. The Wild Rivers bill, which names the Middle Fork Clearwater River, including its Lochsa and Selway Tributaries, and the Salmon River, including the Middle Fork Salmon, as initial units in a National Wild Rivers System, should be enacted.
2. The Forest Service should develop recreation sites along the Lolo Trail within the National Forest for the use of individuals following the Lewis and Clark Trail.
3. The Forest Service should improve its road along the Lolo Trail and across Lemhi Pass to the extent possible without destroying the natural setting of those historic sites.
4. The Bureau of Land Management should continue its program of identifying potential recreation and wildlife conservation areas on public domain lands along the Trail, and should either develop these areas or arrange for their development by State or local agencies.
5. The Corps of Engineers should develop recreation facilities on the reservoir to be formed by Dworshak Dam on the North Fork Clearwater River which will help to fulfill the need for recreation areas along this section of the Trail.
6. The State of Idaho should expand its historic sign program to include identification and marking of the Lewis and Clark campsites and other sites of historic importance where they have not otherwise been identified and marked.
7. The exact route of the westward Trail between the National Forest boundary and Canoe Camp and of the return Trail from Lewiston to the National Forest should be accurately located on the ground. The sites of other historic events that took place along this section of the Expedition route should be located and marked.
8. The planning of hiking, nature, and horseback trails should receive particular emphasis from all agencies, since most of the route in Idaho was overland. Development of trails following the route over Lemhi, Lost Trail, and Lolo Passes, and where needed along the Lolo Trail, would provide the best opportunity to appreciate and relive the Expedition's experiences.
The Lewis and Clark Expedition's outbound route in Washington ran by canoe down the Snake and Columbia Rivers 465 miles to their destination, the Pacific Ocean, which they reached near Cape Disappointment on the Washington coast. On the return trip Lewis and Clark retraced this route, except for an overland shortcut between the mouth of the Walla Walla River and the confluence of the Clearwater and Snake Rivers. Several sites associated with the Lewis and Clark Expedition can still be visited today. These include Beacon Rock, Chinook Point, Cape Disappointment, and the campsite at the mouth of the Snake River.
A large percentage of the original river route soon will be beneath the waters of a continuous series of reservoirs extending for 320 miles from Bonneville Dam upstream to Clarkston, Washington. Most of the recreation opportunities along this section of the Trail will be on these reservoirs. The problem of public access to the river route in Washington is complicated by existing transportation developments and by difficult terrain situations. Moreover, the lands bordering the 145-mile stretch of the Columbia River from Bonneville Dam downstream to the river's mouth and along the overland portion of the return route are almost entirely privately owned.
State and local agencies have developed some recreation areas along the Trail in Washington but much remains to be done to meet the anticipated demand. The facilities existing and to be developed provide opportunities for most forms of water-oriented recreation, and also for camping, picnicking, hunting, hiking, horseback riding, sightseeing, and nature study.
The Washington State Legislature has designated the highway system between Vancouver and Clarkston via Kennewick and Walla Walla as the Lewis and Clark Highway. Lewis and Clark Highway markers have been erected by the State Highway Department. This highway designation should be extended to include the highway system from Vancouver to the mouth of the river.
The State of Washington should continue its program of identifying and marking the Lewis and Clark route, campsites, and sites of related historic events.
There are 42 existing and 27 proposed points of recreation interest within a few miles of the Lewis and Clark Trail in Washington. The total area of land and water included within the 69 recreation sites is almost 63,000 acres.
After the adventures and hardships encountered in Montana and Idaho, the Expedition's travels through the State of Washington must have seemed tame in comparison. There were rapids to contend with and the scarcity of food remained a problem, but they were certain now of their route and they knew that the Pacific Ocean could not be too far away.
On October 11, 1805, soon after leaving their camp on the north bank of the Snake River just below the mouth of the Clearwater River, the Expedition entered what is now the State of Washington. Passing swiftly down the Snake in their newly constructed canoes, in only five days they reached the Columbia River, one of the major objectives of the Expedition. Here they camped for two days in order to trade with the Indians, explore the Columbia for a few miles upstream, and make observations and measurements for mapping purposes. Captain Clark explored upstream on the Columbia to within sight of the mouth of the Yakima River near the present city of Richland.
On October 18 the Expedition left their camp at the mouth of the Snake and started down the Columbia. After passing the mouth of the Walla Walla River they camped on the left bank, not far from the Oregon-Washington border. From here to the mouth of the Columbia the Expedition camped part of the time on the south or Oregon side of the river, but most of the campsites were on the north or Washington side.
A series of three rapids or cascades, which in later years were known as Celilo Falls, the Dalles Rapids, and Cascade Rapids, required difficult and time-consuming portages. Finally, in mid-November, they approached the mouth of the Columbia and accomplished their principal objective: to reach the Pacific Ocean. The Expedition probably first saw the ocean on November 10, 1805, from the vicinity of Point Ellice, on the Washington shore about five miles from the mouth of the river. Captain Lewis and five men were the first actually to reach the ocean when they explored Cape Disappointment and the coast north of there on November 14, 15, 16, and 17.
The Expedition camped for about two weeks on the Washington shore, near Point Ellice and Chinook Point while exploring the surrounding countryside. These explorations and the advice received from the local Indians convinced the leaders that the best location for a winter encampment would be south of the river. Consequently, on November 26, 1805, the party crossed the river to the Oregon shore.
After spending a miserable winter at Fort Clatsop, the Expedition started for home on March 23, 1806. Traveling by canoe up the Columbia, they retraced their route of the previous fall, stopping frequently to hunt and to barter with the Indians for food. They camped for nearly a week on the Washington shore, across from the mouth of Sandy River, while the hunters scoured the surrounding countryside for game and Clark explored the lower reaches of the Willamette River.
Upstream from this camp they encountered the first of the many rapids and falls which meant difficult and time consuming portages. They decided to obtain horses from the Indians to use instead of canoes for hauling equipment and supplies. It required several days of shrewd bargaining before they had enough horses. Finally, on April 24, they disposed of the last of their canoes and the route once again became overland.
Traveling along the Washington side of the river, they soon reached an Indian camp across from the mouth of the Walla Walla River. Here the party crossed the Columbia with the help of Indian canoes and then, with horses to haul their equipment and an Indian for a guide, headed cross-country on April 30, 1806, in an easterly direction. Their guide led them across the hills and down to the Snake just a few miles downstream from the mouth of the Clearwater. The Expedition crossed the Snake to the right bank and on the following day, May 5, approaching the mouth of the Clearwater, they crossed the present-day border between Washington and Idaho.
The Expedition's travels across the southeast corner of Washington and along its southern border and return took over three months. Except for the cross-country portion of the return route, the explorers did not penetrate very far into the State of Washington. What little information they obtained about the land north of the Columbia and Snake Rivers came from their contacts with the Indians.
Their water route down the Snake, and down the Columbia as far as the last series of rapids, has changed considerably in the intervening years. A group of eight dams and reservoirs constructed, or under construction, by the Corps of Engineers is turning that turbulent river route into a series of long, narrow, quiet pools. The bordering canyons and hills have changed very little, but the sites of the Expedition's camps, and those of the many Indian villages and fishing areas, as well as the numerous islands, rapids, falls, and cascades that the Expedition knew in this section, are fast disappearing under the reservoir waters.
The river route downstream from the series of dams has changed very little, except for the towns, cities, and industries which are scattered along the river bank. The lands bordering the Expedition's overland route from the mouth of the Walla Walla River across to the confluence of the Snake and Clearwater Rivers are now devoted principally to farming and grazing.
The names bestowed by Lewis and Clark on rivers, streams, and other terrain features did not fare any better here than elsewhere along the trail. The Columbia River, of course, was already known and named, as was Cape Disappointment. The Snake River, however, was named Lewis' River by the Expedition. The present-day Lewis River in Washington, a tributary to the Columbia near Woodland, was not named by the Expedition; they used the Indian name, Chawahnahiooks.
The Expedition has been memorialized by the State and other agencies in Washington in several ways. Clarkston, Washington, and Lewiston, Idaho, located on opposite banks of the Snake at its junction with the Clearwater, are named for the explorers. The Corps of Engineers reservoir formed by Ice Harbor Dam on the Snake, and a State park located at the confluence of the Snake and Columbia Rivers both are named after Sacajawea (Sacagawea), the Indian woman who accompanied the Expedition. The Lewis and Clark Trail State Park is located between Dayton and Waitsburg along the Expedition's return route.
The experiences and events concerned with the Expedition's travels in Washington were, perhaps, less significant than those that occurred elsewhere in their journey through the Pacific Northwest.
The contributions of the Expedition to the State also were less than in Oregon, Idaho, and Montana. Ships had been trading with the Indians at the mouth of the Columbia for years and better claims to the adjacent lands had been established by other countries long before Lewis and Clark appeared on the scene. Their explorations of the lower Snake River country and the upper Columbia River area were significant, however, and the friendly relationships that were established with the Indians in that region made it much easier for the trappers and fur traders who followed.
There are several sites in Washington related directly to the Lewis and Clark Expedition which still may be recognized and visited today. These include Beacon Rock, Chinook Point, Cape Disappointment, and the campsite at the mouth of the Snake River. One of these sites, the camp at the mouth of the Snake River, has been flooded by the backwaters of a reservoir. However, Sacajawea State Park, located on the shore of the reservoir, overlooks the campsite. The other three sites are included, at least in part, within the boundaries of Beacon Rock, Fort Columbia, and Fort Canby State Parks.
The Federal agencies which have administrative responsibilities for lands along the Trail in this State include the Corps of Engineers, Bureau of Land Management, Bureau of Sport Fisheries and Wildlife, Forest Service, National Park Service, and Bureau of Reclamation.
The State Parks and Recreation Commission administers several park and recreation areas along the Trail route, some of which relate directly to the Lewis and Clark Expedition. The State Department of Game supervises areas along the route which include three game ranges and three trout and steelhead hatcheries. The State Department of Fisheries administers five salmon hatcheries, a fishway, and a shell fish laboratory on or near the Trail along the lower part of the Columbia River.
By an act of the 1955 Washington State Legislature, the highway network along the Columbia from Vancouver via Kennewick, to the mouth of the Walla Walla River, and from there via Walla Walla and Pomeroy to the Idaho border at Clarkston, Washington, was established as a Lewis and Clark Highway. The State Highway Department has erected Lewis and Clark Highway markers along this officially designated route.
Four counties (Benton, Clark, Franklin, and Walla Walla) have developed recreation areas along the Expedition route. Many of the cities, towns, and villages along the route have parks and some of these municipalities have erected signs or monuments concerning the Expedition. The Pacific Power and Light Company has developed recreation facilities on its three reservoirs on the Lewis River east of Woodland, Washington. The Columbia Historical Pageant, an annual event sponsored by the city of Richland, includes scenes relating to the Lewis and Clark Expedition. It takes place in mid-summer at Columbia Park, a Benton County facility located on the bank of the Columbia east of Richland.
As a result of the dam and reservoir construction activities of the Corps of Engineers on the Columbia and Snake Rivers in Washington, extensive archeological salvage surveys have been accomplished within the reservoir areas prior to flooding. These surveys financed by the Corps were made by teams from the University of Washington and Washington State University. Many of the excavated sites had been occupied by Indians at the time the Expedition was in this region. Archeologic investigations along the lower Columbia River have been less extensive.
In July 1964 archeologists from Washington State University, while relocating an old Indian graveyard at the mouth of the Palouse River, opened a casket made from a dugout canoe. Inside the casket, in a leather pouch buried with the remains, they found an 1801 Jefferson Presidential Medal like those given by Lewis and Clark to important Indian chiefs. On one side is a likeness of President Jefferson with the words, "TH JEFFERSON, PRESIDENT OF THE U. S., AD 1801," around the edge. On the other side are two clasped hands, a peace pipe and a hatchet crossed, and the words "PEACE AND FRIENDSHIP."
There are 69 recreation sites in the general area, with 42 existing and 27 proposed. Some 26 areas provide water-based recreation and 26 additional water-based recreation areas are proposed for development by State and Federal agencies near the Trail. The total area of land and water included within the 69 sites is about 62,765 acres. Of this amount approximately 56,000 acres of land and water are included within the existing sites; 6,500 acres will be within the proposed sites. The facilities existing and to be developed provide opportunities for all forms of water-oriented recreation, and for camping, picnicking, hunting, hiking, horseback riding, sightseeing, nature study, winter sports, and many other attractions.
The tables found on pages 160 to 166 list pertinent data relating to existing and potential recreation areas, historic sites, and conservation areas along or near the route of the Lewis and Clark Expedition in Washington.
For the purpose of discussing recreation resources and needs with particular emphasis on the existing and proposed recreation sites, the route followed by the Lewis and Clark Expedition in Washington is treated in four sections, which follow:
CLARKSTON TO ICE HARBOR DAM
The Expedition route from Clarkston to Ice Harbor Dam includes all but the lower 10 miles of the Snake River portion of the route in Washington.
Four dams have been authorized and are in various stages of construction by the Corps of Engineers along this stretch of the Snake. Ice Harbor Dam, completed in 1962, is located 10 miles upstream from the mouth of the river. The reservoir formed by this dam has been named for Sacajawea, the Indian woman who accompanied the Expedition. Sacajawea Lake, which has a surface area of 9,200 acres, extends upstream 32 miles to Lower Monumental Dam site. This dam, scheduled for completion by 1967, will back water up to Little Goose Dam site, creating a reservoir nearly 30 miles long. Little Goose Dam, when completed in 1968, will form a 37-mile-long reservoir ending at Lower Granite Dam site. Lower Granite Dam, now in the preliminary construction phase, is expected to impound water by early 1971. This last dam of the series will back water a few miles up the Snake River past Clarkston, which is 32 miles upstream from the dam site.
The Corps of Engineers has developed four recreation areas on Sacajawea Lake, one of which has been named Charbonneau Park after the Indian woman's husband. The Corps has selected 12 potential recreation sites for development on the other three reservoirs in this section. When all of the reservoirs have been filled on this stretch of the river, the adjoining shore-lands will be under the administrative jurisdiction of the Corps of Engineers. It is the Corps policy to lease lands to State and local agencies for recreation and fish and wildlife purposes. Locks at each of the four dams will permit the passage of pleasure craft and commercial boats.
At the present time, the nearest State park is Palouse Falls State Park, located about five miles north of the Snake River on the Palouse River, a tributary to the Snake. The Kamiak Butte State Park is also north of the Snake near Pullman. No existing county or local parks are along this section of the Trail except those in the cities of Clearwater, Colfax, Dayton, and Pullman. Columbia County has identified a potential park on the Snake River at the mouth of the Tucannon River.
No highways or roads closely parallel the Snake in this section for any great distance, but railroads follow both banks of the river upstream for about 65 miles and then along the north or right bank as far as Lewiston, Idaho, across from Clarkston, Washington. U.S. Highway 410 follows the river west of Clarkston for about seven miles to where the river turns sharply to flow north. U.S. Highway 295 crosses the river about 55 miles downstream from Clarkston on the only bridge between the one at Clarkston and the road across Ice Harbor Dam at the lower end of this stretch of river. Lyons Ferry, crossing near the mouth of the Palouse, will be replaced with a highway bridge upon completion of Lower Monumental Dam. County roads provide access to the river at several points on both banks of the river. When the other dams are completed and recreation areas have been developed on the new reservoirs, the access situation undoubtedly will be much improved along this section of the Trail.
The existing and planned Corps of Engineers recreation developments on Sacajawea Lake, together with the Corps' plans for development on the other three reservoirs, should provide sufficient opportunities for access and recreation use of this section of the Lewis and Clark Trail for the foreseeable future.
ICE HARBOR DAM TO BONNEVILLE DAM
The Expedition route from Ice Harbor Dam to Bonneville Dam includes the lower ten miles of the Snake River and a 175-mile portion of the Columbia River extending down to the upper limit of tidewater on the river.
At the time the Expedition passed by, this part of the Columbia contained falls, cascades, and rapids. Now John Day Dam, the last in a series of four dams being constructed by the Corps of Engineers on this stretch of the Columbia, is nearing completion. The first dam in this series, completed in 1943, is Bonneville, located at the site of the rapids or cascades farthest downstream on the Columbia. The 20,600-acre reservoir formed by this dam backs water 47 miles up the river to The Dalles Dam, completed in 1957. The Dalles Dam, in turn, forms an 11,000-acre reservoir which extends 24 miles upstream to the site of John Day Dam. When this dam is completed in 1968, it will inundate the 75-mile stretch of river between John Day and McNary Dams. The reservoir created by McNary Dam in 1953 is 61 miles long and has a surface area of 38,800 acres. It extends up the Columbia almost 30 miles past the mouth of the Snake and also backs up the Snake 10 miles to Ice Harbor Dam.
Except for a view point and visitors' building at the dam, the Corps of Engineers has developed no recreation areas on the Washington shore of Bonneville Reservoir. The Corps has developed and administers recreation sites on both The Dalles and McNary reservoirs, however, and has authorized such developments by State and local agencies. The Corps also has located three potential recreation areas on McNary Reservoir, two of which will be administered by Benton County. The Corps identified eight sites that will have potential for recreation development when the new John Day Dam reservoir is filled. Locks in all four of the dams along this section will permit passage of recreation boaters as well as commercial boat traffic.
The Bureau of Land Management has identified several tracts of public domain lands with recreation potential along this section of the Expedition route. There is a small segment of Gifford Pinchot National Forest bordering the Trail along the north shore of Bonneville Reservoir. Its potential for recreation development is limited because of problems of terrain and access; however, one potential recreation development site has been identified. The Forest Service's Cascade Crest Trail, which follows the Cascade Range across Washington and Oregon, crosses the Columbia River at this point.
The Bureau of Sport Fisheries and Wildlife administers McNary National Wildlife Refuge, located east of McNary Reservoir and south of the Snake River. This refuge includes about 2,000 acres of land and water. The Bureau has plans for a wildlife refuge, to be called the John Day Waterfowl Management Area, on the John Day Reservoir when it is completed. It would include about 7,500 acres of land on the Washington shore of the reservoir.
The State and local recreation areas located on McNary reservoir include Sacajawea State Park, at the mouth of the Snake River; Columbia Park (Benton County); Chiawana Park (Franklin County); Hood Park (Walla Walla County); and Riverside Park (City of Richland). Benton County has expressed interest in developing two potential park areas on McNary Reservoir. The Corps of Engineers administers Wallula Park at the mouth of the Walla Walla River on McNary Reservoir.
The State Game Department administers the 8,945-acre McNary Game Range, located along the east shore of McNary Reservoir and south of the mouth of the Snake River. This area provides opportunities for waterfowl and upland bird hunting, fishing, and boating.
At The Dalles Reservoir, the Corps of Engineers has developed three recreation areas on the Washington shore. Two of these, Maryhill Park and Avery Area, also are administered by the Corps. The other area, Horsethief Lake Park, has been turned over to the Washington State Parks and Recreation Commission for administration. The Corps has plans for development of one other recreation area at this reservoir. North of the reservoir about 16 miles is another State park area, Brooks Memorial State Park, located along U.S. 97.
The road across the top of Ice Harbor Dam connects with roads along both sides of the Snake River from the dam down to the mouth of the river. U.S. Highway 395 follows the left bank of the Columbia from Pasco to the Oregon border. U.S. Highway 410, which begins at Clarkston, comes down the Walla Walla River and then turns north along the Columbia to Pasco. Here it crosses the Columbia to Kennewick, and then follows the right bank of that river upstream to the mouth of the Yakima River. State Highway 12 starts at Kennewick, heads south overland to McNary Dam, and then turns west to follow the right bank of the Columbia downstream as far as Maryhill. There U.S. Highway 830 takes over and continues along the river almost to its mouth. These last two highways, together with the portion of U.S. 410 from Kennewick to the mouth of the Walla Walla River, are a part of the State's Lewis and Clark Highway.
In the 10-mile stretch of the Snake included in this section, there is a bridge at the mouth of the river and a road across the top of Ice Harbor Dam. On the Columbia, between the mouth of the Snake and Bonneville Dam, there are five toll bridges and one ferry crossing.
Railroads parallel the entire route through here, running close to the water between the highway and the river on both sides of the Columbia and along the north bank of the Snake. This complicates the problem of access, and interferes with the development of otherwise suitable recreation sites.
A large sign, erected by the Kennewick Women's Club, marks the farthest point upstream on the Columbia River reached by members of the Expedition. A State Highway Department sign near Sacajawea State Park pays tribute to Sacajawea, the only female member of the Expedition.
The existing and planned recreation areas along this section are not sufficient to meet the demand. Access is especially inadequate along the Washington side of Bonneville Reservoir where there are no public recreation areas. Improvement of access is impeded by the shortage of suitable development sites and the railroads which lie between the highway and the water.
BONNEVILLE DAM TO THE PACIFIC OCEAN
The Expedition route from Bonneville Dam to the Pacific Ocean includes the final portion of the Expedition's westward route down the Columbia River to its destination. Except for the cities, towns, and industries located on the river banks, this section of the Expedition route remains much the same today as when the explorers passed this way.
The Corps of Engineers has improved the Columbia River channel to a depth of 35 feet as far upstream as the mouth of the Willamette River, and to 30 feet as far as Vancouver. From Vancouver to Bonneville Dam the channel depth has been improved to 27 feet. Bonneville Dam, the farthest downstream of all the dams on the Columbia, is at the upper end of tidal water on the river, 145 miles from the Pacific Ocean. Captain Clark noted the effect of the tide here in his journal entry for November 2, 1805.
A good system of highways follows the Columbia along this section, but access from these highways to the river needs to be improved. U.S. Highway 830 runs west along the river from Bonneville Dam to Vancouver. This section of the highway is a part of the State's Lewis and Clark Highway which begins at Vancouver and goes east. At Vancouver, U.S. 830 joins Interstate Highway 5 and U.S. Highway 99, then, still following the river, turns north to Kelso. At Kelso, Interstate 5 and U.S. 99 continue north, but U.S. 830 turns west with the river as far as Skamokawa, 30 miles from the river's mouth, before turning away. Near the ocean, State Highway 401 and U.S. Highway 101 follow the lower 15 miles of the river to Ilwaco. State and county highways provide access from Ilwaco to Cape Disappointment and the ocean beaches.
There is a bridge across the river at Vancouver, and another near Longview, Washington. At Cathlamet a bridge, island, and ferry combination crosses the river, while at the mouth of the river a ferry, soon to be replaced by a bridge, provides the fourth crossing in this section. The new bridge will be named the Lewis and Clark Bridge.
A railroad follows the river downstream as far as Longview. Because the railroad is between the river and the highway for most of this distance, public access to the river is impeded.
The concentration of population in the metropolitan Portland, Oregon, area has created a demand for recreation which exceeds available supply. The only Federal area providing recreation opportunities along this section of the Trail is Fort Vancouver National Historic Site, administered by the National Park Service at Vancouver. This 90-acre historical area commemorates the establishment in 1824 of Fort Vancouver, a Hudson's Bay fur trading post. It includes the site of a new fort, built in 1829 about a mile west of the original fort, and occupied by the Hudson's Bay Company until 1860.
Of the five State parks along the Trail in this section, four include or adjoin areas related to the Lewis and Clark Expedition. Beacon Rock, a huge monolith located just a few miles below Bonneville Dam, was first noted and named by Lewis and Clark; it is now the principal feature of Beacon Rock State Park. Fort Canby State Park is on Cape Disappointment, a headland named by the explorer John Meares in 1788. Lewis and Clark knew of the cape's location and they recognized and explored it in 1805. Fort Columbia State Park and museum on Chinook Point overlooks the site where the Expedition camped for two weeks while exploring that region. The campsite itself is marked by a sign at Lewis and Clark Campsite State Park. The other State park in this section is Paradise Point State Park, about five miles off the Expedition route where Interstate 5 and U.S. Highways 830 and 99 cross the east fork of the Lewis River. Located farther upstream on the east fork of the Lewis River is Clark County's Lewisville Park, the only county park along the Trail in this section.
The State Department of Game administers three trout and steelhead hatcheries along this section of the Trail and the Department of Fisheries has five salmon hatcheries and a fishway. There are two small-boat basins on the Washington side of the Columbia which contribute to the recreation picture. These are the Ilwaco Small-Boat Basin, administered by the Port of Ilwaco, and the Chinook Small-Boat Basin, administered by the Port of Chinook. A similar facility has been proposed for development at Cathlamet by the Wahkiakum County Port District No. 1.
The Pacific Power and Light Company has developed five recreation areas on its three reservoirs located on the Lewis River east of the Expedition route. Only the two areas on Lake Merwin, the reservoir nearest the Trail, are included in this study.
There are several markers and monuments related to the Expedition's experiences along this section of the route. A sign near Chinook identifies the general location of their campsite west of Chinook Point. A rock monument and sign at Long Beach mark the approximate spot where a tree once stood upon which Captain Clark carved his initials on November 19, 1805.
WALLA WALLA RIVER TO CLARKSTON
This section includes the portion of the Expedition's return route from the mouth of the Walla Walla River overland to where it crossed the Snake River, just a few miles downstream from Clarkston. The exact location of the Expedition route is difficult to determine because the party followed old Indian trails which have long since disappeared.
Most of the land bordering the Expedition's route is privately owned and devoted to farming and grazing use. U.S. Highway 410, which follows the route quite closely, has been designated as the Lewis and Clark Highway by the State legislature. Beginning at the mouth of the Walla Walla River, U.S. Highway 410 heads east along that river to the city of Walla Walla, then turns northeast through Waitsburg, Dayton, and Pomeroy to Clarkston, on the Snake River.
The Corps of Engineers administers Wallula Park, located at the mouth of the Walla Walla on the shore of a bay formed by the backwaters of McNary Dam. The Bureau of Reclamation has proposed the construction of three irrigation reservoirs along this section of the Expedition's route. The only one of these three projects with recreation potential is Dayton Dam.
Whitman Mission National Historic Site, located just west of Walla Walla and administered by the National Park Service, commemorates the establishment of a mission among the Cayuse Indians by Dr. Marcus Whitman in 1836. The 98-acre site includes the area where the mission stood, and the grave of Dr. and Mrs. Whitman and 11 others, all of whom were killed by Indians who attacked and destroyed the mission in 1847.
Lewis and Clark Trail State Park, located on the Lewis and Clark Highway between Waitsburg and Dayton, is the only State park facility along this section of the Expedition route. W. T. Wooten Game Range, administered by the Washington Game Department, is located a few miles south of the Expedition's route near Dayton. Its 11,235 acres are dedicated to wildlife management, providing opportunities for upland bird and big game hunting, trout fishing, and camping. In addition, lands and buildings within the game range have been made available to the Washington State Parks and Recreation Commission for organized group camping purposes. This facility is known as Camp Wooten. Asotin Game Range, comprising 6,357 acres, is located several miles south of the Expedition route in the edge of the Blue Mountains, south and west of Clarkston.
The city of Walla Walla, though not on the route of the Expedition, is situated on the Lewis and Clark Highway. It has a city park which provides facilities for overnight camping.
There are no county parks located along this section, but some of the counties have erected signs and markers related to the Expedition. Both Garfield and Asotin Counties have put up informational signs where the Lewis and Clark Highway crosses the county line. A monument on the lawn of the Columbia County Court House in Dayton marks the Lewis and Clark Trail.
The city of Waitsburg has erected a sign at the edge of town proclaiming its location on the Lewis and Clark Trail, and another sign at the local high school commemorates the Expedition and some of its members.
If the Bureau of Reclamation's proposed Dayton Dam is constructed and if recreation facilities are installed on the reservoir, it would help meet the demand for recreation opportunities along this section of the Expedition route. A need exists for a recreation area in the vicinity of Pomeroy, and for another along the Snake River below Clarkston.
The Corps of Engineers' potential areas along the Snake near Clarkston, which might be suitable for development when Lower Granite Dam and Reservoir are completed, would help provide for the needs in this section.
The State-designated Lewis and Clark Highway in this section needs to be relocated in some places, using existing State Highways, in order to follow more closely the Expedition's return route.
Demand for recreation areas along the Lewis and Clark Trail in Washington will depend largely on the interest aroused in the project and on the quality of the effort made to identify, mark, and develop areas along the route. Some indication of demand can be found in population projections for the State as a whole and in trends in travel on the highways which parallel the route.
The 1960 Bureau of Census report for Washington lists 2,853,214 inhabitants, an increase of almost half a million, or 19.9 percent, over the 1950 census. Population projections indicate that by the year 2000 the number of inhabitants will have increased to 6,378,000. The largest city along the Trail in Washington is Vancouver, with 32,464 inhabitants in 1960. Clarkston, at the eastern end of the route across the State, had 6,209 inhabitants that same year. Other principal cities along the route in Washington and their 1960 census totals are Pasco, 14,522; Kennewick, 14,244; Richland, 23,548; Longview, 23,349; and Kelso, 8,379.
Washington is a popular target for vacationists. Mount Rainier and Olympic National Parks, the five National Forests along the Cascade Mountains with their wilderness areas, the waters of Puget Sound, a system of more than 100 State parks, and the Pacific Ocean beaches are a combination that attracts recreationists from all over the country. A very high percentage of these tourists will use highways that parallel or intersect the Lewis and Clark Trail. If additional facilities are provided along the Trail and if the route is publicized and promoted properly, it could become a popular tourist attraction. This would increase considerably the demand for recreation sites along the Trail.
The Interstate Highway System plan for Washington involves three highways, two of which cross the Lewis and Clark Trail. Interstate Highway 5, which runs from Mexico to Canada, crosses the Columbia River between Portland, Oregon, and Vancouver, Washington, and then parallels the Trail downstream (north) on the Washington shore for 40 miles until the river turns west again. Highway 82, which takes off from Interstate 80N at Pendleton, Oregon, will cross the Columbia near McNary Dam and then continue on to Seattle. This Interstate Highway route has its beginning as Interstate 80 on the east coast.
Without doubt, these Interstate Highways when completed will prove popular with recreationists as a means of getting quickly to their vacation destination. Such highways will not be good "recreation roads," however, because of high speed traffic and inability to leave the highway except at certain designated access points. As a result, demand may increase for recreation areas on the Washington side of the Columbia River across from Interstate Highway 80N, which runs along the Oregon side of the river for about 165 miles upstream from Portland. Recreationists may prefer the more leisurely pace along the Washington side of the river, especially if additional recreation facilities are provided at suitable intervals.
The principal problems affecting the recreation use of the Lewis and Clark Trail in the State of Washington are related to topography. The route used by the Expedition on its westward journey followed the narrow confines of the canyons of the Snake and Columbia Rivers. These natural avenues of travel were used later by the railroads and highways. Still later, because of the hydroelectric power potential in the river and the need to improve water transportation, dams were built. Consequently, most of the readily accessible lands which would be best suited to recreation purposes have been used for highways, railroads, or reservoirs. The problem now is to find suitable or adaptable recreation sites, and to obtain access, since often either a railroad runs between the highway and the river, or a precipitous cliff lies in the way. The access problem can best be solved by cooperative efforts of the Federal, State, local, and private agencies involved. Washington is now preparing a State Recreation Plan which will consider similar problems throughout the State and could provide the needed impetus for a solution.
A basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Expedition route appears in the Recommended Program, page 20. The recommended routing of a Lewis and Clark Trail Highway in Washington is indicated on maps 20-23. Specific recommendations relating only to Washington follow:
1. The Corps of Engineers should work with the State and local agencies to provide recreation sites and improved access along the north shore of Bonneville Reservoir.
2. The Corps of Engineers should continue its program of identifying and developing recreation areas on its new reservoirs on the Columbia and Snake Rivers.
3. The Bureau of Sport Fisheries and Wildlife should proceed with its plans to develop the proposed John Day Waterfowl Management Area, including the acquisition of needed additional lands, in order to provide waterfowl benefits and improve waterfowl hunting opportunities along the Lewis and Clark Trail.
4. The Bureau of Land Management and the Forest Service should continue to study the recreation potential of lands under their respective jurisdictions along the route of the Lewis and Clark Expedition in the State, and develop those sites suitable for recreation.
5. The Bureau of Reclamation should develop recreation sites on the reservoir behind Dayton Dam in the event that this project is constructed.
6. The State and local agencies should provide additional recreation areas, signs and markers, and improve access along the Columbia River between Bonneville Dam and the mouth of the river.
7. The local agencies should provide additional recreation opportunities and markers along the return route of the Expedition between the mouth of the Walla Walla River and Clarkston, Washington.
8. Some galleries in existing museums along the Trail should be oriented toward the Lewis and Clark exploration.
The Lewis and Clark Expedition traveled by canoe down the Columbia River in the fall of 1805 enroute to the mouth of the river and their destination, the Pacific Ocean. They spent the winter at a site they named Fort Clatsop, close to the ocean and the mouth of the river. On the return trip in the spring of 1806, their mode of transportation was again canoe until they reached the vicinity of present-day The Dalles, Oregon. Here they abandoned canoes in favor of horses, crossed to the Washington shore, and continued their journey eastward along the north bank of the Columbia River. Many of the sites directly associated with the Lewis and Clark Expedition can be visited today. These include Hat Rock, Fort Clatsop, the Salt Cairn and Tillamook Head.
The upper 120 miles of the Expedition's Columbia River route soon will be entirely flooded by reservoirs when the John Day Dam, now under construction, joins Bonneville, The Dalles, and McNary Dams to complete the series. The shorelands of these four reservoirs offer opportunities for needed recreation areas and facilities to provide for water-oriented recreation activities. However, problems of access and shortage of suitable terrain complicate the picture.
The Columbia Gorge section of the Columbia River is not only the most scenic portion of the Expedition's route in Oregon but is also the most highly developed from the standpoint of recreation areas and facilities. These include developments by the Forest Service on lands of the Mt. Hood National Forest and a concentration of State parks and recreation areas.
Demands by recreationists from nearby metropolitan areas who are attracted by the scenic qualities of the Gorge, however, have generated an even greater need for development.
The lands bordering the 145-mile section of the Expedition's river route downstream from Bonneville Dam are almost entirely privately owned. This section is much more heavily populated than the area upstream and there are only a few recreation areas at present.
The area around Fort Clatsop and south along the ocean to Cannon Beach, Oregon, (which was as far south as members of the Expedition traveled) receives heavy recreation use at the present time. This recreation demand will be even greater when the bridge replacing the auto ferry is completed across the mouth of the river. Additional recreation areas will be needed here as well as improved access to the ocean beaches.
There are 65 existing and 26 proposed recreation sites along the Lewis and Clark Trail in Oregon. Together these sites provide over 97,000 acres of land and water, making possible all forms of water- and land-oriented recreation.
The Lewis and Clark Expedition entered the last state of their epic westward explorations when, on the second day of floating down the Columbia River, they touched the shores of what was to become the State of Oregon. During that day they observed and named Hat Rock, a peculiar basaltic outcropping on the south side of the river, according to Captain Clark's field notes. The campsite that night is believed to have been on the south bank of the river a few miles downstream from the mouth of the Umatilla River.
During the next three weeks, as the Expedition continued by canoe down the river to its mouth, they camped at four other sites on the Oregon shore. They were the first white men to see the John Day and Deschutes Rivers. They named the former river Lepage after one of the men and used the Indian name, Towornehiooks, for the latter. The Willamette River was missed entirely.
The many rapids and falls encountered along the upper part of the Columbia River slowed their progress considerably. The portages around Celilo Falls and present-day The Dalles were difficult but did not cause too much trouble. Finally, on November 2, 1805, they passed the last series of rapids on the Columbia. A week of travel brought them from here to the mouth of the river, a distance of about 145 miles.
After reaching the vicinity of the ocean, the Expedition camped for two weeks on the Washington shore while looking for a suitable winter campsite. They decided that the south side of the river offered better opportunities for a camp, and on November 26, 1805, they crossed to the Oregon shore. A good site was found on the left bank of a river now called the Lewis and Clark River which empties into Young's Bay, south and west of present-day Astoria, Oregon.
Construction was started immediately on a fort consisting of seven cabins and a stockade, and called Fort Clatsop after a local Indian tribe. They moved in just before Christmas, even though the buildings were not completed.
The winter at Fort Clatsop was spent in preparation for the long trip home. A salt works was set up on the coast and a good quantity of salt was obtained by boiling sea water. The hunters brought in elk which provided food as well as material for making clothes. A trip across Tillamook Head to obtain oil and blubber from a stranded whale was not too successful. The Indians already had salvaged everything of value and the explorers had to give part of their fast dwindling supply of trade goods in exchange for a small quantity of whale oil.
The long winter was hard on the health and morale of the men, so when spring came they were anxious to get started on the long trip home. On March 23, 1806, traveling by canoe, the party headed up the Columbia River.
For the first part of the return trip the group stayed close to the Oregon shore. Not knowing that the mouth of the Willamette was concealed behind a group of islands, they again passed it by. However, while camped further upstream for a week to replenish their food supply by hunting, they learned from local Indians of the Willamette's existence and location. Captain Clark returned with a small group and explored the lower reaches of the Willamette River about as far upstream as the city limits of present-day Portland, Oregon. He sighted and named Mt. Jefferson while near the mouth of the Willamette.
When the Expedition resumed its journey, the rapids on the Columbia proved even more troublesome than they had been the previous fall because of the higher spring water level. To avoid the difficult portages and to make better time, they decided to trade with the Indians for pack horses to haul their supplies and equipment. The Expedition camped for three days near present-day The Dalles, Oregon, at a place they called Rock Fort, while Captain Clark crossed the river to engage in horse-trading activities with the Indians. Then, on April 18, 1806, the main group crossed to the Washington shore where, traveling overland along the north bank of the river, they continued their upstream journey.
The Expedition spent nearly four and a half months in Oregon altogether. Of this time, nearly four months were at or near Fort Clatsop, while the remaining two weeks were spent at camps on the Oregon shore during the outbound and return journey on the Columbia River. Except for the hunting excursions near Fort Clatsop, the trip to see the whale, and the exploration of the lower Willamette River, the Expedition did not see much of the State of Oregon.
The first part of the Expedition's river route along a portion of Oregon's northern boundary has changed considerably during the intervening years, due principally to the construction of dams along this stretch of the Columbia River. What was formerly a series of turbulent rapids and falls when the Expedition passed this way will become four consecutive quiet pools when the last of four power dams is completed in the near future. Hat Rock, though, was not flooded by the backwaters of McNary Dam. It remains an easily recognized landmark and the principal feature of Hat Rock Park on the shore of McNary Reservoir.
The lower section of the river route is still much the same except for the cities, towns, industries, highways, and railroads which occupy much of the river bank. The immediate area around the site of Fort Clatsop is being restored to a natural state. The site of the salt cairn and the beach where the whale was stranded are now within the city limits of Seaside and Cannon Beach, respectively. Tillamook Head, or Clark's Point of View, probably has changed the least of all. Still wild and rugged, it is much the same as the day Captain Clark stood on top of the bluff and marvelled at the view of the Pacific Ocean.
Most of the names given by the explorers to the Oregon landmarks have disappeared. Clark's Point of View became Tillamook Head, and Ecola or Whale Creek has been changed to Elk Creek. Likewise, the large bay near the fort which they named Meriwether Bay is now called Young's Bay, while Point William, on the Columbia, is now Tongue Point. But some of their names have remained: Hat Rock, Mount Jefferson, Fort Clatsop.
All visible traces of Fort Clatsop have long since vanished. When the Expedition started for home in March of 1806 they presented the fort and its furnishings to a local Clatsop Indian chief. Traces of the fort persisted for many years after the Indians were gone; many of the early settlers mentioned seeing it. The present replica of the fort is located as close to the original site as studies of the journals and maps could determine.
The State of Oregon has remembered the leaders of the Expedition in several ways. The river flowing past the fort site, which the Expedition called by its Indian name, Netul, has been renamed Lewis and Clark River. The bridge which is being constructed across the mouth of the Columbia River will be called Lewis and Clark Bridge. Farther upstream on the Columbia, on the east side of the Sandy River, is Lewis and Clark State Park. Lewis and Clark College, located in Portland, Oregon, also commemorates the Expedition's leaders. Camp Meriwether, a Boy Scout camp, is located on the Oregon coast south of Tillamook, Oregon. The camp contains a replica of a Mandan Indian earth lodge and the pageant put on by the Scouts each week during the summer for the benefit of new arrivals includes appropriate references to the Expedition.
The Expedition's contributions to the State were considerable. Their experiences and discoveries, which were well publicized after the Expedition's return to civilization, led John Jacob Astor to establish a fur trading post, Fort Astoria, at the mouth of the Columbia. This American establishment, which was constructed in 1811 just five years after Lewis and Clark left Fort Clatsop, undoubtedly strengthened claims of the United States to the Oregon Territory.
The Lewis and Clark Expedition traversed about 310 miles of the Columbia River which now forms the boundary between Oregon and Washington. It is possible to visit several sites in Oregon which relate directly to the Lewis and Clark Expedition. These include Hat Rock, Fort Clatsop, the Salt Cairn and Tillamook Head. Three of these sites, Hat Rock, Tillamook Head, and Fort Clatsop, are within the boundaries of established State or Federal park areas. The site of the Salt Cairn is marked by a monument erected by local civic groups.
Because of the outstanding attractiveness of the river, a "recreation ribbon" of parks and other recreation sites paralleling the river has been developed, particularly in the area of the Gorge between Sandy River and The Dalles.
There are 65 existing and 26 proposed recreation sites along the river. Thirty provide some form of water-based recreation; 18 additional water-based recreation areas are planned by State and Federal agencies near the Trail. Over 97,000 acres of land and water are included in the 91 sites. All forms of water- and land-oriented recreation opportunities are offered.
A detailed list of existing and proposed points of recreation and historic interest along the Trail, and pertinent information concerning each, are found in the tables on pages 162 to 166.
The State Parks and Recreation Division of the State Highway Department administers 30 State parks and waysides along the route of the Lewis and Clark Expedition in Oregon. These areas, most of which are located along the Columbia River, enable recreation uses such as picnicking, camping, swimming, and boating. Some of the parks and waysides contain excellent scenic viewpoints and natural features. The total attendance at these parks in 1963 was 2,467,333 day-use visitors. Two of the State parks are located on sites related to the Expedition. Ecola State Park, on the Oregon Coast, includes Tillamook Head, or Clark's Point of View. The other park is Lewis and Clark State Park, on Sandy River not far from its confluence with the Columbia. While not located on the site of an Expedition camp, this park is related to their explorations of the lower reaches of Sandy River, which they called Quicksand River.
Oregon has an excellent historical sign program under the direction of the State Highway Commission, working in cooperation with the Oregon Historical Society. Seven signs along the Lewis and Clark Trail in the State point out sites or describe events related to the Expedition's travels. The State Parks and Recreation Division has plans for expanding its interpretive program at Ecola, Lewis and Clark, and Fort Stevens State Parks to commemorate experiences of the Expedition in or near those areas.
The State Game Commission and the State Fish Commission administer numerous areas along the Expedition's route in Oregon. These areas provide recreation opportunities, public access, and fish and game protection, as well as contributing to sport fish and game production and to the commercial fish harvest. At each of the dams on the Columbia River along Oregon's northern boundary, fish ladders have been constructed by the Corps of Engineers to permit the passage of anadromous fish. Facilities also include opportunities for the public to observe the fish using these ladders. The reservoirs behind the dams and the 145-mile stretch of river below the first dam on the river form important Sport and Commercial fisheries as well as good waterfowl habitat.
The State Game Commission administers three game management areas along the Lewis and Clark Trail. Two of these, Government Island and Sauvie Island, are islands in the Columbia River which the explorers mentioned in their journals. They camped overnight on Government Island, which they called Diamond Island, in November 1805. Public hunting and fishing are permitted on the islands and, in the case of Government Island, opportunities are provided for boating, water skiing, and picnicking. The other game management area is Fort Stevens, on the Oregon Coast not far from Fort Clatsop.
The State Fish Commission operates four salmon hatcheries along the Trail, all located on tributaries to the Columbia River. These facilities supply salmon to assist in maintaining the Columbia River runs, and also provide an opportunity for the public to view hatchery operations. In addition to the State Fish Commission hatcheries, there are two hatcheries operated by the State Game Commission located along the Trail.
The Columbia Gorge Commission, a three-member State Commission, was created by the Oregon Legislature in 1953. It is charged with ". . . preserving, developing and protecting the recreation, scenic, and historic areas of the Columbia River Gorge..." This commission has been active in acquiring lands for public use and in coordinating other agency programs in the area lying between the Sandy River on the west and Celilo on the east. Because the Gorge is closely associated with events and experiences of the Lewis and Clark Expedition, the Commission's activities and programs contribute immeasurably to the goals of the Lewis and Clark Trail Study.
Several of the counties and municipalities along the route of the Expedition in Oregon have developed parks, recreation areas, and historic sites.
Archeologic salvage surveys were conducted at the several Corps of Engineers reservoirs along the Columbia River in Oregon prior to their construction. Similar investigations along the Columbia between the mouth of the river and Bonneville Dam have not been as extensive.
The National Park Service, Forest Service, Corps of Engineers, Bureau of Sport Fisheries and Wildlife, Bureau of Land Management, and Bureau of Reclamation have administrative responsibilities along the Expedition route in Oregon. The area administered by the National Park Service is Fort Clatsop National Memorial, the Expedition's campsite during the winter of 1805-1806 and one of the most significant sites along the Lewis and Clark Trail.
To facilitate the discussion of the recreation resources and needs along the Lewis and Clark Trail with particular emphasis on existing and proposed recreation sites, the westward Trail across Oregon is treated in four sections, which follow:
OREGON-WASHINGTON BORDER TO THE DALLES, OREGON
A series of three Corps of Engineers dams and reservoirs is involved in this 120-mile section of the Expedition's route down the Columbia River. One of these, the John Day Dam, is still under construction. Beginning at the downstream end of the section, the first dam is The Dalles Dam, completed in 1957. Its 9,400-acre reservoir extends 24 miles up the Columbia to the John Day Dam, located just below the mouth of the river from which it takes its name. When the John Day Dam is completed in 1968, it will form a reservoir 75 miles long, extending upstream to McNary Dam. The reservoir formed by the backwaters of McNary Dam, completed in 1953, extends 20 miles upstream to the Washington-Oregon boundary and 40 miles beyond.
Lake Celilo, the reservoir formed by the Dalles Dam, flooded Celilo Falls, the historic Indian fishing site on the Columbia River. The opportunity to view the Indians fishing from their precarious platforms extending out over the falls was a popular tourist attraction. This ancient practice still survives on the Deschutes River.
Two parks are located on the Oregon shore of Lake Celilo. One of these is Celilo Park, developed by the Corps of Engineers in cooperation with Wasco County and administered by the county under a lease agreement with the Corps. It is near the site of Celilo Falls and offers day-use recreation facilities for picnicking, swimming, boating, fishing, and water skiing. The other is Deschutes River State Park, on the Deschutes River arm of the reservoir. It has limited use at present as there are no facilities; these are planned, however, and will provide for day-use recreation activities such as picnicking, boating, swimming, and fishing. The Corps of Engineers has identified two potential recreation areas on the south shore of Lake Celilo: the Rufus Area and the Biggs Area, both located near the upper end of the reservoir.
Three recreation areas are located below The Dalles Dam. One of these is Seufert Park, administered by Wasco County under a lease agreement with the Corps of Engineers. The park has a half mile of shoreline on the river but the shoreline is hazardous and not considered suitable for public access. An abandoned cannery building is utilized for a museum of local archeologic and historic exhibits, and for other civic functions. The county proposes to develop picnic facilities, playgrounds, and additional roads and parking. Another recreation area is The Dalles Small Boat Basin, administered by the Port of The Dalles in cooperation with the Corps of Engineers. Recreation facilities here include a boat ramp, protected boat moorage, and marine supplies. The third area is The Dalles Viewpoint and Memorial, administered by the Corps of Engineers. The viewpoint provides a panoramic view of The Dalles Dam and Lake Celilo. The memorial marks entombed Indian remains removed from the reservoir area prior to flooding.
Preliminary planning for the reservoir to be formed by John Day Dam has included the identification of seven potential recreation areas and three potential wildlife management areas on the Oregon shore. One of the latter, the proposed John Day Waterfowl Management Area, would be administered by the Bureau of Sport Fisheries and Wildlife. It would include lands on both the Oregon and Washington shores of the reservoirs as well as the waters lying between.
The 20-mile stretch of McNary Reservoir shoreline between the dam and the Oregon-Washington border contains two recreation areas. One is McNary Beach, administered by the Corps of Engineers, which provides facilities for picnicking, boating, and swimming. The other area is Hat Rock State Park, named for the interesting rock formation first discovered and named by Captain Clark. This park provides opportunities for day-use recreation activities such as picnicking, swimming, boating, water skiing, and fishing.
The Bureau of Reclamation has constructed Cold Springs Reservoir, located about four miles south of the Columbia River near Hermiston, Oregon. The 1,500 acre reservoir and surrounding shorelands are administered by the Bureau of Sport Fisheries and Wildlife as the Cold Springs National Wildlife Refuge. Recreation activities at the reservoir include picnicking, fishing, and nature study.
Toll bridges cross the river just below The Dalles and McNary Dams. A new toll bridge was recently completed a few miles below the John Day Dam site. These three bridges and one ferry are the only crossings in this 120-mile section of the river.
There are no public campgrounds along this section of the Trail and the day-use facilities on McNary and The Dalles reservoirs are not sufficient to provide for present needs. Fulfillment of future needs along this section requires extensive recreation development at all three reservoirs.
THE DALLES TO SANDY RIVER
This section of the Trail takes in about 70 miles of the Expedition's route down the Columbia River and return. Sandy River marks the western edge of the Columbia River Gorge area, which is the channel cut by the Columbia through the Cascade Mountains. The Dalles is located near the Gorge's eastern boundary. This stretch of the Columbia is considered by many to be one of the most outstanding scenic attractions in the Pacific Northwest. The many waterfalls which are found along the precipitous, timber-clad cliffs of the Gorge add to its beauty and are an additional tourist attraction. Multnomah Falls, 620 feet high, is the largest and most famous of these falls.
The upper two-thirds of the section has been inundated by the waters backed up by Bonneville Dam, completed by the Corps of Engineers in 1943. The dam is located at the site of the first series of rapids encountered on the river when proceeding upstream, and is at the extreme upper end of tidal water on the Columbia, some 145 miles from the Pacific Ocean.
The 20,600-acre reservoir formed by Bonneville Dam extends 47 miles upstream to The Dalles Dam. Recreation development opportunities on this reservoir have been limited by such factors as unfavorable terrain and lack of access. Here again U.S. Highway 30 and the railroad between the highway and the reservoir complicate the access problem. Consequently, recreation use of the reservoir is not as great as normally would be expected.
The 24-mile stretch of river downstream from Bonneville Dam to the mouth of Sandy River receives heavy recreation use. This can be attributed to easy access, to terrain that is more suitable for recreation developments along the river, and to the area's proximity to the Portland metropolitan area.
The Columbia River Scenic Highway, which runs through the Columbia Gorge from the Sandy River almost to Bonneville Dam, also affects the recreation use in this area. This highway is the remaining section of the first highway constructed through the Gorge between Portland and Hood River. An engineering wonder of its day, the old road climbs high on the sides of the Gorge and affords good scenic vistas of the Columbia River. Many of the State parks and Forest Service recreation developments in this area are located along the scenic highway.
The Corps of Engineers has developed facilities at Bonneville Dam for visitors who wish to view operations of the dam and its fish ladders. Just below the dam at the Bonneville Salmon Hatchery, operated by the State Fish Commission, there are picnic facilities as well as opportunities for viewing hatchery operations. On the reservoir, the Corps has cooperated with the Port of Hood River in the development of a small boat basin at Hood River. Recreation developments at the basin include a boat ramp, protected moorage, and marine supply and repair facilities. The Corps presently is conducting a study to determine the feasibility of constructing a small boat basin at Cascade Locks in cooperation with the Port of Cascade Locks.
Mt. Hood National Forest borders the river and reservoir for nearly 30 miles in this section. These National Forest lands, in what is defined by the Forest Service as the Landscape Management Area (the area between the Columbia River and the Columbia Gorge breaks) are managed primarily for their recreation values. These include scenic, historic, archeologic, geologic, and botanic values, as well as recreation activities such as camping, picnicking, hunting, fishing, travel, and general sightseeing. A Secretary of Agriculture land classification order of 1915 designated that the lands adjacent to the Columbia River be known as the Columbia Park Division, to be managed for recreation purposes and coordinated with the purposes for which the National Forest was created.
The Forest Service modifies development sites at such recreation attractions as waterfalls and campgrounds if needed to accommodate large numbers of people. National Forest lands immediately adjacent to these occupancy sites, to bodies of water, and to routes of travel are managed so as to provide a pleasing forest environment or recreation zone for the forest traveler and visitor. The background scenery or primary foreground behind the recreation zones is managed so as to present an undisturbed and natural appearance. The Forest Service is preparing a comprehensive plan of management of the recreation resources in the whole Columbia Gorge, to be known as the Columbia Gorge Recreation Area Plan, which will spell out the details of management to achieve the recreation objectives.
The Forest Service has established five recreation development sites in this section, including two campgrounds, two picnic areas, and one concessioner operated lodge. There were no Lewis and Clark camp sites on these National Forest lands. The one historic area that has been established by the Service relates to an early wagon road in this area.
Seven potential recreation development sites have been identified on National Forest lands. Other types of potential areas identified include those which are particularly wild or scenic, or especially suited to mountain climbing, or to archeologic study. The Service also is studying the possibility of a scenic road which would run along the top of the bluff through the Gorge and would afford good vistas of the river and the route used by the Expedition.
The Bureau of Land Management administers some public domain lands in this section which might have potential for recreation. Except for a small tract which is proposed for addition to an existing State park, however, the recreation potential of these lands has not been fully evaluated.
In addition to the previously described hatchery below Bonneville Dam, there are two other State Fish Commission salmon hatcheries located in this section. Both the Oxbow and Cascade Hatcheries are located near Cascade Locks on tributaries to the Columbia. Visitors can observe hatchery operations but there are no other recreation facilities similar to those at the Bonneville Hatchery. Nearby Forest Service picnic areas, however, provide such facilities.
The State Game Commission operates the White River Game Management Area, located near Tygh Valley, south of The Dalles. The Game Commission also operates a fish hatchery in this section, located at Dee, south of Hood River.
The Columbia River below Bonneville Dam is a popular area for steelhead, salmon, and sturgeon sport fishermen. Sandy River has been well known for years because of its smelt fishery, but for some unknown reason smelt have not entered Sandy River for the past seven years even though they were in the Columbia.
The Oregon State Parks and Recreation Division has developed an excellent group of State parks in this section. Lewis and Clark State Park, located on the east bank of Sandy River near the U.S. Highway 30 (Interstate 80N) bridge, contains a historical marker relating to the Expedition. Although the Expedition did not camp here, they did explore the Sandy River area. Rooster Rock State Park, located on the Columbia about 10 miles upstream from the mouth of the Sandy, is a very popular day-use recreation area because of its swimming beach. Lewis and Clark camped at Rooster Rock on November 2, 1805. A historical marker near the park commemorates this event.
There are 22 other existing and two potential State parks in this section. Although some of the existing parks are not developed, they all offer outdoor recreation opportunities and most have facilities for a wide variety of activities, including camping, picnicking, hiking, swimming, boating, fishing, sightseeing, and nature study. Seven of the State parks are along the scenic highway in the Gorge. Six of the parks have frontage on the Columbia River or Bonneville Reservoir but in three of these areas access from the developed portion of the park to the water is cut off by a highway or railroad or both.
The only county park in this section is Oxbow County Park, administered by Multnomah County and located on the Sandy River a few miles above its mouth.
The section of the Trail from The Dalles to Sandy River coincides closely with the area of responsibility of the Columbia Gorge Commission, a three-member State Commission created by the Oregon Legislature in 1953. This Commission has been very successful in its program of bringing into public ownership those Gorge lands which have outstanding recreation, scenic, and historic values. A sum of $100,000 has been made available by the State Highway Commission for purchase by the State of lands in the Gorge which are recommended by the Gorge Commission and approved by the Highway Commission; approximately $80,000 has been spent. Since interest was first aroused in a "Save the Gorge" movement in 1951, some 3,170 acres have been acquired by public agencies through purchase, donation, and exchange.
Subsequent to the establishment of the Columbia Gorge Commission, two events took place in which the Commission's efforts were significant. One was the zoning of that part of the Gorge within Multnomah County against indiscriminate commercial and industrial developments. The other was the establishment of the Columbia River Highway, from Celilo west to Sandy River, as a Scenic Area by the State Scenic Area Commission. Along this highway, except at certain exempted locations, no new billboards may be constructed, and existing ones must be removed by July 19, 1969.
There are only two bridges across the Columbia River in this 70 mile section. One is at Hood River and the other at Cascade Locks. The latter is named Bridge of the Gods because it is located at the site where local Indian legends say a land bridge across the Columbia once existed.
The existing and planned recreation areas are not sufficient to provide for future needs even though this section now is highly developed. This situation is caused by the heavy demand resulting from the scenic attractions of the area, the nearby population pressures, and the highways, which add tourist use to local pressure. Additional areas are needed, especially those with usable frontage on the river or reservoir.
Better access to the Expedition's river route is needed. This problem is complicated by the limited access features of the Interstate Highway being constructed through here and by the difficulties connected with developing crossing sites over the railroad which usually lies between the highway and the water.
SANDY RIVER TO FORT CLATSOP
The section from Sandy River to Fort Clatsop includes the final 120-mile portion of the Expedition's travels on the Columbia River, site of the winter camp at Fort Clatsop, and the area of exploration along the Oregon coast south to the vicinity of present-day Cannon Beach.
There are no dams or reservoirs along this heavily populated stretch of the Columbia River. In contrast to the Trail section immediately upstream, only a small percentage of the land bordering the river is under State or Federal administration.
The National Park Service administers Fort Clatsop National Memorial, located at the site of the Expedition's camp during the winter of 1805-1806. This 125-acre area, which was authorized by Congress in 1958, contains a replica of the fort. The replica, which was constructed in 1955 through the community-wide efforts of individuals, organizations, and commercial firms in Clatsop County, was a feature of the celebration of the sesquicentennial of the Lewis and Clark Expedition. It was based on floor plan dimensions and other descriptions recorded in the Expedition's journals. At that time, the fort site was owned and administered by the Oregon Historical Society which had purchased the area in 1899 and administered it until its establishment as a national memorial. A new visitor center contains a museum with exhibits that tell the story of the Expedition and describe the adventures and experiences that occurred near the fort.
The Corps of Engineers has cooperated in the development of three facilities which contribute to recreation boating use of the Columbia. One is the Oregon Slough Entrance Channel near Portland, which is administered by the Portland Yacht Club. The other two are both small boat basins, one at Astoria and the other at Warrenton, administered by the City of Warrenton.
The State Parks and Recreation Division administers five park areas and has identified one potential area in this section. One of the existing areas, Ecola State Park, includes Tillamook Head, across which Captain Clark led a small party, including Sacagawea, on the trip to see a whale stranded on the beach south of there near present-day Cannon Beach. Gearhart Ocean Wayside is located on the ocean beach between Seaside and Gearhart, not far from the site of the salt cairn. Fort Stevens is situated near the ocean west of Fort Clatsop and Branley Wayside is alongside U.S. Highway 30 on a high bluff overlooking the Columbia River. Saddle Mountain State Park includes Saddle Mountain, a local landmark clearly visible from the site of Fort Clatsop.
The State Game Commission administers three game management areas along this section of the Trail. Two of the areas are on Columbia River islands which were related directly to experiences of the Expedition. Government Island, about three miles downstream from the mouth of the Sandy River, was the site of an overnight camp on November 3, 1805; almost all of this island is now owned by the Oregon Game Commission. This island, and two smaller ones nearby that are also owned by the Commission, are used for waterfowl management purposes. Public hunting and fishing are permitted on these islands, accessible only by boat. They serve also as popular bases for boaters, water skiers, picnickers, and campers.
Sauvie Island Game Management Area occupies a major portion of Sauvie Island, located at the confluence of the Willamette and Columbia Rivers. The Expedition made a lunch stop on the north side of the island on November 4, 1805, and Clark walked along the island for about three miles, but no one in the group realized the existence of the mouth of the Willamette River, hidden by this and other nearby islands. The Oregon Game Commission owns much of this large island and administers its holdings as a waterfowl management area and as public fishing grounds. Other recreation activities include boating and picnicking. The island is readily accessible from U.S. Highway 30 by a bridge across Willamette Slough.
The Fort Stevens Game Management Area, which adjoins Fort Stevens State Park, is used primarily for habitat development for game production. Recreation uses include public hunting, hiking, and picnicking. Its 1,466 acres of sand dunes along the ocean just south of the mouth of the Columbia also provide an access route through which fishermen may reach the river's south jetty.
There are two fish hatcheries close to the Trail in this section, both in Clatsop County. One is the State Game Commission's Gnat Creek Hatchery located near Westport where salmon and steelhead are produced to stock lower Columbia tributaries. The other is the State Fish Commission's Big Creek Salmon Hatchery near Knappa, producing salmon to assist in maintaining Columbia River runs.
There are three county park areas in varying stages of development along this Trail section. Blue Lake County Park, administered by Multnomah County, is a few miles east of Portland, between U.S. Highway 30 and the river. The highly developed park, close to Portland, receives very heavy recreation use, primarily because of its swimming area. Other recreation activities include picnicking, fishing, and boating.
The other two county parks are located in Clatsop County, one at the mouth of the John Day River with access to the Columbia River, and the other at Cullaby Lake, one mile off of U.S. Highway 101 between Astoria and Seaside. Both parks, in the process of being developed by the Clatsop County Park Department, will offer opportunities for picnicking, boating, fishing, and swimming. In addition, Cullaby Lake will provide for camping use.
There are several historical markers, monuments, and museums along this section of the Trail. In Astoria, Oregon, settled in 1811 by the American Fur Company, there are two museums, the Clatsop County Historical Museum, and the Columbia River Maritime Museum. The former contains pictures, references, and some material relating to the Expedition. Also located in Astoria, on a hill overlooking the city and the mouth of the Columbia River, is the Astoria Column. This 125-foot high column, with a viewpoint at the top reached by an inside stairway, features a frieze around the outside on which are depicted lower Columbia River historic events, including scenes of the Lewis and Clark Expedition. In the same park with the Column is a replica of an Indian canoe which is a memorial to a local Indian, Chief Comcomly, who was known to Lewis and Clark. The site of Fort Astoria, established in 1811, is within the city of Astoria. One of the bastions of the Fort has been rebuilt on the site and the area has been designated a National Historic Landmark.
In Seaside, at a location owned and administered by the Oregon Historical Society, local civic groups have erected a monument and constructed a replica showing how the salt cairn might have been operated by the Expedition. In Portland, a statue of Sacagawea and her infant son stands in Washington Park.
The State has erected four historic signs relating to the Expedition along highways in this section. One sign, on U.S. Highway 101 south of Astoria, describes Fort Clatsop, located less than a mile from that point. Another, on U.S. 101 near Cannon Beach, describes the trip across Tillamook Head by Clark and others to see the whale. The other two signs are located along U.S. Highway 30 west of Portland. One relates to Sauvie Island and the other to Deer Island, an Expedition campsite in March 1806. At Fort Stevens State Park, the State has erected a large sign which calls attention to nearby Fort Clatsop and refers to the Lewis and Clark Expedition.
The road access situation in this section is similar to the other two sections in Oregon except that Interstate Highways will have less effect. Interstate 80N from the east presently ends at Portland, and Interstate 5 from the south will cross the river at Portland on its way to Seattle. From Portland to Astoria, U.S. Highway 30 runs along the river. U.S. Highway 101, the coast highway, follows the route of the Expedition's explorations south along the coast as far as Cannon Beach, passing close to Fort Clatsop and to the salt cairn at Seaside. A railroad follows the Columbia in this section, too, running between the highway and the river.
There are four crossings of the Columbia River in this 120-mile section. Two are toll bridges: one across the river between Portland, Oregon, and Vancouver, Washington, and the other between Rainier, Oregon, and Longview, Washington. A toll ferry bridge connects Westport, Oregon, and Cathlamet, Washington, and a toll ferry plies between Astoria, Oregon, and Megler, Washington. When the bridge now under construction replaces the ferry at the mouth of the river, it is expected to result in a considerable increase in recreation travel along the coast.
Additional recreation areas are needed in this section, especially along the river between Portland and Astoria. The bridge at the mouth of the river will result in a greatly increased demand for recreation areas in that region, particularly along the coast. Access to the river and to the ocean beaches needs improvement. The Trail across Tillamook Head should be improved to permit greater use of this historic route. Similarly, the Trail from Fort Clatsop to the salt cairn at Seaside needs to be established so as to permit following this historic route.
Specific data necessary to make accurate projections of demands for recreation facilities along the Lewis and Clark Trail are not yet available. To a great extent, the demand for areas would depend on the interest aroused in the Trail and on the quality of effort made to identify, mark, and develop areas along the route. Since it is almost impossible to forecast such a demand, reliance must be placed on generalities and on population projections and travel trends for the State as a whole.
In 1960 the Bureau of the Census reported 1,768,687 inhabitants in the State of Oregon, an increase of nearly a quarter of a million, or 16.3 percent, over the 1950 census. Population projections indicate that the number of inhabitants will have risen to 4,013,000 by the year 2000.
Portland, with a population in 1960 of 372,676, is the largest city along the Trail in the Pacific Northwest. Other principal cities along the Expedition's route in Oregon, together with their 1960 population figures, are The Dalles, 10,493; Hood River, 3,657; St. Helens, 5,022; and Astoria, 11,239.
Oregon has a great variety of scenic attractions ranging from ocean beaches to mountain peaks. The annual influx of tourists from the eastern states and from California to the south is sizable and grows steadily each succeeding year. A very high percentage of the tourists from the east travel along the Columbia River, a natural route through the Cascades in use long before Lewis and Clark discovered its practicality. Tourists from the south primarily use two routes when passing through Oregon: either down the Willamette Valley to Portland and across the Columbia River bridge to Vancouver, or else along the coast to Astoria and across by ferry to the Washington shore. It is evident that the majority of tourists coming to or through Oregon will follow or cross the Lewis and Clark Trail. Consequently, the demand for recreation facilities along the Trail will increase each year as highway traffic increases.
One of Oregon's Interstate Highways, 80N, follows the route of the Expedition down the Columbia River from Boardman west to Portland, a distance of about 165 miles. Interstate 5, which comes north from California to Portland, crosses the Columbia River, and the route of the Expedition, on the bridge between Portland, Oregon, and Vancouver, Washington. Several sections of both of these Interstate Highway routes have been constructed. The third highway, Interstate 82, will begin at a junction with 80N in Pendleton, Oregon, and then head northwest for Seattle, Washington, crossing the Columbia River, and the Expedition route, just downstream from McNary Dam. These Interstate Highways will carry a high percentage of the tourist traffic to and through Oregon. Because of the high-speed traffic and limited-access features of these highways, however, they cannot be considered recreation roads.
In addition to the Interstate Highways, several Federal highways follow or intersect the Lewis and Clark Trail. U.S. 101 follows the Oregon coast from the California border north to Astoria. A bridge now under construction will replace the ferry which now carries U.S. 101 traffic across the mouth of the Columbia River. This new bridge, which is to be named in honor of the leaders of the Expedition, undoubtedly will increase highway tourist traffic in this locality and thus will also increase demand for recreation areas and facilities along the Trail near the mouth of the Columbia.
U.S. 101 goes through Cannon Beach, near the site of the whale incident, and through Seaside, where the salt cairn was located. It also passes within a half mile of the site of Fort Clatsop.
When Interstate 80N is completed, it will replace U.S. Highway 30 along the Columbia River between Boardman and Portland, but U.S. 30 will continue its present route downstream from Portland to Astoria, a distance of over 100 miles. Upstream from Boardman, U.S. Highway 730 follows the river for nearly 40 miles to the Washington-Oregon border.
The highways along the Columbia River in Oregon share with a railroad the limited space near the river. For most of this distance the railroad lies between the highway and the river, further complicating the problem of access to the river, or reservoirs, for recreation purposes.
East of Portland in the Columbia Gorge is a section of the first highway constructed along the Columbia River in Oregon between Portland and Hood River. The stretch between Troutdale and Dodson is a State scenic highway, affording a welcome change from the high speed, limited access highway which replaced it.
The principal problems affecting the recreation use of the Lewis and Clark Trail in Oregon are related to access. Although highways follow the Expedition route down the Columbia River to its mouth, access from these highways to the river is complicated by two factors. One is the existence of a railroad, usually between the highway and the river, which not only makes access a problem but also occupies much of the limited area suitable for recreation development along the river.
The other factor is the limited-access features of these highways. This is a particular problem in the case of Interstate Highway 80N, which follows the river route from Boardman to Portland, a distance of approximately 165 miles. Because of the high-speed traffic on the Interstate Highways and the safety features required, access to and from these highways is especially restricted.
The problem of access is difficult to solve in this instance since it involves more than merely acquiring land. For example, it is not always practical to cross over a railroad to reach a recreation area because of factors, such as safety and construction costs, that are involved. In some cases it might be more practical to enlarge existing areas that have access by acquiring adjoining land rather than attempt to obtain a new access point. In other instances the development of an area with frontage on the river or reservoir may be accomplished through provision of boat rather than road access.
Access to and from the Interstate and other limited access highways cannot always be obtained where it is needed because of safety factors and the high construction costs of overpasses, and acceleration and deceleration lanes. Consequently, full utilization must be made of existing access points by expanding existing areas where possible, and by the construction of service roads from a single access point to several nearby recreation sites when terrain conditions permit.
A basic development program for the historic, wildlife, and recreation resources along the Lewis and Clark Expedition route appears in the recommended program, page 20. The recommended routing of a Lewis and Clark Trail Highway in Oregon is indicated on maps 21-23. Specific recommendations relating only to Oregon follow:
1. The Corps of Engineers should work with State and local agencies to improve access to the three existing reservoirs on the Columbia River in Oregon and to provide additional recreation facilities. The planning for recreation developments and access on the John Day Dam and reservoir, now under construction, should take into account the increased demand for recreation facilities that may be expected as a result of the interest aroused in following and exploring the Lewis and Clark Trail.
2. The Bureau of Sport Fisheries and Wildlife should proceed with its planning for the proposed John Day Waterfowl Management Area, including the acquisition of needed additional lands, in order to provide waterfowl benefits and improve waterfowl hunting and other recreation opportunities.
3. The Bureau of Land Management should analyze the public domain lands along or near the Lewis and Clark Trail in order to determine the recreation potential of such lands and to provide for appropriate and timely development of recreation facilities on those lands that are suitable.
4. The Forest Service should continue its recreation planning, development, and administration program on Mt. Hood National Forest lands along the Lewis and Clark Trail, and when practicable, give priority to additional developments to help meet the present and anticipated demands on the Trail. It is further recommended that the Service give emphasis to its planning on the proposed scenic road along the top edge of the Columbia Gorge between Larch Mountain and Hood River, Oregon. This road would provide a scenic alternate to the Interstate Highway now being constructed along the floor of the Columbia River Gorge.
5. The State Parks and Recreation Division should proceed with its plans for an interpretive program related to the Lewis and Clark Trail at Ecola, Lewis and Clark, and Fort Stevens State Parks. It is further recommended that the Division improve and expand existing State recreation facilities, and acquire and develop additional recreation areas and access to the river and ocean along the trail in Oregon.
6. The Columbia River Gorge Commission should continue its program of preserving, developing, and protecting the recreation, scenic, and historic areas of the Columbia River Gorge with special emphasis on acquiring areas that provide access to the river or expand existing areas along the river.
7. The Oregon State Historical Society should provide the leadership in identifying and marking the Lewis and Clark Expedition campsites and other historic sites related to the Expedition in Oregon that have not yet been properly identified and marked.
8. The already existing Lewis and Clark Trail Advisory Committee should form the nucleus of the State Lewis and Clark Trail Committee and should undertake the development of an educational program for the Lewis and Clark Trail in Oregon.
| http://www.nps.gov/history/history/online_books/lewis-and-clark-trail/states.htm | 13
54 | The idea is very simple. The accelerometer and the compass both measure vectors fixed to the Earth. These vectors are different - gravity and ambient magnetic field - but we hope that their angle is constant because they are both fixed to the same coordinate system. But while the accelerometer is sensitive to the motion acceleration, the compass is not. If we manage to rotate and scale the magnetic field vector into the gravity vector, we have a reference while the accelerometer is subject to motion acceleration and we can extract the motion acceleration. Plus, we have a reliable gravity vector and an "in motion" indication.
So the process is the following:
- Calibrate the compass to the location and figure out the offsets we discussed in the limitation part. Also measure the reference gravity vector length.
- If we find that the acceleration vector's length is close enough to the reference gravity vector length, we assume that this means "no motion" (see the discussion of this issue here). We then find the rotation axis and angle that rotate the compass vector into the acceleration vector. We call these values the reference rotation axis, reference rotation angle, reference magnetic vector and reference acceleration vector, respectively. Keep updating these values as long as we are in the "no motion" state.
- If we find that the acceleration vector's length is significantly shorter or longer than the reference gravity vector length, we are in the "motion" state. Then we use the previously recorded reference values and the actual magnetic field value measured by the compass to calculate a simulated gravity vector. Using the measured acceleration vector and the simulated gravity vector, we extract the motion acceleration vector.
The steps are:
- Calculate the rotation operation that rotates the reference magnetic vector to the reference acceleration vector. The rotation operation has two components: an axis around which one vector is rotated to the other, and a rotation angle. The rotation axis is the vector cross product of the two vectors. The angle can be obtained by equating the two different representations of the two vectors' dot product and solving for the angle. This yields the reference rotation axis and reference rotation angle.
- The reference rotation axis nicely rotates the reference magnetic vector to the reference acceleration vector but we need to apply the rotation to the current magnetic vector. We need a new rotation axis for this and we obtain it by rotating the reference rotation axis by the rotation operation that rotates reference magnetic vector to the current magnetic vector. This yields a current rotation axis.
- Then we simply rotate the current magnetic vector around the current rotation axis by the reference rotation angle. This yields a simulated gravity vector. We also have to scale this vector so that its length equals the reference gravity vector length.
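Before looking at the actual Android program, here is a minimal plain-Java sketch of the vector math behind these steps. This is my own illustration, not the code from the post: the SensorManager plumbing and the calibration are omitted, degenerate cases (parallel vectors, zero-length cross products) are not handled, and all names such as simulatedGravity, refMag and curMag are invented for the example. It uses the cross product for the rotation axis, the dot-product identity for the angle, and Rodrigues' rotation formula for the rotations.

// Sketch of the simulated-gravity calculation described above (hypothetical names).
public class SimulatedGravity {

    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    static double norm(double[] a) {
        return Math.sqrt(dot(a, a));
    }

    static double[] scale(double[] a, double s) {
        return new double[] { a[0] * s, a[1] * s, a[2] * s };
    }

    // Angle between two vectors, from the dot-product identity a.b = |a||b|cos(theta).
    static double angleBetween(double[] a, double[] b) {
        return Math.acos(dot(a, b) / (norm(a) * norm(b)));
    }

    // Rodrigues' formula: rotate v around the unit axis k by angle theta.
    static double[] rotate(double[] v, double[] k, double theta) {
        double c = Math.cos(theta);
        double s = Math.sin(theta);
        double[] kxv = cross(k, v);
        double kdv = dot(k, v);
        double[] r = new double[3];
        for (int i = 0; i < 3; i++) {
            r[i] = v[i] * c + kxv[i] * s + k[i] * kdv * (1.0 - c);
        }
        return r;
    }

    // Builds the simulated gravity vector from the current magnetic vector, using the
    // reference vectors recorded during the "no motion" state.
    static double[] simulatedGravity(double[] refMag, double[] refAcc,
                                     double[] curMag, double refGravityLength) {
        // Step 1: reference rotation axis and angle (refMag -> refAcc).
        double[] c1 = cross(refMag, refAcc);
        double[] refAxis = scale(c1, 1.0 / norm(c1));
        double refAngle = angleBetween(refMag, refAcc);

        // Step 2: rotate the reference axis by the rotation that takes refMag to curMag,
        // which gives the current rotation axis.
        double[] c2 = cross(refMag, curMag);
        double[] magAxis = scale(c2, 1.0 / norm(c2));
        double magAngle = angleBetween(refMag, curMag);
        double[] curAxis = rotate(refAxis, magAxis, magAngle);

        // Step 3: rotate curMag around curAxis by the reference angle, then rescale
        // so the result has the reference gravity length.
        double[] g = rotate(curMag, curAxis, refAngle);
        return scale(g, refGravityLength / norm(g));
    }

    public static void main(String[] args) {
        double[] refAcc = { 0.0, 0.0, 9.81 };    // gravity while at rest (made-up values)
        double[] refMag = { 0.0, 22.0, -44.0 };  // magnetic field while at rest
        double[] curMag = { 5.0, 21.0, -44.0 };  // magnetic field after some rotation
        double[] g = simulatedGravity(refMag, refAcc, curMag, norm(refAcc));
        System.out.printf("simulated gravity: %.2f %.2f %.2f%n", g[0], g[1], g[2]);
    }
}

The motion acceleration is then obtained by subtracting this simulated gravity vector from the measured acceleration vector, component by component.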
Let's see, then, the example Android program that implements all this!
| http://mylifewithandroid.blogspot.com/2012/03/example-application-for.html | 13
86 | Early Quantum Theory:
As we know, an atom has protons and neutrons at its nucleus and electrons spinning around it. The atomic space is extremely empty. In other words, the sizes of electrons, protons, and neutrons are much smaller than the size of the atom itself. If electrons of needle-tip size spin over the surface of a basketball, then protons and neutrons of roughly the same tiny size sit at the nucleus in the center. The ratio of the radius of the atom (the radius of the basketball) to the radius of an electron, proton, or neutron (a needle tip) is on the order of 10,000 to 100,000, closer to 100,000.
The motion of electrons around the atom is associated with K.E. If (v) is the average speed of an electron as it spins around the nucleus at a certain average radius (r), its K.E. = ½Mv². An electron, being a negative charge, is also in the electric field of the positive nucleus. The P.E. of a charge q1 in the field of another charge q2 is Ue = -kq1q2/r, where k = 8.99 x 10⁹ N·m²/C² is the Coulomb's constant. For the proton and electron of a hydrogen atom, Ue = -ke²/r.
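As a quick numerical illustration (a sketch added here, not part of the original text; the class and variable names are my own), the potential energy of the electron at the hydrogen radius derived below, r of about 0.53 x 10⁻¹⁰ m, works out to roughly -27 eV, twice the magnitude of the -13.6 eV total energy used in the next part:

// Rough check of the electron's potential energy in hydrogen (illustrative only).
public class HydrogenPotentialEnergy {
    public static void main(String[] args) {
        double k = 8.99e9;    // Coulomb's constant, N*m^2/C^2
        double e = 1.6e-19;   // elementary charge, C
        double r = 0.53e-10;  // hydrogen radius in meters (derived below)
        double ueJoules = -k * e * e / r;
        double ueEV = ueJoules / 1.6e-19;
        System.out.printf("P.E. = %.2e J = %.1f eV%n", ueJoules, ueEV);
        // Prints roughly -27 eV; the kinetic energy is about +13.6 eV,
        // so the total energy is the -13.6 eV ionization energy used below.
    }
}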
It is possible to determine the whereabouts of the electrons around the nucleus of an atom, and also the way they are oriented, by solving different case problems in an energy-balancing way. This way of determining the size and shape of an "orbital" (the space around the nucleus of an atom where electrons can be traced) is the basis for quantum mechanics calculations. Here, we are going to discuss the hydrogen atom, the simplest one.
Calculation of the Radius of the Hydrogen Atom,
The Bohr Model
It can be experimentally verified that it takes 13.6 eV of energy to remove the electron from a hydrogen atom when it is in its ground state (closest to the nucleus). If the electron is somehow moved to higher shells, it takes less energy to remove it from its atom. This means that if the electron of a hydrogen atom is in its ground state, it takes at least 13.6 eV to detach it from its atom. The electron energy is the sum of its K.E. and P.E. We may write:
P.E. + K.E. = -13.6 eV, or
-ke²/r + ½Mv² = -13.6 eV    (1)
where v² can be found by understanding that the Coulombic force between the proton and electron of the hydrogen atom provides the necessary centripetal force for the electron's spin around the proton.
Equating the expressions for these two forces yields:
ke²/r² = Mv²/r.
Solving for v² yields: v² = ke²/(rM)    (2)
Substituting (2) in (1) changes (1) to:
-ke²/r + ½M(ke²/(rM)) = -13.6 eV    (3)
M cancels and we get:
-(½)ke²/r = -13.6 (1.6 x 10⁻¹⁹) joules.
Substituting for k and e yields the value for r:
r = 0.53 x 10⁻¹⁰ m. The diameter of the H-atom is therefore
1.06 x 10⁻¹⁰ m.
Note that 1 x 10⁻¹⁰ m is defined as 1 angstrom, shown as 1 Å.
Here, we define another unit of length called the "angstrom," equal to 10⁻¹⁰ meter. This is 10 times smaller than one nanometer. For example, the red and violet wavelengths of 700 nm and 400 nm are 7000 and 4000 angstroms, written as 7000 Å and 4000 Å.
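The radius just derived can be checked with a few lines of code (an illustrative sketch, not from the original text): solving -(½)ke²/r = -13.6 eV for r gives r = ke²/(2 x 13.6 eV).

// Solving -(1/2)ke²/r = -13.6 eV for r, as in equation (3) above (illustrative sketch).
public class BohrRadius {
    public static void main(String[] args) {
        double k = 8.99e9;                    // Coulomb's constant, N*m^2/C^2
        double e = 1.6e-19;                   // elementary charge, C
        double ionizationJ = 13.6 * 1.6e-19;  // 13.6 eV converted to joules
        double r = k * e * e / (2.0 * ionizationJ);
        System.out.printf("r = %.2e m (about 0.53 angstrom)%n", r);
        System.out.printf("diameter = %.2e m%n", 2.0 * r);
    }
}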
As electrons spin around the nuclei of atoms, they receive energy by many means. If an electron receives energy, its K.E. increases and it therefore has to change its orbit and jump to an orbit of a greater radius. A higher energy level of the electron corresponds to a greater radius of rotation. The possible orbits are discrete positions, not a continuum. The reason for this discreteness is the fact that an electron's motion must fit into a path that is an integer multiple of a certain wavelength related to its energy. Look at the following figure, where 3λ1 and 4λ2 are fit into orbits of radii r1 and r2.
The energy levels are discrete as well as the radii. When, for whatever reason, an electron jumps from a certain level to a higher energy level, its original energy level (orbit) is left vacant. That must be filled, not necessarily with the same electron, but with any other electron that can lose the correct amount of energy. When a more energized electron of the m-th level that has an energy of Em goes to a lower energy level En, its excess energy (Em –En ) appears as a photon of electromagnetic radiation that has an energy of (hf).
Em – En = hf
where h = 4.14 x 10⁻¹⁵ eV·s is Planck's constant. That is how light is generated.
When hydrogen, helium, or any other gas is under a high enough voltage, its electrons separate from the nuclei of its atoms and are pulled toward the positive pole of the external source, while the positive (ionized) nuclei move toward the negative pole. During this avalanche-like motion in opposite directions, many different recombinations and separations between the electrons and nuclei occur. These transitions from many different levels to many other levels generate many different (hf)s and (λ)s of different colors, some of which fall in the visible range. The gas becomes hot because of these transitions. A sufficiently hot gas emits light. The spectrum of a hot (excited) gas is called an "emission spectrum". The same gas, when cold, absorbs all such emissions. A cold gas has an "absorption spectrum". As a demo, we are going to observe the emission spectra of (H) and (He) gases in our physics lab. A high-voltage source for gas excitation, a low-pressure H-tube and He-tube, as well as a spectrometer are needed. The following equations give the radius and the energy corresponding to the different levels of an atom, but only for atoms or ions that have a single electron.
rn = (0.53 x 10⁻¹⁰ m) n²/Z
En = -(13.6 eV) Z²/n²
For atoms that have more than one electron around them, the calculations are more difficult and involved, and require higher levels of mathematics. In such calculations, the repulsion of the electrons and their interactions must be taken into consideration.
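As an illustration of the two formulas above together with Em - En = hf (a sketch I have added, not part of the original text), the following computes a few hydrogen (Z = 1) levels and the wavelength of the photon emitted when an electron drops from the third level to the second:

// Hydrogen-like radii and energy levels, plus one emitted-photon wavelength (illustrative sketch).
public class EnergyLevels {
    static double energyEV(int n, int z) {   // En = -13.6 eV * Z^2/n^2
        return -13.6 * z * z / (double) (n * n);
    }
    static double radiusM(int n, int z) {    // rn = 0.53e-10 m * n^2/Z
        return 0.53e-10 * n * n / (double) z;
    }
    public static void main(String[] args) {
        int z = 1;
        for (int n = 1; n <= 4; n++) {
            System.out.printf("n=%d:  r = %.2e m,  E = %.2f eV%n",
                              n, radiusM(n, z), energyEV(n, z));
        }
        double h = 4.14e-15;                              // Planck's constant, eV*s
        double c = 3.00e8;                                // speed of light, m/s
        double deltaE = energyEV(3, z) - energyEV(2, z);  // about 1.89 eV
        double f = deltaE / h;
        double lambda = c / f;
        System.out.printf("E3 - E2 = %.2f eV, lambda = %.0f nm%n", deltaE, lambda * 1e9);
        // Close to 656 nm, the red line seen in the hydrogen emission spectrum demo.
    }
}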
Chapter 40 Test Yourself 1:
1) In most atoms, the ratio of the atomic radius to the electron radius is close to
(a) 100 (b) 10,000 (c) 100,000 (d) 10
2) The atomic size is the size of
(a) nucleus. (b) neutron. (c) space determined by the electronic cloud. (d) electron itself.
3) The electronic cloud in a hydrogen atom is caused by
(a) a mixture of a large number of electrons randomly flowing around the nucleus.
(b) a single electron spinning around the nucleus so fast that it appears everywhere.
(c) a fuzz of dust. (d) two electrons orbiting opposite to each other's direction.
4) The electronic cloud in a H2 molecule is caused by
(a) a mixture of a large number of electrons randomly flowing around the nuclei.
(b) two extremely-fast-moving electrons spinning around the nuclei in opposite directions repelling each other.
(c) a fuzz of dust. (d) an electron orbiting opposite to proton's motion.
5) The energy of an atom is the energy of its
(a) protons. (b) neutrons. (c) electrons with respect to its protons.
(d) the K.E. of its electrons plus the P.E. of its electrons with respect to its nucleus.
6) The formula for the P.E. of charge q1 with respect to charge q2 a distance r from it is
(a) -kq1q2/r² (k is the Coulomb's constant.) (b) -kq1q2/r. (c) -kq1/r². (d) -kq2/r².
7) If v is the average speed of the electron, then its K.E. is
(a) Mev, where Me is the electron mass. (b) Mev². (c) ½Mev². (d) Megh.
8) 13.6 eV is the
(a) minimum energy for ionization of a hydrogen atom from its ground state.
(b) maximum energy for ionization of a hydrogen atom from its ground state.
(c) average energy for ionization of a hydrogen atom.
(d) average energy for ionization of all hydrogen atoms in a narrow tube.
9) The P.E. of the electron in a hydrogen atom is
(a) -ke²/r² (k is the Coulomb's constant). (b) -ke²/r. (c) -k/r². (d) -k²/r².
10) Using the Coulomb force (F = ke²/r²) between the electron and the proton of a hydrogen atom as the centripetal force for its electron rotation, we may write:
(a) ke²/r² = Mv²/r. (b) ke²/r² = Mv/r. (c) ke²/r = Mv²/r. (d) ke²/r² = Mgh.
11) The diameter of hydrogen atom is approximately
(a) 1 nm. (b) 10nm. (c) 0.1nm. (d) 0.2nm.
12) The radius of hydrogen atom is approximately
(a) 1 Å. (b) 10 Å. (c) 0.1 Å. (d) 0.53 Å.
13) The wavelength of red light is
(a) 40 nm. (b) 400 nm. (c) 4000 nm. (d) 700 nm.
14) The wavelength of violet light is
(a) 70 Å. (b) 700 Å. (c) 7000 Å. (d) 4000 Å.
15) An electron around the nucleus of an atom orbits at
(a) a fixed radius that never changes.
(b) a variable radius that can have any value.
(c) a variable radius that can have certain discrete values.
16) The radius at which an electron orbits the nucleus of an atom must be such that
(a) the P.E. of the electron equals its K.E.
(b) the P.E. of the electron equals ½ its K.E.
(c) ½ the P.E. of the electron equals its K.E.
(d) an integer number of a certain wavelength fits in its wavy path of motion.
17) The reason for discreteness of possible positions (radii) for the electron orbit around atoms is that
(a) electrons have wavy motion as they orbit the nucleus and the wavelength of their wavy motion must fit a whole number of times in their circular path.
(b) electron position is fixed. (c) electrons repel each other. (d) P.E. is discrete by itself.
18) The discreteness of electron energy (energy levels) is because of
(a) the discreteness of electronic levels (radii of orbiting).
(b) the discreteness of P.E.. (c) the discreteness of K.E.. (d) equalization of P.E. and K.E. around atoms.
19) Light is generated when
(a) a higher energy electron fills a vacant energy level and loses some energy.
(b) an electron is energized and moves to a higher possible orbit.
(c) an electron stops moving. (d) none of the above.
20) The color of an emitted photon of light depends on
(a) the energy difference between the energy of an excited electron and the energy of the level it fills up.
(b) its frequency of occurrence. (c) its wavelength. (d) all of the above.
Particles and Waves:
We have so far discussed two behaviors of light: straight-line motion (Geometric Optics) and the wave-like behavior and interference (Wave Optics). In this chapter, the particle-like behavior of light will be discussed. In fact, the particle-like behavior is also associated with a frequency and it cannot be separated from the wave-like behavior.
Max Planck formulated the theory that, as electrons orbit the nucleus of an atom, they receive energy from the surroundings in different forms. Typical forms are heat waves, light waves, and collisions with other electrons and particles. The radius at which an electron orbits is a function of the electron's K.E. and therefore of the electron's speed. Recall K.E. = ½Mv². Each electron is also under a Coulomb attraction force from the nucleus given by F = ke²/r². Furthermore, circular motion requires a centripetal force Fc = Mv²/r. We know that it is the Coulomb force F that provides the necessary centripetal force Fc for the electron's spin.
The above discussion clarifies that, in the simplest explanation, each electron takes a certain radius of rotation depending on its energy or speed. When an electron receives extra energy, it then has to change its orbit or radius of rotation. It has to take an orbit of greater radius. The radius it takes is not just any radius. When such a transition occurs, a vacant orbit is left behind that must be filled. It may be filled by the same electron or any other one. The electron that fills that vacant orbit must have the correct energy that matches the energy of that orbit. The electron that fills that orbit may have excess energy that has to be given off before being able to fill that vacant orbit. The excess energy that an electron gives off appears as a burst of energy, a parcel of energy, a packet of energy, or a quantum of energy, according to Max Planck.
The excess energy is simply the energy difference between two different orbits. If an electron returns from a greater-radius orbit Rm with an energy level Em to a smaller-radius orbit Rn with an energy level En, it releases a quantum of energy equal to the energy difference Em - En. Planck showed that this energy difference is proportional to the frequency of occurrence (f) of the released quantum or packet of energy. The proportionality constant is h, with a value of h = 6.626 x 10⁻³⁴ J·s, called the "Planck's constant." The packet or quantum of energy is also called a "photon."
In electron-volts, (h) has a value of h = 4.14 x 10⁻¹⁵ eV·s. Planck's formula is:
Em - En = hf or, ΔE = hf
Example 1: Calculate (a) the energy of photons whose frequency of occurrence is 3.2 x 10¹⁴ Hz, (b) their corresponding wavelength, and (c) whether they are in the visible range.
Solution: (a) ΔE = hf; ΔE = (6.626 x 10⁻³⁴ J·s)(3.2 x 10¹⁴ /s) = 2.12 x 10⁻¹⁹ J
Note that 1 eV = 1.6 x 10⁻¹⁹ J. Our answer is a little more than 1 eV. In fact it is (2.12/1.6) = 1.3 eV.
(b) c = fλ; λ = c/f; λ = (3.00 x 10⁸ m/s)/(3.2 x 10¹⁴ /s) = 9.4 x 10⁻⁷ m = 940 x 10⁻⁹ m = 940 nm
(c) The visible range is 400 nm to 700 nm; 940 nm is not in that range. It is infrared.
Example 2: Calculate (a) the energy (in Joules) of each photon of ultraviolet light whose wavelength is 225 nm. (b) Convert that energy to electron-volts.
Solution: (a) ΔE = hf = hc/λ ; ΔE = ( 6.626x10^-34 J.s )(3.00x10^8 m/s) / 225x10^-9 m = 8.83x10^-19 J.
(b) Since 1 eV = 1.6x10^-19 J, this is ΔE = 5.5 eV.
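To make these conversions easy to reproduce, here is a minimal Python sketch (added for illustration, not part of the original chapter) that carries out the same photon energy and wavelength calculations as Examples 1 and 2; the constant values match those quoted above, and the function names are simply illustrative.

```python
# Photon energy/wavelength calculations (Planck relation), as in Examples 1 and 2.
H = 6.626e-34      # Planck's constant, J.s
C = 3.00e8         # speed of light, m/s
EV = 1.6e-19       # joules per electron-volt

def photon_energy_from_frequency(f_hz):
    """Return photon energy in joules for a given frequency (Hz): E = hf."""
    return H * f_hz

def wavelength_from_frequency(f_hz):
    """Return wavelength in meters: lambda = c / f."""
    return C / f_hz

def photon_energy_from_wavelength(lambda_m):
    """Return photon energy in joules: E = hc / lambda."""
    return H * C / lambda_m

# Example 1: f = 3.2x10^14 Hz
E1 = photon_energy_from_frequency(3.2e14)
print(E1, "J =", E1 / EV, "eV")                       # ~2.1e-19 J, ~1.3 eV
print(wavelength_from_frequency(3.2e14) * 1e9, "nm")  # ~940 nm (infrared)

# Example 2: lambda = 225 nm
E2 = photon_energy_from_wavelength(225e-9)
print(E2, "J =", E2 / EV, "eV")                       # ~8.8e-19 J, ~5.5 eV
```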
The mechanism by which the photoelectric effect operates may be used to verify the particle-like behavior of light. A photoelectric cell is made of a vacuum tube in which two metallic plates or poles are fixed. The two plates are connected to two wires that come out of the sealed glass tube and are used for connection to other electronic components. For the time being, let us connect a photoelectric cell to just a galvanometer (a sensitive ammeter) as shown in the figure below. One terminal (plate) in the tube may be mounted in a slanted way in order for the light coming from outside to shine effectively on it. This side forms the negative pole. The other side collects or receives electrons and forms the positive pole.
When photons of light are sent toward the slanted metal plate, it is observed that the galvanometer in the circuit shows the passage of a current. When the light is cut off, the current stops. This shows that the collision of photons of light on the metal surface must release electrons from the outer shells of the outermost atomic layers of the metal oxide coating.
Each energetic photon that collides with the metal surface releases one electron. This released electron has some speed and therefore some K.E. = (1/2)Mv^2. The atoms of the outer surface that have lost electrons replenish their electron deficiencies from the inner-layer atoms of the metal oxide. This replenishing process transmits layer by layer through the wire and the galvanometer all the way to the other pole, making it "positive." The positive end then pulls the released electrons from the negative end through the vacuum tube, and the circuit completes itself. This process occurs very fast. As soon as light hits the metal plate, the circuit is on. As soon as light is cut off, the circuit goes off.
The conclusion of the above experiment is that photons of light act as particles and kick electrons out of their orbits. This demonstrates the particle-like behavior of light and provides a verification that energy is "quantized."
Photoelectric Effect Formula:
The energy necessary to just detach an electron from a metal surface is called the "Work Function" of that metal and is shown by Wo. Since, according to Planck's formula, the energy of each incident photon on the metal surface is hf, and the kinetic energy of the released electron is K.E., we may write the following energy balance for a photoelectric cell:
hf = Wo + K.E.
According to this equation, hf must be greater than Wo for an electron to be released. Since h is a constant, f must be high enough for the photon to be effective. There is a limiting frequency below which nothing happens. That limit occurs when the frequency of the incident photon is just enough to release an electron; such a released electron has K.E. = 0. At this limiting frequency, called the "threshold frequency," the kinetic energy of the released electron is zero. Setting K.E. = 0 and replacing f by fth, we get:
h fth = Wo or fth = Wo / h.
The above formula gives the threshold frequency, fth .
Example 3: The work function of the metal plate in a photoelectric cell is 1.73eV. The wavelength of the incident photons is 366nm. Find (a) the frequency of the photons, (b) the K.E. of the released electrons, and (c) the threshold frequency and wavelength for this photoelectric cell.
Solution: (a) c = fλ ; f = c/λ = (3.00x10^8 m/s) / (366x10^-9 m) = 8.20x10^14 Hz
(b) hf = Wo + K.E. ; K.E. = hf - Wo
K.E. = ( 4.14x10^-15 eV-s )( 8.20x10^14 /s ) - 1.73 eV = 1.66 eV
(c) fth = Wo / h ; fth = 1.73 eV / (4.14x10^-15 eV-s) = 4.18x10^14 Hz
λth = c / fth ; λth = (3.00x10^8 m/s) / (4.18x10^14 Hz) = 718 nm
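The energy balance above lends itself to a short calculation routine. The following Python sketch (an illustration added here, not part of the original lesson) reproduces Example 3 using hf = Wo + K.E. and fth = Wo/h; the parameter names are my own.

```python
# Photoelectric effect: hf = Wo + K.E., threshold frequency fth = Wo / h.
H_EV = 4.14e-15   # Planck's constant in eV.s
C = 3.00e8        # speed of light, m/s

def photoelectric(work_function_ev, wavelength_m):
    """Return (photon frequency Hz, electron K.E. eV, threshold frequency Hz, threshold wavelength m)."""
    f = C / wavelength_m                 # frequency of the incident photons
    ke = H_EV * f - work_function_ev     # kinetic energy of released electrons (eV)
    if ke < 0:
        ke = 0.0                         # below threshold: no electrons are released
    f_th = work_function_ev / H_EV       # threshold frequency
    return f, ke, f_th, C / f_th

# Example 3: Wo = 1.73 eV, lambda = 366 nm
f, ke, f_th, lam_th = photoelectric(1.73, 366e-9)
print(f"f = {f:.3g} Hz, K.E. = {ke:.3g} eV")                      # ~8.20e14 Hz, ~1.66 eV
print(f"f_th = {f_th:.3g} Hz, lambda_th = {lam_th*1e9:.0f} nm")   # ~4.18e14 Hz, ~718 nm
```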
According to de Broglie, for every moving particle of momentum Mv, we may associate an equivalent wavelength λ describing its wave-motion behavior such that
λ = h / (Mv)
where λ is called the "de Broglie wavelength."
Example 4: Calculate (a) the de Broglie wavelength associated with the motion of an electron that orbits a hydrogen atom at a speed of 6.56x10^6 m/s, and (b) compare the calculated wavelength with the diameter of the hydrogen atom.
Solution: (a) Using λ = h/Mv, we may write: λ = (6.626x10^-34 J.s) / [(9.108x10^-31 kg)(6.56x10^6 m/s)] = 1.11x10^-10 m.
(b) The diameter of the hydrogen atom is 1.06x10^-10 m, almost equal to the calculated de Broglie wavelength in Part (a). This shows why the electron of each hydrogen atom has a wavy motion around its nucleus. Note that for electron speeds higher than the 6.56x10^6 m/s given in this example, λ will be smaller, and a greater number of such λ's will fit in each circular path that the electron may have around the nucleus.
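A short Python sketch (added here for illustration) reproduces this de Broglie calculation; the electron mass constant is the value used in Example 4.

```python
# de Broglie wavelength: lambda = h / (M v)
H = 6.626e-34            # Planck's constant, J.s
M_ELECTRON = 9.108e-31   # electron mass, kg (value used in Example 4)

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    """Return the de Broglie wavelength in meters for a particle of given mass and speed."""
    return H / (mass_kg * speed_m_per_s)

# Example 4: electron at 6.56x10^6 m/s
lam = de_broglie_wavelength(M_ELECTRON, 6.56e6)
print(f"lambda = {lam:.3g} m")   # ~1.11e-10 m, about the diameter of a hydrogen atom
```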
The Compton Effect:
The Compton effect is another boost to the idea that energy of a system is "quantized." The quantization of energy means that energy coming out of an atom is in discrete parcels or packets that are called quanta (the plural of quantum). As was mentioned in the photoelectric effect, each packet of energy or "photon" could release one electron in the form of a collision. Each photon carries a quantum of energy or hf amount of energy.
In the Compton effect, also called "Compton scattering," a high-energy photon of wavelength λ collides with an electron, causing the release of another photon that is less energetic (longer wavelength, λ'). The electron is dislodged and given a higher kinetic energy and a different momentum because of the collision with the incident photon. Photons are treated as having zero rest mass and simply carry a parcel or quantum of energy; nevertheless, for every photon an equivalent mass may be calculated according to the Einstein mass-energy conversion formula ( E = Mc^2 ).
The figure on the right shows a photon of wavelength λ that collides with an electron (in the general case of an oblique collision), causing the electron to move sideways through angle φ while the newly emitted photon λ' moves along angle θ. It is easy to show that the change in wavelength λ' - λ is given by:
λ' - λ = [h / (mo c)] (1 - cos θ)
where mo is the rest mass of the electron. The quantity h/(mo c) = 0.00243 nm is called the Compton wavelength.
Classically, the target charge (here the electron) should oscillate at the received frequency and re-radiate at the same frequency. Compton found that the scattered radiation had two components, one at the original wavelength of 0.071nm, and the other at a longer wavelength that depended on the scattering angle θ and not on the material of the target.
This means that the incident photons ( λ ) must be colliding with electrons, since the result did not depend on the target material. Also, since the energy of the incident photons was about 20 keV, far above the work function of any material, the electrons were treated as if they were "free electrons."
To derive this formula, both conservation of energy and conservation of momentum must be applied.
Note that the momentum of a photon may be found from the de Broglie formula
λ = h/Mc or Mc = h/λ
The momentum of a photon is p = Mc where M is the mass equivalent of the photon energy; thus,
p = h/λ.
The energy and momentum balance equations for the collision are (taking the electron initially at rest, with pe its momentum and K.E.e its kinetic energy after the collision):
Energy: hc/λ = hc/λ' + K.E.e
Momentum (x): h/λ = (h/λ') cos θ + pe cos φ
Momentum (y): 0 = (h/λ') sin θ - pe sin φ
Example 5: Photons of wavelength 5.00 Å pass through a layer of thin zinc. Find the wavelength of the scattered photons for a scattering angle of 64.0 degrees.
Solution: λ' - λ = (h/mo c) (1 - cos θ) = (0.00243 nm) ( 1 - cos 64.0° ) = 0.00136 nm = 0.0136 Å
λ' = λ + 0.0136 Å = 5.014 Å
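For completeness, here is a small Python sketch (illustrative, not part of the original chapter) that evaluates the Compton shift formula for Example 5:

```python
import math

# Compton scattering: lambda' - lambda = (h / (mo c)) * (1 - cos(theta))
COMPTON_WAVELENGTH_NM = 0.00243   # h / (mo c) in nanometers

def compton_shift_nm(theta_degrees):
    """Return the wavelength shift (in nm) of a photon scattered through theta_degrees."""
    return COMPTON_WAVELENGTH_NM * (1 - math.cos(math.radians(theta_degrees)))

# Example 5: lambda = 5.00 Angstrom = 0.500 nm, theta = 64.0 degrees
shift_nm = compton_shift_nm(64.0)
lambda_prime_nm = 0.500 + shift_nm
print(f"shift = {shift_nm:.5f} nm, scattered wavelength = {lambda_prime_nm*10:.3f} Angstrom")
# shift ~ 0.00136 nm, scattered wavelength ~ 5.014 Angstrom
```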
As was mentioned earlier, when an electron in an atom receives some energy by any means, it moves to a greater-radius orbit whose energy level fits that electron's energy. Such an atom is then said to be in an excited state. The excited state is unstable, however, and the electron returns to lower levels by giving off its excess energy in the form of electromagnetic radiation (visible light is a small part of the E&M wave spectrum). Max Planck showed that the frequency of occurrence ( f ) of a particular transition between two energy levels in an atom depends on the energy difference between those two levels.
En - Em = hf
In this formula En is the energy of the n-th level, Em the energy of the m-th level (lower than the n-th level), and h = 4.14x10^-15 eV-sec is Planck's constant. f is the frequency of the released photon.
Possibilities for the occurrence of an electron jump from one level to other levels are numerous. It depends on the amount of energy an electron receives. An electron can get energized when a photon hits it, or when it is passed by another, more energetic electron that repels it, or by any other means. The electron's return can occur in one step or many steps depending on the amount(s) of energy it loses. In each possibility, the red arrow shows the electron going to a higher energy level, and the black arrows show possible return occurrences.
Hydrogen is the simplest atom. It has one proton and one electron. Click on the following applet for a better understanding of the transitions: http://www.colorado.edu/physics/2000/quantumzone/lines2.html . In this applet, if you click on a higher orbit than where the electron is orbiting, a wave signal must be received by the electron (from outside) to give it energy to go to that higher level. If the electron is already in a higher orbit and you click on a lower orbit, then the electron loses excess energy and gives off a wave signal before going to that lower orbit.
Also click on the following link: http://www.walter-fendt.de/ph14e/bohrh.htm and try both options of "Particle Mode" and "Wave Mode". You can put the mouse on the applet near or exactly on any circle and change the orbit of the electron to anywhere you wish; however, there are only discrete allowed orbits, each of whose circumferences is an integer multiple of a certain wavelength. It is at those special orbits that the applet shows principal quantum numbers for the electron on the right side.
For the hydrogen atom, possible transitions from the ground state (E1) to the 2nd state (E2), 3rd state (E3), and 4th state (E4) are shown in Fig. 1. The possibilities for electron return are also shown. The greater the energy difference between two states, the more energetic the released photon is when an excited electron returns to lower orbits. If the return is very energetic, the wavelength may be too short to fall in the visible range and cannot be seen in the spectroscope. Some transitions are weak and result in longer wavelengths in the infrared region that cannot be seen either. However, some intermediate-energy transitions fall in the visible range and can be seen.
Grouping of the Transitions:
Transitions made from higher levels to the first orbit form the Lyman Series.
Transitions made from higher levels to the second orbit form the Balmer Series.
Transitions made from higher levels to the third orbit form the Paschen Series.
Transitions made from higher levels to the fourth orbit form the Brackett Series.
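To connect these series to actual wavelengths, the Python sketch below (added for illustration) uses the standard Bohr-model energy levels for hydrogen, En = -13.6 eV / n^2, a standard result that is assumed here and not derived in the text above, together with ΔE = hf and c = fλ:

```python
# Wavelengths of hydrogen emission lines, grouped by series (Lyman, Balmer, ...).
# Uses the standard Bohr-model levels En = -13.6 eV / n^2 (assumed; not derived in the text).
H_EV = 4.14e-15   # Planck's constant, eV.s
C = 3.00e8        # speed of light, m/s

def energy_level_ev(n):
    """Bohr-model energy of level n for hydrogen, in eV."""
    return -13.6 / n**2

def transition_wavelength_nm(n_upper, n_lower):
    """Wavelength (nm) of the photon emitted when the electron drops from n_upper to n_lower."""
    delta_e = energy_level_ev(n_upper) - energy_level_ev(n_lower)   # En - Em = hf
    f = delta_e / H_EV
    return C / f * 1e9

series = {"Lyman (to n=1)": 1, "Balmer (to n=2)": 2, "Paschen (to n=3)": 3, "Brackett (to n=4)": 4}
for name, n_low in series.items():
    lines = [round(transition_wavelength_nm(n_low + k, n_low)) for k in range(1, 4)]
    print(name, "first three lines (nm):", lines)
# The Balmer lines (~658, 487, 435 nm with these rounded constants) fall in the visible range;
# Lyman is ultraviolet, while Paschen and Brackett are infrared.
```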
Emission and Absorption Spectra
A hot gas emits light because of the energy it receives by any means to stay hot. As was mentioned earlier, the energy received by an atom sends its electrons to higher levels, and on their return, the electrons emit light at different wavelengths. The emitted wavelengths can be observed in a prism spectrometer in the form of a few lines of different colors. Each element has its own unique spectral lines that can be used as an ID for that element. Such a spectrum coming from a hot gas is called an emission spectrum. For a hot gas the spectral lines are discrete.
For white light entering a spectrometer the spectrum is a continuous band of rainbow colors. This continuous band of colors in a spectrometer ranges from violet to red and gives the following colors: violet, blue, green, yellow, orange, and red. Light emitted from the Sun contains so many different colors (or electronic transitions) that its spectrum gives a variety of colors changing gradually from violet to red. It contains so many different violets, blues, greens, yellows, oranges, and reds that it appears continuous.
Chapter 40 Test Yourself 2:
1) The energy of a photon of light, according to Max Planck's formula, is (a) E = 1/2Mv^2. (b) E = hf. (c) E = Mgh.
2) Planck's constant, h, is (a) 6.6262x10^-34 J.sec. (b) 4.14x10^-15 eV.sec. (c) both a & b.
3) An electron orbiting the nucleus of an atom can be energized by (a) receiving a heat wave. (b) colliding with another subatomic particle. (c) getting hit by a photon. (d) a, b, & c.
4) When an electron is energized by any means, it requires (a) a greater radius of rotation. (b) a smaller radius of rotation. (c) it stays in the same orbit but spins faster.
5) When there is a vacant orbit, it will be filled with an electron from (a) a lower orbit. (b) a higher orbit.
6) A higher orbit means (a) a greater radius. (b) a faster moving electron. (c) a greater energy. (d) a, b, and c.
7) The excess energy an electron in a higher orbit has is released in the form of a photon (a small packet or burst of energy) as the electron fills up a lower orbit. (a) True (b) False
8) The excess energy is (a) the energy difference, E2 - E1, of the higher and lower orbits. (b) the energy each electron has anyway. (c) both a & b.
9) A photon has a mass of (a) zero. (b) 1/2 of the mass of an electron. (c) neither a nor b.
10) Each photon carries a certain amount of energy. We may use the Einstein formula (E = Mc^2) and calculate an equivalent mass for a photon. (a) True (b) False
11) The greater the energy of a photon, (a) the higher its speed. (b) the higher its velocity. (c) the higher its frequency. (d) a, b, & c.
12) The greater the energy of a photon, the shorter its wavelength. (a) True (b) False
13) The formula for wave speed, v = fλ, takes the form c = fλ (a) for photons of visible light only. (b) for photons of non-visible light only. (c) for the full spectrum of E&M waves, of which visible light is a part.
Problem: A student has calculated a frequency of 4.8x1016 Hz for a certain type of X-ray and a wavelength of 7.0nm.
14) Use the equation v = fλ and calculate v to see if the student's calculations are correct. (a) Correct (b) Wrong
15) The answer to Question 14 is (a) 3.36x10^8 m/s. (b) 3.36x10^17 m/s. (c) neither a nor b.
16) The reason why the answer to Question 14 is wrong is that v turns out to be greater than the speed of light in vacuum, which is 3.0x10^8 m/s. (a) True (b) False
17) In the photoelectric effect, (a) electrons collide and release photons. (b) photons collide and release electrons. (c) neither a nor b.
18) In a photoelectric cell, the plate that receives photons becomes (a) negative. (b) positive. (c) neutral.
19) The reason why the released (energized) electrons do not return to their shells is that (a) their energies are more than enough for the orbits they were in. (b) the orbits (of the atoms of the metal plate) that have lost electrons quickly replenish electrons from the inner-layer atoms of the metal plate. (c) the outer shells that have lost electrons will be left at a loss forever. (d) a & b.
20) When light is incident on the metal plate of a photoelectric cell, the other pole of the cell becomes positive. The reason is that (a) photons carry negative charges. (b) the other pole loses electrons to replenish the lost electrons of the metal plate through the outside wire that connects it to the metal plate. (c) both a & b.
21) In a photoelectric cell, the released electrons (from the metal plate as a result of incident photons), (a) vanish in the vacuum of the cell. (b) accelerate toward the other pole because of the other pole being positive. (c) neither a nor b.
22) The negative current in the external wire of a photoelectric cell is (a) zero. (b) from the metal plate. (c) toward the negative plate.
23) In a photoelectric cell, the energy of an incident photon is (a) 1/2Mv^2. (b) hf. (c) Wo.
24) In a photoelectric cell, the work function of the metal plate is named (a) 1/2Mv^2. (b) hf. (c) Wo.
25) In a photoelectric cell, the energy of each released electron is (a) 1/2Mv^2. (b) hf. (c) Wo.
26) A 5.00-eV incident photon has a frequency of (a) 1.21x10^-15 Hz. (b) 1.21x10^15 Hz. (c) 2.21x10^15 Hz.
27) An ultraviolet photon of frequency 3.44x10^15 Hz has an energy, hf, of (a) 14.2 eV. (b) 2.27x10^-18 J. (c) a & b.
28) When 3.7-eV photons are incident on a 1.7-eV work function metal, each released electron has a K.E. of (a) 2.0 eV. (b) 5.4 eV. (c) 6.3 eV.
29) 4.7-eV photons are incident on a 1.7-eV work function metal. Each released electron has an energy of (a) 4.8x10^-19 J. (b) 3.0 eV. (c) both a & b.
30) 3.7-eV photons are incident on a 1.7-eV work function metal. Each released electron has a speed of (a) 8.4x10^-5 m/s. (b) 8.4x10^5 m/s. (c) 8.4x10^-15 m/s.
31) A speed of 8.4x10^-5 m/s is not reasonable for a moving electron because (a) electrons always move at the speed of light. (b) this speed has a power of -5 that makes it very close to zero, the same as being stopped. (c) neither a nor b.
32) If the released electrons in a photoelectric effect have an average speed of 9.0x10^5 m/s and the energy of the incident photons on the average is 4.0 eV, the work function of the metal is (a) 1.3 eV. (b) 1.1 eV. (c) 1.7 eV.
33) The wavelength associated with the motion of a proton at a speed of 6.2x10^6 m/s is (a) 6.4x10^-14 m. (b) 9.4x10^-14 m. (c) 4.9x10^-14 m.
34) The diameter of the hydrogen atom (the whereabouts of its electronic cloud) is 0.1 nm or 10^-10 m, called an "Angstrom." The diameter of the nucleus of the hydrogen atom is even 100,000 times smaller, or 10^-15 m, called a "Femtometer (fm)." The wavelength associated with the moving proton in Question 33 is (a) 6.4 fm. (b) 64 fm. (c) 640 fm.
Reaching for a Star... (and finding its diameter!)
In this activity, students review standard metric units of measurement and practice using basic science classroom instruments. To help students relate to these units, they create their own list of approximate representations for selected units of measurement using familiar classroom objects. Students learn that extreme distances can be measured by using the ratios of similar triangles. Triangles are generated by shadows cast by the tree and another, easily measured item. Finally, students are given a challenge to develop a method to approximate the measurement of the diameter of the sun using mathematical ratios.
- To give students their own set of metric unit references helping them understand and remember quantities that those metric units represent.
- To challenge students with an 'extreme' measurement situation and allow them to develop a solution to that problem.
- To encourage the use of simple mathematical ratios to solve a huge measurement problem.
Context for Use
Background
Familiarity with standard metric units of measurement is an important component of doing science and math. Effectively communicating one's findings with others is another important skill. Since English measurements are in common use in this country, students often struggle to relate to the quantities that metric units represent. This set of activities gives students practice with using the tools to obtain standard metric measurements of length, volume, mass, and temperature. Students will also develop a frame of reference when they create a chart of metric measurements of familiar objects. Students will also be challenged to create a measurement method for long distances: first they calculate the height of a tree using ratios and then they are challenged to do a similar calculation to measure the diameter of the sun. Research will show the students that the ancient Greeks were able to work out this problem by using mathematical ratios; no high-tech methods are required!
Description and Teaching Materials
1. Measuring activities are set up at various stations around the classroom: triple beam balances or digital scales for measuring mass, meter sticks and metric rulers for measuring various lengths, graduated cylinders and beakers for measuring volumes, thermometers for measuring temperatures. Each station should have an assortment of common objects or materials to be measured.
2. Students are introduced to the basic units of measuring length, volume, temperature, and mass in the metric system. The proper use of each of the measuring tools – and common sources of error - should be demonstrated by the instructor. Common mistakes include measuring from the edge of the ruler which may not be the beginning of the scale, not deducting the mass of the container when measuring a quantity of material on the triple beam balance, for example. Significant digits and sensitivity of the measuring devices should be addressed when using the various instruments.
3. Distribute a Metric Measurement Reference Chart to each student. Divide the students into small groups; one group for each of the stations. Working in groups, students should find an object that approximately measures each of the given amounts. Invite students to look beyond the provided objects if they want a challenge or wish to add more than one item to their chart.
To assist students who are struggling to match objects to measurements, a 'hint list' may be provided. Students work their way through each of the stations and complete the metric chart.
4. Review the completed charts with the students, checking any suspect measurements. Completed charts (and accurate) chart can be kept in the students' science binders for future reference.
5. Begin a group discussion on how to measure some 'extreme' distances or quantities. How could you measure the height of a tree, for example? How could you measure the distance to the sun? How about the diameter of the moon? Students can share their ideas with the class and discuss the challenges that making such measurements provides.
6. Revisit the 'extreme' measurement problem and ask students to share their ideas. Explain to the students that the ancient Greeks were able to work out this problem by using their knowledge of angles and ratios. This part of the activity can be expanded if math is the focus of the lesson and students have more skills with geometry. If time allows, students can explore more by researching the work of Aristarchus and Eratosthenes. Explain to students that similar triangles (same angles but different side lengths) can be compared to obtain a missing side length.
a. Have students draw a right triangle (graph paper makes it easy) that has sides 3cm X 4cm X 5cm.
b. Now have students draw a similar right triangle with sides that are twice as long (6cm X 8cm X ?). Without measuring the 3rd side, have students predict the size based on a ratio of the corresponding sides from both triangles. This method is outlined in detail on the Youtube videos listed below in Step 7.
c. Students should measure the sides to confirm that this ratio holds true.
7. Give students the challenge of measuring the height of a nearby tree (or building or flagpole). Allow students to discuss their ideas with each other and come up with possible solutions using the concept of similar triangles. If time permits, allow students to test out some their ideas if this can be safely done. You can show students any of the following YouTube videos: http://www.youtube.com/watch?v=31qq1zoQVHY&NR=1 or another good one is at http://www.youtube.com/watch?v=wHCFaGaKt-M&feature=related
Using any of the similar triangle methods shown, have students calculate the measurement of a nearby tall object. Students should compare their measurements for accuracy and discuss possible sources of error.
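As a worked illustration of the shadow method, the short Python sketch below (added here, with made-up measurement values) uses the ratio of corresponding sides of the two similar triangles to estimate the tree's height:

```python
# Height of a tall object from shadows, using similar triangles:
# tree_height / tree_shadow = stick_height / stick_shadow
def height_from_shadows(reference_height_m, reference_shadow_m, target_shadow_m):
    """Return the target object's height, given a reference object's height and both shadow lengths."""
    return reference_height_m * target_shadow_m / reference_shadow_m

# Hypothetical measurements: a 1.0 m meter stick casts a 1.6 m shadow,
# while the tree's shadow is 12.8 m long at the same time of day.
tree_height = height_from_shadows(1.0, 1.6, 12.8)
print(f"Estimated tree height: {tree_height:.1f} m")   # 8.0 m
```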
8. Finally, ask students how they could use what they have learned about setting up ratios and similar lengths and apply it to really huge distances, such as those in outer space. How could they calculate the approximate diameter of the sun? Allow students to share their ideas with each other.
9. *Remind students that they should never look directly at the sun; this will result in damage to their eyes.
A safe way to observe the sun is to create a pinhole viewer.
Create a pinhole viewer and a screen following the basic instructions at:
Other sources of information can be found at:
A very simple set-up for the classroom is shown at:
It will be helpful for the students to see how a pinhole viewer works by showing the projections of a flame through the pinhole (Experiment 1 on the Berkeley.edu site) before showing the projection of the sun's image (Experiment 2 ).
Adding a millimeter ruler to the screen allows for easy measurement of the projected image. You can mount the pinhole viewer on one end of a meter stick and the screen on the other end. Clamp the meter stick to a ring stand or lean it on a chair and aim the pinhole at the sun. Adjust the angle of the meter stick so that the shadows cast by both the pinhole viewer and the screen are aligned.
Allow students time to explore how they might now calculate the diameter of the sun by using the pinhole viewer. They could start with using the pinhole viewer to project an image of known size such as a light bulb. Students can explore calculating the diameter of the light bulb using the pinhole device and then verify their measurements by actually measuring the diameter of the light bulb. Encourage students to think about the information they have and how they could use this information to calculate diameters. Do they need more information to calculate the diameter of the sun?
10. Discuss the students' ideas and lead them in the direction of setting up a ratio of measurements. Ratios are like analogies: this is to that as this other thing is to that other thing:
the diameter of the sun is to the distance to the sun (from earth)
as the diameter of the pinhole image is to the distance from the pinhole to the viewer.
To put the relationship in mathematical form:
diameter of the sun (km) / distance to the sun (km) = diameter of the pinhole image (mm) / distance from the pinhole to the image (mm)
Then, solving for the unknown:
diameter of the sun (km) = distance to the sun (km) x [diameter of the pinhole image (mm) / distance from the pinhole to the image (mm)]
Have students do the research or provide them with the approximate distance from the earth to the sun: 149,600,000 kilometers; and allow them to solve for the unknown to calculate the diameter of the sun. Have students defend their answers and explain why this method works as a method of calculation for extreme distances.
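A short Python sketch (illustrative only; the pinhole measurements shown are hypothetical classroom values) carries out this proportion:

```python
# Diameter of the sun from a pinhole projection, using the proportion
# sun_diameter / sun_distance = image_diameter / pinhole_to_image_distance
SUN_DISTANCE_KM = 149_600_000   # approximate Earth-sun distance

def sun_diameter_km(image_diameter_mm, pinhole_to_image_mm):
    """Return the estimated diameter of the sun in km from pinhole-viewer measurements."""
    return SUN_DISTANCE_KM * image_diameter_mm / pinhole_to_image_mm

# Hypothetical classroom measurements: a 9.3 mm image on a screen 1000 mm from the pinhole.
print(f"Estimated sun diameter: {sun_diameter_km(9.3, 1000):,.0f} km")
# ~1,391,000 km, close to the accepted value of roughly 1,392,000 km.
```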
11. For a challenge, students can attempt to calculate the diameter of the moon using a similar technique.
12. Have students compare their answers to what modern scientists have calculated for the sun's diameter. Students should also be asked to analyze any possible sources for errors in their calculations.
Metric Measurement Reference Chart (Microsoft Word 2007 (.docx) 32kB Jul27 11)
At Home Assignments
Any of the research components of this lesson can be completed for homework. These topics include the contributions of Aristarchus and Eratosthenes, the distance to the sun (along with seasonal variations), and the currently held calculation of the earth's distance to the sun.
Materials
- metric measuring tools: meter sticks and centimeter rulers, thermometers, graduated cylinders and beakers, triple beam balances
- assorted items for measuring
- long tape measure for measuring the shadow of the tree and mirrors (or pans of water)
- Metric Measurement Reference Chart for recording data
- cardboard, foil, tape, meter sticks, ring stands and clamps to build pinhole viewers
- research materials if investigating the work of Aristarchus and Eratosthenes.
Standards
Massachusetts Learning Standards:
Earth and Space Science, Grades 6-8(2006)
10. Compare and contrast properties and conditions of objects in the solar system (i.e., sun, planets, and moons) to those on Earth (i.e., gravitational force, distance from the sun, speed, movement, temperature, and atmospheric conditions).
Mathematics State Core Standards (2011)
7.RP Ratios and Proportional Relationships
Analyze proportional relationships and use them to solve real-world and mathematical problems.
7.EE Expressions and Equations
Use properties of operations to generate equivalent expressions.
Solve real-life and mathematical problems using numerical and algebraic expressions and equations.
Teaching Notes and Tips
- Students may struggle with the math portion of the activities, especially if they have not had much experience working with similar triangles and solving for unknowns using ratios. Check with the students' math teachers and either coordinate your teaching efforts, or time this lesson to follow the appropriate math units. Math teachers usually appreciate the reinforcement!
- Teacher Meeyoung Choi offers the idea of using a millimeter ruler attached to the viewer end of the pinhole device instead of using a grid or plain paper. This allows for easier measurement for any image that is projected onto the viewing screen.
- Building the components for the pinhole viewer would be a great project when you have some time to fill – the day before a holiday, for example. Then the pinhole lesson can go more quickly when the components are already built.
- Day 1 activities could easily be separated by quite a bit of time from Day 2 and 3 activities. Day 2 and 3 activities, however, should go together.
- Students should be able to explain the concept of similar triangles and how they can be utilized to measure very large distances
- Students should know the approximate measure of the distance from the earth to the sun and also the sun's approximate diameter in kilometers
References and Resources
Information on building and using pinhole viewers can be found in many places.
Three good sources are:
Information on using similar triangles/distances to calculate unknown distances can also be found in many middle school math textbooks. On-line sources that can be shared with students include:
The following website not only presents a good historical context for the activities, but also has detailed instructions on how to make a pinhole viewer and has a great review of using the math of similar triangles.
A good reference for distances and sizes of various solar system components: http://solarsystem.nasa.gov/planets/index.cfm
General references for information on the Earth, the Moon and Sun:
Elkins-Tanton, Linda T. The Earth and the Moon. New York: Facts on File, 2010. Print. ( a good discussion of the ancient Greek, Eratosthenes' efforts to measure the radius of the Earth – p. 6)
Elkins-Tanton, Linda T. The Sun, Mercury, and Venus. New York: Facts on File, 2010. Print. ( a good discussion of the ancient Greek, Eratosthenes' efforts to measure the radius of the Earth – p. 6)
In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display and faster decimal calculations. Its drawbacks are the increased complexity of circuits needed to implement mathematical operations and a relatively inefficient encoding—it occupies more space than a pure binary representation.
Though BCD is not as widely used as it once was, decimal fixed-point and floating-point are still important and still used in financial, commercial, and industrial computing; modern decimal floating-point representations use base-10 exponents, but not BCD encodings.
To BCD-encode a decimal number using the common encoding, each decimal digit is stored in a four-bit nibble.
Decimal: 0 1 2 3 4 5 6 7 8 9 BCD: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001
Thus, the BCD encoding for the number 127 would be:
0001 0010 0111
Since most computers store data in eight-bit bytes, there are two common ways of storing four-bit BCD digits in those bytes:
- each digit is stored in one nibble of a byte, with the other nibble being set to all zeros, all ones (as in the EBCDIC code), or to 0011 (as in the ASCII code)
- two digits are stored in each byte.
Unlike binary-encoded numbers, BCD-encoded numbers can easily be displayed by mapping each of the nibbles to a different character. Converting a binary-encoded number to decimal for display is much harder, as this generally involves integer multiplication or divide operations.
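As a concrete illustration (a minimal Python sketch, not part of the original article), encoding a decimal number digit by digit and mapping nibbles back to characters looks like this:

```python
# Simple (unpacked) BCD: one decimal digit per 4-bit nibble.
def to_bcd_nibbles(number):
    """Return the list of 4-bit BCD nibbles for a non-negative decimal integer."""
    return [int(d) for d in str(number)]          # each digit 0-9 fits in one nibble

def bcd_to_string(nibbles):
    """Display is a simple per-digit mapping from nibble value to character."""
    return "".join(str(n) for n in nibbles)

nibbles = to_bcd_nibbles(127)
print([format(n, "04b") for n in nibbles])   # ['0001', '0010', '0111']
print(bcd_to_string(nibbles))                # '127'
```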
BCD in electronics
BCD is very common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By utilizing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware—a designer might choose to use a series of separate identical 7-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple working throughout with BCD can lead to a simpler overall system than converting to 'pure' binary.
The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities.
A widely used variation of the two-digits-per-byte encoding is called packed BCD (or simply packed decimal). All of the upper bytes of a multi-byte word plus the upper four bits (nibble) of the lowest byte are used to store decimal integers. The lower four bits of the lowest byte are used as the sign flag. As an example, a 32 bit word contains 4 bytes or 8 nibbles. Packed BCD uses the upper 7 nibbles to store the integers of a decimal value and uses the lowest nibble to indicate the sign of those integers.
Standard sign values are 1100 (Ch) for positive (+) and 1101 (Dh) for negative (-). Other allowed signs are 1010 (Ah) and 1110 (Eh) for positive and 1011 (Bh) for negative. Some implementations also provide unsigned BCD values with a sign nibble of 1111 (Fh). In packed BCD, the number 127 is represented by "0001 0010 0111 1100" (127Ch) and -127 is represented by "0001 0010 0111 1101" (127Dh).
Sign digit | 8 4 2 1 | Sign | Notes
A | 1 0 1 0 | + |
B | 1 0 1 1 | − |
C | 1 1 0 0 | + | Preferred
D | 1 1 0 1 | − | Preferred
E | 1 1 1 0 | + |
F | 1 1 1 1 | + | Unsigned
No matter how many bytes wide a word is, there are always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to (2n)-1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires ½(d+1) bytes of storage space.
For example, a four-byte (32-bit) word can hold seven decimal digits plus a sign, and can represent values ranging from −9,999,999 to +9,999,999. Thus the number -1,234,567 is 7 digits wide and is encoded as:
0001 0010 0011 0100 0101 0110 0111 1101
(Note that, like character strings, the first byte of the packed decimal – with the most significant two digits – is usually stored in the lowest address in memory, independent of the endianness of the machine).
In contrast, a four-byte binary two's complement integer can represent values from −2,147,483,648 to +2,147,483,647.
While packed BCD does not make optimal use of storage (about 1/6 of the memory used is wasted), conversion to ASCII, EBCDIC, or the various encodings of Unicode is still trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions.
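A small Python sketch (illustrative; it follows the trailing sign-nibble convention described above, and the word size is an assumed four bytes) shows how a signed integer can be packed and unpacked this way:

```python
# Packed BCD: two decimal digits per byte, with the low nibble of the last byte as the sign.
SIGN_POSITIVE = 0xC   # preferred positive sign nibble
SIGN_NEGATIVE = 0xD   # preferred negative sign nibble

def pack_bcd(value, num_bytes=4):
    """Pack a signed integer into num_bytes bytes of packed BCD (digits followed by a sign nibble)."""
    sign = SIGN_NEGATIVE if value < 0 else SIGN_POSITIVE
    digits = str(abs(value)).rjust(2 * num_bytes - 1, "0")   # (2n)-1 digits fit in n bytes
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2))

def unpack_bcd(packed):
    """Recover the signed integer from packed BCD bytes."""
    nibbles = [n for b in packed for n in (b >> 4, b & 0xF)]
    magnitude = int("".join(str(n) for n in nibbles[:-1]))
    return -magnitude if nibbles[-1] in (0xB, 0xD) else magnitude

packed = pack_bcd(-1234567)
print(packed.hex())          # '1234567d'
print(unpack_bcd(packed))    # -1234567
```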
Fixed-point packed decimal
Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I), and provide an implicit decimal point in front of one of the digits. For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the 4th and 5th digits.
12 34 56 7C  →  1 2 3 4 . 5 6 7 +  (i.e., +1,234.567)
If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 210 (1,024) is greater than 103 (1,000), if three decimal digits are encoded together, only 10 bits are needed. Two such encodings are Chen-Ho encoding and Densely Packed Decimal. The latter has the advantage that subsets of the encoding encode two digits in the optimal 7 bits and one digit in 4 bits, as in regular BCD.
Some implementations (notably IBM mainframe systems) support zoned decimal numeric representations. Each decimal digit is stored in one byte, with the lower four bits encoding the digit in BCD form. The upper four bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit. EBCDIC systems use a zone value of 1111 (hex F); this yields bytes in the range F0 to F9 (hex), which are the EBCDIC codes for the characters "0" through "9". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex).
For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123:
F1 F2 D3  →  1 2 −3  (i.e., −123)
EBCDIC zoned decimal conversion table
|BCD Digit||EBCDIC Character||Hexadecimal|
(*) Note: These characters vary depending on the local character code page.
Fixed-point zoned decimal
Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number. For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50:
F1 F2 F7 F9 F5 C0  →  1 2 7 9 . 5 +0  (i.e., +1,279.50)
IBM and BCD
IBM used the terms binary-coded decimal and BCD for 6-bit alphameric codes that represented numbers, upper-case letters and special characters. Some variation of BCD alphamerics was used in most early IBM computers, including the IBM 1620, IBM 1400 series, and non-Decimal Architecture members of the IBM 700/7000 series.
Bit positions in BCD alphamerics were usually labelled B, A, 8, 4, 2 and 1. For encoding digits, B and A were zero. The letter A was encoded (B,A,1).
In the 1620 BCD alphamerics were encoded using digit pairs, with the "zone" in the even digit and the "digit" in the odd digit. Input/Output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
In the Decimal Architecture IBM 7070, IBM 7072, and IBM 7074 alphamerics were encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit. Input/Output translation hardware converted between the internal digit pairs and the external standard six-bit BCD codes.
With the introduction of System/360, IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable length Packed BCD numeric data type was also implemented.
Today, BCD data is still heavily used in IBM processors and databases, such as IBM DB2, mainframes, and Power6. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), Packed BCD, or 'pure' BCD encoding. All of these are used within hardware registers and processing units, and in software.
Addition with BCD
It is possible to perform addition in BCD by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 – 10) when the result has a value of greater-than 9. For example:
- 9 + 8 = 17 = [1001] + [1000] = [0001 0001] in binary.
However, in BCD, there cannot exist a value greater than 9 (1001) per nibble. To correct this, 6 (0110) is added to that sum to get the correct first two digits:
- [0001 0001] + [0000 0110] = [0001 0111]
which gives two nibbles, [0001] and [0111], which correspond to "1" and "7" respectively. This gives 17 in BCD, which is the correct result. This technique can be extended to adding multiple digits, by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9.
See also Douglas Jones' Tutorial.
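The following Python sketch (added as an illustration) applies this add-then-correct rule digit by digit; the function names and digit-list representation are my own choices, not from the article.

```python
# BCD addition: add each digit pair in binary, then add 6 to any sum greater than 9,
# propagating a carry into the next (more significant) digit.
def bcd_add(a_digits, b_digits):
    """Add two equal-length digit lists (most significant digit first), returning digits."""
    result, carry = [], 0
    for x, y in zip(reversed(a_digits), reversed(b_digits)):
        s = x + y + carry              # plain binary addition of one digit pair
        if s > 9:                      # invalid BCD digit: apply the +6 correction
            s += 6
            carry = 1
            s &= 0xF                   # keep only the low nibble
        else:
            carry = 0
        result.append(s)
    if carry:
        result.append(carry)
    return list(reversed(result))

print(bcd_add([0, 9], [0, 8]))    # [1, 7]  ->  9 + 8 = 17
print(bcd_add([2, 5], [4, 7]))    # [7, 2]  ->  25 + 47 = 72
```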
Subtraction with BCD
Subtraction is done by adding the ten's complement of the subtrahend. To represent the sign of a number in BCD, the number 0000 is used to represent a positive number, and 1001 is used to represent a negative number. The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 - 432.
In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 - 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD by the negative sign code, the number -432 can be represented. So, -432 in signed BCD is 1001 0101 0110 1000.
Now that both numbers are represented in signed BCD, they can be added together:
0000 0011 0101 0111 + 1001 0101 0110 1000 = 1001 1000 1011 1111
(0 3 5 7 + 9 5 6 8 = 9 8 11 15)
Since BCD is a form of decimal representation, several digits sums above are invalid. In the event that an invalid entry (any BCD digit greater than 1001) exists, simply add 6 to generate a carry bit and cause the sum to become a valid entry. The reason for adding 6 is because there are 16 possible 4-bit BCD values (since 24 = 16), but only 10 values are valid (0000 through 1001). So, adding 6 to the invalid entries results in the following:
1001 1000 1011 1111 + 0000 0000 0110 0110 = 1001 1001 0010 0101
(9 8 11 15 + 0 0 6 6 = 9 9 2 5)
So, the result of the subtraction is 1001 1001 0010 0101 (-925). To check the answer, note that the first nibble is the sign code (1001), which indicates negative. This seems to be correct, since 357 - 432 should result in a negative number. To check the rest of the digits, represent them in decimal. 1001 0010 0101 is 925. The ten's complement of 925 is 999 - 925 = 074, and 074 + 1 = 075, so the calculated answer is -75. To check, perform standard subtraction to verify that 357 - 432 is -75.
Note that in the event that there are a different number of nibbles being added together (such as 1053 - 122), the number with the fewest digits must first be padded with zeros before taking the ten's complement or subtracting. So, in 1053 - 122, 122 would have to first be represented as 0122, and the ten's complement of 0122 would have to be calculated.
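A Python sketch of this ten's-complement approach (illustrative only; for clarity it works digit-wise in decimal rather than with the +6 nibble correction shown earlier) might look like:

```python
# Subtraction via ten's complement: a - b = a + (ten's complement of b), dropping the final carry.
def tens_complement(digits):
    """Return the ten's complement of a digit list: nine's complement of each digit, plus one."""
    nines = [9 - d for d in digits]
    result, carry = [], 1            # add 1 to the nine's complement
    for d in reversed(nines):
        s = d + carry
        result.append(s % 10)
        carry = s // 10
    return list(reversed(result))

def bcd_subtract(a_digits, b_digits):
    """Compute a - b for equal-length digit lists, returning (is_negative, magnitude_digits)."""
    comp = tens_complement(b_digits)
    total, carry = [], 0
    for x, y in zip(reversed(a_digits), reversed(comp)):
        s = x + y + carry
        total.append(s % 10)
        carry = s // 10
    total = list(reversed(total))
    if carry:                                    # carry out: result is positive (drop the carry)
        return False, total
    return True, tens_complement(total)          # no carry: result is negative, stored as ten's complement

print(bcd_subtract([3, 5, 7], [4, 3, 2]))   # (True, [0, 7, 5])   ->  -75
print(bcd_subtract([4, 3, 2], [3, 5, 7]))   # (False, [0, 7, 5])  ->  +75
```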
The binary-coded decimal scheme described in this article is the most common encoding, but there are many others. The method here can be referred to as Simple Binary-Coded Decimal (SBCD) or BCD 8421. In the headers to the table, the '8 4 2 1', etc., indicates the weight of each bit shown; note that in the 5th column two of the weights are negative. Both ASCII and EBCDIC character codes for the digits are examples of zoned BCD, and are also shown in the table.
The following table represents decimal digits from 0 to 9 in various BCD systems:
Digit | BCD 8 4 2 1 | Excess-3 (Stibitz code) | BCD 2 4 2 1 (Aiken code) | BCD 8 4 −2 −1 | IBM 702, 705, 7080, 1401 (8 4 2 1) | ASCII 0000 8421 | EBCDIC 1111 8421
0 | 0000 | 0011 | 0000 | 0000 | 1010 | 0011 0000 | 1111 0000
1 | 0001 | 0100 | 0001 | 0111 | 0001 | 0011 0001 | 1111 0001
2 | 0010 | 0101 | 0010 | 0110 | 0010 | 0011 0010 | 1111 0010
3 | 0011 | 0110 | 0011 | 0101 | 0011 | 0011 0011 | 1111 0011
4 | 0100 | 0111 | 0100 | 0100 | 0100 | 0011 0100 | 1111 0100
5 | 0101 | 1000 | 1011 | 1011 | 0101 | 0011 0101 | 1111 0101
6 | 0110 | 1001 | 1100 | 1010 | 0110 | 0011 0110 | 1111 0110
7 | 0111 | 1010 | 1101 | 1001 | 0111 | 0011 0111 | 1111 0111
8 | 1000 | 1011 | 1110 | 1000 | 1000 | 0011 1000 | 1111 1000
9 | 1001 | 1100 | 1111 | 1111 | 1001 | 0011 1001 | 1111 1001
In 1972, the U.S. Supreme Court overturned a lower court decision which had allowed a patent for converting BCD encoded numbers to binary on a computer (see Gottschalk v Benson). This was an important case in determining the patentability of software and algorithms.
Comparison with pure binary
- Many non-integral values, such as decimal 0.2, have an infinite place-value representation in binary (.001100110011...) but have a finite place-value in binary-coded decimal (0.2). Consequently a system based on binary-coded decimal representations of decimal fractions avoids errors representing and calculating such values.
- Scaling by a factor of 10 (or a power of 10) is simple; this is useful when a decimal scaling factor is needed to represent a non-integer quantity (e.g., in financial calculations)
- Rounding at a decimal digit boundary is simpler. Addition and subtraction in decimal does not require rounding.
- Alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact, shift.
- Conversion to a character form or for display (e.g., to a text-based format such as XML, or to drive signals for a seven-segment display) is a simple per-digit mapping, and can be done in linear (O(n)) time. Conversion from pure binary involves relatively complex logic that spans digits, and for large numbers no linear-time conversion algorithm is known (see Binary numeral system).
- Some operations are more complex to implement. Adders require extra logic to cause them to wrap and generate a carry early. 15–20% more circuitry is needed for BCD add compared to pure binary. Multiplication requires the use of algorithms that are somewhat more complex than shift-mask-add (a binary multiplication, requiring binary shifts and adds or the equivalent, per-digit or group of digits is required)
- Standard BCD requires four bits per digit, roughly 20% more space than a binary encoding. When packed so that three digits are encoded in ten bits, the storage overhead is reduced to about 0.34%, at the expense of an encoding that is unaligned with the 8-bit byte boundaries common on existing hardware, resulting in slower implementations on these systems.
- Practical existing implementations of BCD are typically slower than operations on binary representations, especially on embedded systems, due to limited processor support for native BCD operations.
The BIOS in many PCs keeps the date and time in BCD format, probably for historical reasons (it avoided the need for binary to ASCII conversion).
Various BCD implementations exist that employ other representations for numbers. Programmable calculators manufactured by Texas Instruments, Hewlett-Packard, and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent. The extra bits of the sign digit may be used to indicate special numeric values, such as infinity, underflow/overflow, and error (a blinking display).
If error in representation and computation is the primary concern, rather than efficiency of conversion to and from display form, a scaled binary representation may be used, which stores a decimal number as a binary-encoded integer and a binary-encoded signed decimal exponent. For example, 0.2 can be represented as 2x10^-1. This representation allows rapid multiplication and division, but may require multiplication by a power of 10 during addition and subtraction to align the decimals. It is particularly appropriate for applications with a fixed number of decimal places, which do not require adjustment during addition and subtraction and need not store the exponent explicitly.
Chen-Ho encoding provides a boolean transformation for converting groups of three BCD-encoded digits to and from 10-bit values that can be efficiently encoded in hardware with only 2 or 3 gate delays. Densely Packed Decimal is a similar scheme that deals more efficiently and conveniently with the case where the number of digits is not a multiple of 3.
See also the Decimal Arithmetic Bibliography
In electronics, a diode is a two-terminal electronic component with asymmetric conductance, it has low (ideally zero) resistance to current flow in one direction, and high (ideally infinite) resistance in the other. A semiconductor diode, the most common type today, is a crystalline piece of semiconductor material with a p–n junction connected to two electrical terminals. A vacuum tube diode is a vacuum tube with two electrodes, a plate (anode) and a heated cathode.
The most common function of a diode is to allow an electric current to pass in one direction (called the diode's forward direction), while blocking current in the opposite direction (the reverse direction). Thus, the diode can be viewed as an electronic version of a check valve. This unidirectional behavior is called rectification, and is used to convert alternating current to direct current, including extraction of modulation from radio signals in radio receivers—these diodes are forms of rectifiers.
However, diodes can have more complicated behavior than this simple on–off action. Semiconductor diodes begin conducting electricity only if a certain threshold voltage or cut-in voltage is present in the forward direction (a state in which the diode is said to be forward-biased). The voltage drop across a forward-biased diode varies only a little with the current, and is a function of temperature; this effect can be used as a temperature sensor or voltage reference.
Semiconductor diodes' nonlinear current–voltage characteristic can be tailored by varying the semiconductor materials and doping, introducing impurities into the materials. These are exploited in special-purpose diodes that perform many different functions. For example, diodes are used to regulate voltage (Zener diodes), to protect circuits from high voltage surges (avalanche diodes), to electronically tune radio and TV receivers (varactor diodes), to generate radio frequency oscillations (tunnel diodes, Gunn diodes, IMPATT diodes), and to produce light (light emitting diodes). Tunnel diodes exhibit negative resistance, which makes them useful in some types of circuits.
Diodes were the first semiconductor electronic devices. The discovery of crystals' rectifying abilities was made by German physicist Ferdinand Braun in 1874. The first semiconductor diodes, called cat's whisker diodes, developed around 1906, were made of mineral crystals such as galena. Today most diodes are made of silicon, but other semiconductors such as germanium are sometimes used.
Thermionic (vacuum tube) diodes and solid state (semiconductor) diodes were developed separately, at approximately the same time, in the early 1900s, as radio receiver detectors. Until the 1950s vacuum tube diodes were more often used in radios because semiconductor alternatives (Cat's Whiskers) were less stable, and because most receiving sets would have vacuum tubes for amplification that could easily have diodes included in the tube (for example the 12SQ7 double-diode triode), and vacuum tube rectifiers and gas-filled rectifiers handled some high voltage/high current rectification tasks beyond the capabilities of semiconductor diodes (such as selenium rectifiers) available at the time.
Discovery of vacuum tube diodes
In 1873, Frederick Guthrie discovered the basic principle of operation of thermionic diodes. Guthrie discovered that a positively charged electroscope could be discharged by bringing a grounded piece of white-hot metal close to it (but not actually touching it). The same did not apply to a negatively charged electroscope, indicating that the current flow was only possible in one direction.
Thomas Edison independently rediscovered the principle on February 13, 1880. At the time, Edison was investigating why the filaments of his carbon-filament light bulbs nearly always burned out at the positive-connected end. He had a special bulb made with a metal plate sealed into the glass envelope. Using this device, he confirmed that an invisible current flowed from the glowing filament through the vacuum to the metal plate, but only when the plate was connected to the positive supply.
Edison devised a circuit where his modified light bulb effectively replaced the resistor in a DC voltmeter. Edison was awarded a patent for this invention in 1884. Since there was no apparent practical use for such a device at the time, the patent application was most likely simply a precaution in case someone else did find a use for the so-called Edison effect.
About 20 years later, John Ambrose Fleming (scientific adviser to the Marconi Company and former Edison employee) realized that the Edison effect could be used as a precision radio detector. Fleming patented the first true thermionic diode, the Fleming valve, in Britain on November 16, 1904 (followed by U.S. Patent 803,684 in November 1905).
In 1874 German scientist Karl Ferdinand Braun discovered the "unilateral conduction" of crystals. Braun patented the crystal rectifier in 1899. Copper oxide and selenium rectifiers were developed for power applications in the 1930s.
Indian scientist Jagadish Chandra Bose was the first to use a crystal for detecting radio waves in 1894. He also worked with microwaves in the centimeter and also the millimeter range. The crystal detector was developed into a practical device for wireless telegraphy by Greenleaf Whittier Pickard, who invented a silicon crystal detector in 1903 and received a patent for it on November 20, 1906. Other experimenters tried a variety of other substances, of which the most widely used was the mineral galena (lead sulfide). Other substances offered slightly better performance, but galena was most widely used because it had the advantage of being cheap and easy to obtain. The crystal detector in these early crystal radio sets consisted of an adjustable wire point-contact (the so-called "cat's whisker"), which could be manually moved over the face of the crystal in order to obtain optimum signal. This troublesome device was superseded by thermionic diodes by the 1920s, but after high purity semiconductor materials became available, the crystal detector returned to dominant use with the advent of inexpensive fixed-germanium diodes in the 1950s. Bell Labs also developed a germanium diode for microwave reception, and AT&T used these in their microwave towers that criss-crossed the nation starting in the late 1940s, carrying telephone and network television signals. Bell Labs did not develop a satisfactory thermionic diode for microwave reception.
At the time of their invention, such devices were known as rectifiers. In 1919, the year the tetrode was invented, William Henry Eccles coined the term diode from the Greek roots di (from δί), meaning "two", and ode (from ὁδός), meaning "path". (However, the word diode itself, like triode, tetrode, pentode, and hexode, was already in use as a term of multiplex telegraphy; see, for example, The telegraphic journal and electrical review, September 10, 1886, p. 252.)
Typical applications of vacuum tube diodes included:
- power supply (half-wave, full-wave or bridge) rectifiers
- CRT (especially television) extra-high-voltage flyback, "damper" or "booster" diodes, such as the 6AU4GTA
Thermionic diodes are thermionic-valve devices (also known as vacuum tubes, tubes, or valves), which are arrangements of electrodes surrounded by a vacuum within a glass envelope. Early examples were fairly similar in appearance to incandescent light bulbs.
In thermionic-valve diodes, a current through the heater filament indirectly heats the thermionic cathode, another internal electrode treated with a mixture of barium and strontium oxides, which are oxides of alkaline earth metals; these substances are chosen because they have a small work function. (Some valves use direct heating, in which a tungsten filament acts as both heater and cathode.) The heat causes thermionic emission of electrons into the vacuum. In forward operation, a surrounding metal electrode called the anode is positively charged so that it electrostatically attracts the emitted electrons. However, electrons are not easily released from the unheated anode surface when the voltage polarity is reversed. Hence, any reverse flow is negligible.
In a mercury-arc valve, an arc forms between a refractory conductive anode and a pool of liquid mercury acting as cathode. Such units were made with ratings up to hundreds of kilowatts, and were important in the development of HVDC power transmission. Some types of smaller thermionic rectifiers sometimes had mercury vapor fill to reduce their forward voltage drop and to increase current rating over thermionic hard-vacuum devices.
Until the development of semiconductor diodes, valve diodes were used in analog signal applications and as rectifiers in many power supplies. They rapidly ceased to be used for most purposes, an exception being some high-voltage high-current applications subject to large transient peaks, where their robustness to abuse still makes them the best choice. As of 2012, some enthusiasts favoured vacuum tube amplifiers for audio applications, sometimes using valve rather than semiconductor rectifiers.
The symbol used for a semiconductor diode in a circuit diagram specifies the type of diode; the light-emitting diode (LED), for example, has its own symbol. There are alternate symbols for some types of diodes, though the differences are minor.
A point-contact diode works the same way as the junction diodes described below, but its construction is simpler. A block of n-type semiconductor is built, and a conducting sharp-point contact made with some group-3 metal is placed in contact with the semiconductor. Some metal migrates into the semiconductor to make a small region of p-type semiconductor near the contact. The long-popular 1N34 germanium version is still used in radio receivers as a detector and occasionally in specialized analog electronics.
Most diodes today are silicon junction diodes. A junction is formed between the p and n regions; the thin zone at this junction that becomes emptied of charge carriers is called the depletion region.
p–n junction diode
A p–n junction diode is made of a crystal of semiconductor. Impurities are added to it to create a region on one side that contains negative charge carriers (electrons), called n-type semiconductor, and a region on the other side that contains positive charge carriers (holes), called p-type semiconductor. When the two materials, n-type and p-type, are joined, a momentary flow of electrons occurs from the n side to the p side, resulting in a third region where no charge carriers are present. This is called the depletion region, because of the absence of charge carriers (electrons and holes in this case). The diode's terminals are attached to the n-type and p-type regions. The boundary between these two regions, called a p–n junction, is where the action of the diode takes place. The crystal allows electrons to flow from the N-type side (called the cathode) to the P-type side (called the anode), but not in the opposite direction.
A semiconductor diode's behavior in a circuit is given by its current–voltage characteristic, or I–V graph. The shape of the curve is determined by the transport of charge carriers through the so-called depletion layer or depletion region that exists at the p–n junction between differing semiconductors. When a p–n junction is first created, conduction-band (mobile) electrons from the N-doped region diffuse into the P-doped region where there is a large population of holes (vacant places for electrons) with which the electrons "recombine". When a mobile electron recombines with a hole, both hole and electron vanish, leaving behind an immobile positively charged donor (dopant) on the N side and negatively charged acceptor (dopant) on the P side. The region around the p–n junction becomes depleted of charge carriers and thus behaves as an insulator.
However, the width of the depletion region (called the depletion width) cannot grow without limit. For each electron–hole pair that recombines, a positively charged dopant ion is left behind in the N-doped region, and a negatively charged dopant ion is left behind in the P-doped region. As recombination proceeds and more ions are created, an increasing electric field develops through the depletion zone that acts to slow and finally stop recombination. At this point, there is a "built-in" potential across the depletion zone.
If an external voltage is placed across the diode with the same polarity as the built-in potential, the depletion zone continues to act as an insulator, preventing any significant electric current flow (unless electron/hole pairs are actively being created in the junction by, for instance, light; see photodiode). This is the reverse bias phenomenon. However, if the polarity of the external voltage opposes the built-in potential, recombination can once again proceed, resulting in substantial electric current through the p–n junction (i.e. substantial numbers of electrons and holes recombine at the junction). For silicon diodes, the built-in potential is approximately 0.7 V (0.3 V for germanium and 0.2 V for Schottky diodes). Thus, if an external current is passed through the diode, about 0.7 V will be developed across the diode such that the P-doped region is positive with respect to the N-doped region and the diode is said to be "turned on" as it has a forward bias.
A diode's I–V characteristic can be approximated by four regions of operation.
At very large reverse bias, beyond the peak inverse voltage or PIV, a process called reverse breakdown occurs that causes a large increase in current (i.e., a large number of electrons and holes are created at, and move away from the p–n junction) that usually damages the device permanently. The avalanche diode is deliberately designed for use in the avalanche region. In the Zener diode, the concept of PIV is not applicable. A Zener diode contains a heavily doped p–n junction allowing electrons to tunnel from the valence band of the p-type material to the conduction band of the n-type material, such that the reverse voltage is "clamped" to a known value (called the Zener voltage), and avalanche does not occur. Both devices, however, do have a limit to the maximum current and power in the clamped reverse-voltage region. Also, following the end of forward conduction in any diode, there is reverse current for a short time. The device does not attain its full blocking capability until the reverse current ceases.
The second region, at reverse biases more positive than the PIV, has only a very small reverse saturation current. In the reverse bias region for a normal P–N rectifier diode, the current through the device is very low (in the µA range). However, this is temperature dependent, and at sufficiently high temperatures, a substantial amount of reverse current can be observed (mA or more).
The third region is forward bias at small voltages, where only a small forward current is conducted.
As the potential difference is increased above an arbitrarily defined "cut-in voltage" or "on-voltage" or "diode forward voltage drop (Vd)", the diode current becomes appreciable (the level of current considered "appreciable" and the value of cut-in voltage depends on the application), and the diode presents a very low resistance. The current–voltage curve is exponential. In a normal silicon diode at rated currents, the arbitrary cut-in voltage is defined as 0.6 to 0.7 volts. The value is different for other diode types—Schottky diodes can be rated as low as 0.2 V, germanium diodes 0.25 to 0.3 V, and red or blue light-emitting diodes (LEDs) can have values of 1.4 V and 4.0 V respectively.
At higher currents the forward voltage drop of the diode increases. A drop of 1 V to 1.5 V is typical at full rated current for power diodes.
Shockley diode equation
The Shockley ideal diode equation or the diode law (named after transistor co-inventor William Bradford Shockley) gives the I–V characteristic of an ideal diode in either forward or reverse bias (or no bias). The Shockley ideal diode equation is I = IS ( e^(VD/(n·VT)) − 1 ), where for the ideal equation the ideality factor n is equal to 1, and:
- I is the diode current,
- IS is the reverse bias saturation current (or scale current),
- VD is the voltage across the diode,
- VT is the thermal voltage, and
- n is the ideality factor, also known as the quality factor or sometimes emission coefficient. The ideality factor n typically varies from 1 to 2 (though can in some cases be higher), depending on the fabrication process and semiconductor material and in many cases is assumed to be approximately equal to 1 (thus the notation n is omitted). The ideality factor does not form part of the Shockley ideal diode equation, and was added to account for imperfect junctions as observed in real transistors. By setting n = 1 above, the equation reduces to the Shockley ideal diode equation.
The thermal voltage VT is approximately 25.85 mV at 300 K, a temperature close to "room temperature" commonly used in device simulation software. At any temperature it is a known constant defined by VT = kT/q, where k is Boltzmann's constant, T is the absolute temperature, and q is the magnitude of the electron charge.
The reverse saturation current, IS, is not constant for a given device, but varies with temperature; usually more significantly than VT, so that VD typically decreases as T increases.
The Shockley ideal diode equation or the diode law is derived with the assumption that the only processes giving rise to the current in the diode are drift (due to electrical field), diffusion, and thermal recombination–generation (R–G) (this equation is derived by setting n = 1 above). It also assumes that the R–G current in the depletion region is insignificant. This means that the Shockley ideal diode equation doesn't account for the processes involved in reverse breakdown and photon-assisted R–G. Additionally, it doesn't describe the "leveling off" of the I–V curve at high forward bias due to internal resistance. Introducing the ideality factor, n, accounts for recombination and generation of carriers.
Under reverse bias voltages the exponential in the diode equation is negligible, and the current is a constant (negative) reverse current value of −IS. The reverse breakdown region is not modeled by the Shockley diode equation.
For even rather small forward bias voltages the exponential is very large because the thermal voltage is very small, so the subtracted '1' in the diode equation is negligible and the forward diode current is often approximated as I ≈ IS e^(VD/(n·VT)).
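The two limiting cases are easy to check numerically. Below is a minimal Python sketch of the Shockley equation; the saturation current of 1 pA and n = 1 are illustrative values only, since real parts vary widely:

```python
import math

# Sketch of the Shockley diode equation, I = Is * (exp(Vd / (n * Vt)) - 1),
# with illustrative parameter values (Is and n differ widely between real parts).
K_B = 1.380649e-23        # Boltzmann's constant, J/K
Q = 1.602176634e-19       # elementary charge, C

def thermal_voltage(temp_kelvin: float = 300.0) -> float:
    """Thermal voltage kT/q, about 25.85 mV at 300 K."""
    return K_B * temp_kelvin / Q

def diode_current(v_d: float, i_s: float = 1e-12, n: float = 1.0,
                  temp_kelvin: float = 300.0) -> float:
    """Diode current for a given voltage across the diode."""
    v_t = thermal_voltage(temp_kelvin)
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

print(thermal_voltage())       # ~0.02585 V
print(diode_current(-1.0))     # reverse bias: essentially -Is
print(diode_current(0.6))      # forward bias: milliamp-scale current for these parameters
```

With these assumed parameters, a 1 V reverse bias returns essentially −IS, while 0.6 V of forward bias already gives a current around ten milliamps, showing the exponential turn-on described above.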
The use of the diode equation in circuit problems is illustrated in the article on diode modeling.
For circuit design, a small-signal model of the diode behavior often proves useful. A specific example of diode modeling is discussed in the article on small-signal circuits.
Following the end of forward conduction in a p–n type diode, a reverse current flows for a short time. The device does not attain its blocking capability until the mobile charge in the junction is depleted.
The effect can be significant when switching large currents very quickly. A certain amount of "reverse recovery time" tr (on the order of tens of nanoseconds to a few microseconds) may be required to remove the reverse recovery charge Qr from the diode. During this recovery time, the diode can actually conduct in the reverse direction. In certain real-world cases it can be important to consider the losses incurred by this non-ideal diode effect. However, when the slew rate of the current is not so severe (e.g. at line frequency) the effect can be safely ignored. For most applications, the effect is also negligible for Schottky diodes.
The reverse current ceases abruptly when the stored charge is depleted; this abrupt stop is exploited in step recovery diodes for generation of extremely short pulses.
Types of semiconductor diode
There are several types of p–n junction diodes. Some emphasize a particular physical aspect of the diode (often by geometric scaling, doping level, or choice of electrodes), some are simply ordinary diodes used in a special circuit, and some are really different devices altogether, such as the Gunn diode, the laser diode, and the MOSFET:
Normal (p–n) diodes, which operate as described above, are usually made of doped silicon or, more rarely, germanium. Before the development of silicon power rectifier diodes, cuprous oxide and later selenium were used; their low efficiency gave them a much higher forward voltage drop (typically 1.4 to 1.7 V per "cell", with multiple cells stacked to increase the peak inverse voltage rating in high-voltage rectifiers), and they required a large heat sink (often an extension of the diode's metal substrate), much larger than a silicon diode of the same current ratings would require. The vast majority of all diodes are the p–n diodes found in CMOS integrated circuits, which include two diodes per pin and many other internal diodes.
Avalanche diodes
- Diodes that conduct in the reverse direction when the reverse bias voltage exceeds the breakdown voltage. These are electrically very similar to Zener diodes, and are often mistakenly called Zener diodes, but break down by a different mechanism, the avalanche effect. This occurs when the reverse electric field across the p–n junction causes a wave of ionization, reminiscent of an avalanche, leading to a large current. Avalanche diodes are designed to break down at a well-defined reverse voltage without being destroyed. The difference between the avalanche diode (which has a reverse breakdown above about 6.2 V) and the Zener is that the channel length of the former exceeds the mean free path of the electrons, so there are collisions between them on the way out. The only practical difference is that the two types have temperature coefficients of opposite polarities.
Cat's whisker or crystal diodes
- These are a type of point-contact diode. The cat's whisker diode consists of a thin or sharpened metal wire pressed against a semiconducting crystal, typically galena or a piece of coal. The wire forms the anode and the crystal forms the cathode. Cat's whisker diodes were also called crystal diodes and found application in crystal radio receivers. Cat's whisker diodes are generally obsolete, but may be available from a few manufacturers.
Constant-current diodes
- These are actually a JFET with the gate shorted to the source, and function as a two-terminal current limiter, the current analog of the voltage-limiting Zener diode. They allow a current through them to rise to a certain value, and then level off at a specific value. Also called CLDs, constant-current diodes, diode-connected transistors, or current-regulating diodes.
Tunnel diodes
- These have a region of operation showing negative resistance caused by quantum tunneling, allowing amplification of signals and very simple bistable circuits. Due to the high carrier concentration, tunnel diodes are very fast, and may be used at low (mK) temperatures, in high magnetic fields, and in high-radiation environments. Because of these properties, they are often used in spacecraft.
Gunn diodes
- These are similar to tunnel diodes in that they are made of materials such as GaAs or InP that exhibit a region of negative differential resistance. With appropriate biasing, dipole domains form and travel across the diode, allowing high frequency microwave oscillators to be built.
Light-emitting diodes (LEDs)
- In a diode formed from a direct band-gap semiconductor, such as gallium arsenide, carriers that cross the junction emit photons when they recombine with the majority carrier on the other side. Depending on the material, wavelengths (or colors) from the infrared to the near ultraviolet may be produced. The forward potential of these diodes depends on the wavelength of the emitted photons: 2.1 V corresponds to red, 4.0 V to violet. The first LEDs were red and yellow, and higher-frequency diodes have been developed over time. All LEDs produce incoherent, narrow-spectrum light; "white" LEDs are actually combinations of three LEDs of different colors, or a blue LED with a yellow scintillator coating. LEDs can also be used as low-efficiency photodiodes in signal applications. An LED may be paired with a photodiode or phototransistor in the same package, to form an opto-isolator.
Laser diodes
- When an LED-like structure is contained in a resonant cavity formed by polishing the parallel end faces, a laser can be formed. Laser diodes are commonly used in optical storage devices and for high speed optical communication.
Thermal diodes
- This term is used both for conventional p–n diodes used to monitor temperature due to their varying forward voltage with temperature, and for Peltier heat pumps for thermoelectric heating and cooling. Peltier heat pumps may be made from semiconductors; although they do not have any rectifying junctions, they use the differing behaviour of charge carriers in N- and P-type semiconductors to move heat.
Photodiodes
- All semiconductors are subject to optical charge carrier generation. This is typically an undesired effect, so most semiconductors are packaged in light-blocking material. Photodiodes are intended to sense light (photodetectors), so they are packaged in materials that allow light to pass, and are usually PIN diodes (the kind of diode most sensitive to light). A photodiode can be used in solar cells, in photometry, or in optical communications. Multiple photodiodes may be packaged in a single device, either as a linear array or as a two-dimensional array. These arrays should not be confused with charge-coupled devices.
PIN diodes
- A PIN diode has a central un-doped, or intrinsic, layer, forming a p-type/intrinsic/n-type structure. They are used as radio frequency switches and attenuators. They are also used as large volume ionizing radiation detectors and as photodetectors. PIN diodes are also used in power electronics, as their central layer can withstand high voltages. Furthermore, the PIN structure can be found in many power semiconductor devices, such as IGBTs, power MOSFETs, and thyristors.
Schottky diodes
- Schottky diodes are constructed from a metal-to-semiconductor contact. They have a lower forward voltage drop than p–n junction diodes. Their forward voltage drop at forward currents of about 1 mA is in the range 0.15 V to 0.45 V, which makes them useful in voltage clamping applications and prevention of transistor saturation. They can also be used as low-loss rectifiers, although their reverse leakage current is in general higher than that of other diodes. Schottky diodes are majority carrier devices and so do not suffer from minority carrier storage problems that slow down many other diodes—so they have a faster reverse recovery than p–n junction diodes. They also tend to have much lower junction capacitance than p–n diodes, which provides for high switching speeds and their use in high-speed circuitry and RF devices such as switched-mode power supplies, mixers, and detectors.
Super barrier diodes
- Super barrier diodes are rectifier diodes that incorporate the low forward voltage drop of the Schottky diode with the surge-handling capability and low reverse leakage current of a normal p–n junction diode.
Gold-doped diodes
- As a dopant, gold (or platinum) acts as a recombination center, which helps the fast recombination of minority carriers. This allows the diode to operate at signal frequencies, at the expense of a higher forward voltage drop. Gold-doped diodes are faster than other p–n diodes (but not as fast as Schottky diodes). They also have less reverse-current leakage than Schottky diodes (though not as little as other p–n diodes). A typical example is the 1N914.
Snap-off or Step recovery diodes
- The term step recovery relates to the form of the reverse recovery characteristic of these devices. After a forward current has been passing in an SRD and the current is interrupted or reversed, the reverse conduction will cease very abruptly (as in a step waveform). SRDs can, therefore, provide very fast voltage transitions by the very sudden disappearance of the charge carriers.
Stabistors or Forward Reference Diodes
- The term stabistor refers to a special type of diode featuring extremely stable forward voltage characteristics. These devices are specially designed for low-voltage stabilization applications that require a guaranteed voltage over a wide current range and high stability over temperature.
Transient-voltage-suppression (TVS) diodes
- These are avalanche diodes designed specifically to protect other semiconductor devices from high-voltage transients. Their p–n junctions have a much larger cross-sectional area than those of a normal diode, allowing them to conduct large currents to ground without sustaining damage.
Varicap or varactor diodes
- These are used as voltage-controlled capacitors. These are important in PLL (phase-locked loop) and FLL (frequency-locked loop) circuits, allowing tuning circuits, such as those in television receivers, to lock quickly. They also enabled tunable oscillators in early discrete tuning of radios, where a cheap and stable, but fixed-frequency, crystal oscillator provided the reference frequency for a voltage-controlled oscillator.
Zener diodes
- Diodes that can be made to conduct backward. This effect, called Zener breakdown, occurs at a precisely defined voltage, allowing the diode to be used as a precision voltage reference. In practical voltage reference circuits, Zener and switching diodes are connected in series and opposite directions to balance the temperature coefficient to near-zero. Some devices labeled as high-voltage Zener diodes are actually avalanche diodes (see above). Two (equivalent) Zeners in series and in reverse order, in the same package, constitute a transient absorber (or Transorb, a registered trademark). The Zener diode is named for Dr. Clarence Melvin Zener of Carnegie Mellon University, inventor of the device.
Other uses for semiconductor diodes include sensing temperature, and computing analog logarithms (see operational amplifier applications).
Numbering and coding schemes
The standardized 1N-series numbering system (EIA370) was introduced in the US by EIA/JEDEC (Joint Electron Device Engineering Council) about 1960. Among the most popular devices in this series were: 1N34A/1N270 (germanium signal), 1N914/1N4148 (silicon signal), 1N4001–1N4007 (silicon 1 A power rectifier) and 1N54xx (silicon 3 A power rectifier).
The JIS semiconductor designation system has all semiconductor diode designations starting with "1S".
The European Pro Electron coding system for active components was introduced in 1966 and comprises two letters followed by the part code. The first letter represents the semiconductor material used for the component (A = germanium and B = silicon) and the second letter represents the general function of the part (for diodes: A = low-power/signal, B = variable capacitance, X = multiplier, Y = rectifier and Z = voltage reference), for example (a small decoding sketch follows the list below):
- AA-series germanium low-power/signal diodes (e.g.: AA119)
- BA-series silicon low-power/signal diodes (e.g.: BAT18 Silicon RF Switching Diode)
- BY-series silicon rectifier diodes (e.g.: BY127 1250V, 1A rectifier diode)
- BZ-series silicon Zener diodes (e.g.: BZY88C4V7 4.7V Zener diode)
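As an illustration only, the two-letter prefix convention above can be decoded mechanically. The sketch below covers just the letters listed in the text and is not a complete Pro Electron reference:

```python
# Minimal decoder for the two-letter Pro Electron prefix described above.
# Only the letters mentioned in the text are included; real parts use more codes.
MATERIAL = {"A": "germanium", "B": "silicon"}
FUNCTION = {
    "A": "low-power/signal diode",
    "B": "variable-capacitance diode",
    "X": "multiplier diode",
    "Y": "rectifier diode",
    "Z": "voltage-reference (Zener) diode",
}

def decode_pro_electron(part_number: str) -> str:
    """Describe a part from its first two letters (illustrative, not exhaustive)."""
    material = MATERIAL.get(part_number[0].upper(), "unknown material")
    function = FUNCTION.get(part_number[1].upper(), "unknown function")
    return f"{part_number}: {material} {function}"

print(decode_pro_electron("AA119"))       # germanium low-power/signal diode
print(decode_pro_electron("BZY88C4V7"))   # silicon voltage-reference (Zener) diode
```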
Other common numbering / coding systems (generally manufacturer-driven) include:
- GD-series germanium diodes (e.g.: GD9) – this is a very old coding system
- OA-series germanium diodes (e.g.: OA47) – a coding sequence developed by Mullard, a UK company
As well as these common codes, many manufacturers or organisations have their own systems too – for example:
- HP diode 1901-0044 = JEDEC 1N4148
- UK military diode CV448 = Mullard type OA81 = GEC type GEX23
In optics, the equivalent device to the diode, but for laser light, is the optical isolator, also known as an optical diode, which allows light to pass in only one direction. It uses a Faraday rotator as the main component.
The first use for the diode was the demodulation of amplitude modulated (AM) radio broadcasts. The history of this discovery is treated in depth in the radio article. In summary, an AM signal consists of alternating positive and negative peaks of a radio carrier wave, whose amplitude or envelope is proportional to the original audio signal. The diode (originally a crystal diode) rectifies the AM radio frequency signal, leaving only the positive peaks of the carrier wave. The audio is then extracted from the rectified carrier wave using a simple filter and fed into an audio amplifier or transducer, which generates sound waves.
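The rectify-then-filter chain described above can be sketched numerically. The frequencies below are made up, and the diode is modeled ideally (negative half-cycles simply discarded), so this illustrates the principle rather than any particular receiver:

```python
import numpy as np

# Numerical sketch of diode (envelope) detection of an AM signal.
fs = 200_000                        # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)      # 20 ms of signal
carrier_f = 10_000                  # 10 kHz "radio" carrier (illustrative)
audio_f = 300                       # 300 Hz audio tone (illustrative)

audio = np.sin(2 * np.pi * audio_f * t)
am = (1 + 0.5 * audio) * np.sin(2 * np.pi * carrier_f * t)   # amplitude-modulated carrier

rectified = np.maximum(am, 0.0)     # ideal diode: passes only the positive half-cycles

# Simple low-pass filter (moving average) strips the carrier and keeps the envelope
window = int(fs / carrier_f) * 2
kernel = np.ones(window) / window
recovered = np.convolve(rectified, kernel, mode="same")

print(recovered[:10])               # recovered audio envelope (plus a DC offset)
```

The moving average stands in for the "simple filter" mentioned above; a real receiver would use an RC network, but the sequence of operations is the same.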
Rectifiers are constructed from diodes and are used to convert alternating current (AC) electricity into direct current (DC). Automotive alternators are a common example, where the diode, which rectifies the AC into DC, provides better performance than the commutator of the earlier dynamo. Similarly, diodes are also used in Cockcroft–Walton voltage multipliers to convert AC into higher DC voltages.
Diodes are frequently used to conduct damaging high voltages away from sensitive electronic devices. They are usually reverse-biased (non-conducting) under normal circumstances. When the voltage rises above the normal range, the diodes become forward-biased (conducting). For example, diodes are used in (stepper motor and H-bridge) motor controller and relay circuits to de-energize coils rapidly without the damaging voltage spikes that would otherwise occur. (Any diode used in such an application is called a flyback diode). Many integrated circuits also incorporate diodes on the connection pins to prevent external voltages from damaging their sensitive transistors. Specialized diodes are used to protect from over-voltages at higher power (see Diode types above).
Ionizing radiation detectors
In addition to light, mentioned above, semiconductor diodes are sensitive to more energetic radiation. In electronics, cosmic rays and other sources of ionizing radiation cause noise pulses and single and multiple bit errors. This effect is sometimes exploited by particle detectors to detect radiation. A single particle of radiation, with thousands or millions of electron volts of energy, generates many charge carrier pairs, as its energy is deposited in the semiconductor material. If the depletion layer is large enough to catch the whole shower or to stop a heavy particle, a fairly accurate measurement of the particle's energy can be made, simply by measuring the charge conducted and without the complexity of a magnetic spectrometer, etc. These semiconductor radiation detectors need efficient and uniform charge collection and low leakage current. They are often cooled by liquid nitrogen. For longer-range (about a centimetre) particles, they need a very large depletion depth and large area. For short-range particles, they need any contact or un-depleted semiconductor on at least one surface to be very thin. The back-bias voltages are near breakdown (around a thousand volts per centimetre). Germanium and silicon are common materials. Some of these detectors sense position as well as energy. They have a finite life, especially when detecting heavy particles, because of radiation damage. Silicon and germanium are quite different in their ability to convert gamma rays to electron showers.
Semiconductor detectors for high-energy particles are used in large numbers. Because of energy loss fluctuations, accurate measurement of the energy deposited is of less use.
A diode can be used as a temperature measuring device, since the forward voltage drop across the diode depends on temperature, as in a silicon bandgap temperature sensor. From the Shockley ideal diode equation given above, it might appear that the voltage has a positive temperature coefficient (at a constant current), but usually the variation of the reverse saturation current term is more significant than the variation in the thermal voltage term. Most diodes therefore have a negative temperature coefficient, typically −2 mV/°C for silicon diodes at room temperature. This is approximately linear for temperatures above about 20 kelvins.
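A minimal sketch of the idea, assuming the textbook −2 mV/°C coefficient and an assumed forward drop of 0.65 V at 25 °C (a real sensor would be calibrated against its own diode):

```python
# Hedged sketch of using a silicon diode's forward drop as a thermometer.
# The coefficient and the 0.65 V reference point are typical textbook values,
# not data for any specific part.
TEMP_COEFF_V_PER_C = -0.002      # about -2 mV per degree Celsius
V_REF = 0.650                    # assumed forward drop at the reference temperature
T_REF_C = 25.0                   # reference temperature, degrees Celsius

def diode_temperature_c(v_forward: float) -> float:
    """Estimate junction temperature from a measured forward voltage (constant current)."""
    return T_REF_C + (v_forward - V_REF) / TEMP_COEFF_V_PER_C

print(diode_temperature_c(0.650))   # 25.0 at the reference point
print(diode_temperature_c(0.610))   # about 45 degrees C (lower drop means a hotter junction)
```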
Diodes will prevent currents in unintended directions. To supply power to an electrical circuit during a power failure, the circuit can draw current from a battery. An uninterruptible power supply may use diodes in this way to ensure that current is only drawn from the battery when necessary. Likewise, small boats typically have two circuits each with their own battery/batteries: one used for engine starting; one used for domestics. Normally, both are charged from a single alternator, and a heavy-duty split-charge diode is used to prevent the higher-charge battery (typically the engine battery) from discharging through the lower-charge battery when the alternator is not running.
Diodes are also used in electronic musical keyboards. To reduce the amount of wiring needed in electronic musical keyboards, these instruments often use keyboard matrix circuits. The keyboard controller scans the rows and columns to determine which note the player has pressed. The problem with matrix circuits is that, when several notes are pressed at once, the current can flow backwards through the circuit and trigger "phantom keys" that cause "ghost" notes to play. To avoid triggering unwanted notes, most keyboard matrix circuits have diodes soldered with the switch under each key of the musical keyboard. The same principle is also used for the switch matrix in solid-state pinball machines.
Two-terminal nonlinear devices
Many other two-terminal nonlinear devices exist; for example, a neon lamp has two terminals in a glass envelope and has interesting and useful nonlinear properties. Lamps including arc-discharge lamps, incandescent lamps, fluorescent lamps and mercury vapor lamps have two terminals and display nonlinear current–voltage characteristics.
In our everyday experience, waves are formed by motion within a medium. Waves come in different varieties. Ocean waves and sound waves roll outward from a source through the medium of water and air. A violin string waves back and forth along its length, held in place at the two ends of the medium, which is the violin string. A jerk on a loose rope will send a wave rolling along its length.
In 1802, Thomas Young demonstrated fairly convincingly that light had the properties of a wave. He did this by shining light through two slits, and noting that an interference pattern formed on a projection screen. Interference patterns are one of the signature characteristics of waves: two wave crests meeting will double in size; two troughs meeting will double in depth; a crest and a trough meeting will cancel each other out to flatness. As wave ripples cross, they create a recognizable pattern, exactly matching the pattern on Young's projection screen.
If light were made of particles, they would travel in straight lines from the source and hit the screen in two places.
If light traveled as waves, they would spread out, overlap, and form a distinctive pattern on the screen.
For most of the 19th century, physicists were convinced by Young's experiment that light was a wave. By implication, physicists were convinced that light must be traveling through some medium. The medium was dubbed "luminiferous ether," or just ether. Nobody knew exactly what it was, but the ether had to be there for the unshakably logical reason that without some medium, there could be no wave.
In 1887, Albert Michelson and E.W. Morley demonstrated fairly convincingly that there is no ether. This seemed to imply that there is no medium through which a light "wave" travels, and so there is no medium that can even form a light "wave." If this is true, how can we see evidence of waves at all? Ordinary waves of whatever sort require a medium in order to exist. The Michelson-Morley experiment should have had the effect of draining the bathtub: what kind of waves can you get with an empty bathtub? Yet the light waves still seemed to show up in the Young double slit experiment.
Without the medium, there is no wave. Only a *klunk*.
In 1905, Albert Einstein showed that the mathematics of light, and its observed constancy of speed, allowed one to make all necessary calculations without ever referring to any medium. He therefore did away with the ether as a concept in physics because it had no mathematical significance. He did not, however, explain how a wave can exist without a medium. From that point on, physicists simply put the question on the far back burner. As Michio Kaku puts it, "over the decades we [physicists] have simply gotten used to the idea that light can travel through a vacuum even if there is nothing to wave."
The matter was further complicated in the 1920s when it was shown that objects -- everything from electrons to the chair on which you sit -- exhibit exactly the same wave properties as light, and suffer from exactly the same lack of any medium.
The First Computer Analogy. One way to resolve this seeming paradox of waves without medium is to note that there remains another kind of wave altogether. A wave with which we are all familiar, yet which exists without any medium in the ordinary sense. This is the computer-generated wave. Let us examine a computer-generated sound wave.
Imagine the following set up. A musician in a recording studio plays a synthesizer, controlled by a keyboard. It is a digital synthesizer which uses an algorithm (programming) to create nothing more than a series of numbers representing what a sampling of points along the desired sound wave would look like if it were played by a "real" instrument. The synthesizer's output is routed to a computer and stored as a series of numbers. The numbers are burned into a disk as a series of pits that can be read by a laser -- in other words, a CD recording. The CD is shipped to a store. You buy the CD, bring it home, put it in your home entertainment system, and press the play button. The "music" has traveled from the recording studio to your living room. Through what medium did the music wave travel? To a degree, you might say that it traveled as electricity through the wires from the keyboard to the computer. But you might just as well say it traveled by truck along the highway to the store. In fact, this "sound wave" never existed as anything more than a digital representation of a hypothetical sound wave which itself never existed. It is, first and last, a string of numbers. Therefore, although it will produce wave-like effects when placed in your stereo, this wave never needed any medium other than the computer memory to spread itself all over the music-loving world. As you can tell from your CD collection, computers are very good at generating, storing, and regenerating waves in this fashion.
Calculations from an equation [here, y = sin (x) + sin (2.5 x)] produce a string of numbers, i.e., 1, 1.5, 0.4, 0, 0.5, 1.1, 0.3, -1.1, -2, -1.1, 0.1, and 0.5.
These numbers can be graphed to create a picture of the wave that would be created by combining (interfering) the two simple sine waves.
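In the same spirit, the wave can be regenerated from nothing but its defining formula. The sketch below samples y = sin(x) + sin(2.5x) at an arbitrarily chosen spacing, so the particular numbers differ from those listed above; the point is that the stored list of numbers is all there ever is to this "wave".

```python
import math

# Regenerate the illustrative wave y = sin(x) + sin(2.5 x) as a plain list of numbers,
# in the spirit of the digital-synthesizer example above. The sample spacing (0.5) is arbitrary.
xs = [i * 0.5 for i in range(12)]
samples = [math.sin(x) + math.sin(2.5 * x) for x in xs]
print([round(s, 2) for s in samples])
# The stored numbers are the wave; a player or plotter only reconstructs its shape from them.
```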
By analogizing to the operations of a computer, we can do away with all of the conceptual difficulties that have plagued physicists as they try to describe how a light wave -- or a matter wave -- can travel or even exist in the absence of any medium.
B. Waves of calculation, not otherwise manifest, as though they really were differential equations
The more one examines the waves of quantum mechanics, the less they resemble waves in a medium. In the 1920s, Erwin Schrodinger set out a formula which could "describe" the wave-like behavior of all quantum units, be they light or objects. The formula took the form of an equation not so very different from the equations that describe sound waves or harmonics or any number of things with which Isaac Newton would have been comfortable. For a brief time, physicists sought to visualize these quantum waves as ordinary waves traveling through some kind of a medium (nobody knew what kind) which somehow carried the quantum properties of an object. Then Max Born pointed out something quite astonishing: the simple interference of these quantum waves did not describe the observed behaviors; instead, the waves had to be interfered and the mathematical results of the interference had to be further manipulated (by "squaring" them, i.e., by multiplying the results by themselves) in order to achieve the final probability characteristic of all quantum events. It is a two-step process, the end result of which requires mathematical manipulation. The process cannot be duplicated by waves alone, but only by calculations based on numbers which cycle in the manner of waves.
From Born, the Schrodinger wave became known as a probability wave (although actually it is a cycling of potentialities which, when squared, yield a probability). Richard Feynman developed an elegant model for describing the amplitude (height or depth representing the relative potentiality) of the many waves involved in a quantum event, calculating the interference of all of these amplitudes, and using the final result to calculate a probability. However, Feynman disclaimed any insight into whatever physical process his system might be describing. Although his system achieved a result that was exactly and perfectly in accord with observed natural processes, to him it was nothing more than calculation. The reason was that, as far as Feynman or anybody else could tell, the underlying process itself was nothing more than calculation.
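The two-step procedure is easy to state in code. In the sketch below the two complex amplitudes are illustrative numbers only; the point is the order of operations: add the amplitudes first, then square the magnitude. Squaring each amplitude separately and then adding (the "one slit or the other" calculation) gives a different, interference-free answer.

```python
import cmath

# Illustrative amplitudes for the two slits at one detector point (made-up values).
amp_slit1 = cmath.rect(0.5, 0.0)    # magnitude 0.5, phase 0 radians
amp_slit2 = cmath.rect(0.5, 2.5)    # magnitude 0.5, phase 2.5 radians (a path difference)

# Step 1: interfere (add) the amplitudes.  Step 2: square the magnitude.
p_interference = abs(amp_slit1 + amp_slit2)**2

# Squaring first and then adding corresponds to "the particle went through one
# slit or the other"; it contains no interference cross-term.
p_no_interference = abs(amp_slit1)**2 + abs(amp_slit2)**2

print(p_interference, p_no_interference)   # the two prescriptions disagree
```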
The Second Computer Analogy. A process that produces a result based on nothing more than calculation is an excellent way to describe the operations of a computer program. The two-step procedure of the Schrodinger equation and the Feynman system may be impossible to duplicate with physical systems, but for the computer it is trivial. That is what a computer does -- it manipulates numbers and calculates. (As we will discuss later, the computer must then interpret and display the result to imbue it with meaning that can be conveyed to the user.)
Wave summary. Quantum mechanics involves "waves" which cannot be duplicated or even approximated physically; but which easily can be calculated by mathematical formula and stored in memory, creating in effect a static map of the wave shape. This quality of something having the appearance and effect of a wave, but not the nature of a wave, is pervasive in quantum mechanics, and so is fundamental to all things in our universe. It is also an example of how things which are inexplicable in physical terms turn out to be necessary or convenient qualities of computer operations.
II. The Measurement Effect
A. "Collapse of the wave function" -- consciousness as mediator, as though the sensory universe was a display to the user
During the course of an observation of a quantum event, the wave-like nature of the quantum unit is not observed. The evidence for the existence of quantum waves is entirely inferential, derived from such phenomena as the interference pattern on Mr. Young's projection screen. After analyzing such a phenomenon, the conclusion is that the only thing that could cause such a pattern is a wave. ("It is as if two waves were interfering.") However, actual observation always reveals instead a particle. For example, as instruments were improved, it turned out that the interference pattern observed by Young was created not by a constant sloshing against the projection screen, but by one little hit at a time, randomly appearing at the projection screen in such a way that over time the interference pattern built up. "Particles" of light were being observed as they struck the projection screen; but the eventual pattern appeared to the eye, and from mathematical analysis, to result from a wave.
Particles of Light
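The hit-by-hit build-up of the pattern can be mimicked with a short simulation. The geometry below (slit spacing, screen distance, wavelength) is invented purely for illustration: each simulated "particle" lands at one random screen position, drawn from the squared-amplitude distribution, and the fringes only emerge in the accumulated counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-slit geometry (arbitrary units)
x = np.linspace(-30, 30, 601)        # positions along the projection screen
d, L, wavelength = 5.0, 100.0, 1.0   # slit separation, screen distance, wavelength
k = 2 * np.pi / wavelength

# Squared magnitude of the summed amplitudes from the two slits gives a probability
r1 = np.sqrt(L**2 + (x - d / 2)**2)
r2 = np.sqrt(L**2 + (x + d / 2)**2)
prob = np.abs(np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2)**2
prob /= prob.sum()

# Each detection is a single random position; the pattern exists only statistically
hits = rng.choice(x, size=5000, p=prob)
counts, _ = np.histogram(hits, bins=60, range=(-30, 30))
print(counts)   # alternating high/low counts: the fringes build up one hit at a time
```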
This presents conceptual difficulties that are almost insurmountable as we attempt to visualize a light bulb (or laser or electron gun) emitting a particle at the source location, which immediately dissolves into a wave as it travels through the double slits, and which then reconstitutes itself into a particle at the projection screen, usually at a place where the (presumed) overlapping wave fronts radiating from the two slits reinforce each other. What is more, this is only the beginning of the conceptual difficulties with this phenomenon.
Investigating the mechanics of this process turns out to be impossible, for the reason that whenever we try to observe or otherwise detect a wave we obtain, instead, a particle. The very act of observation appears to change the nature of the quantum unit, according to conventional analysis. Variations on the double slit experiment provide the starkest illustration.
If we assume that quantum units are particles, it follows that the particle must travel from the emission source, through one slot or the other, and proceed to the projection screen. Therefore, we should be able to detect the particle mid-journey, i.e., at one slot or the other. The rational possibilities are that the particle would be detected at one slot, the other slot, or both slots.
Experiment shows that the particle in fact is detected always at one slot or the other slot, never at both slots, seeming to confirm that we are indeed dealing with particles.
Placing electron detectors at the slots.
However, a most mysterious thing happens when we detect these particles at the slots: the interference pattern disappears and is replaced by a clumping in line with the source and the slots. Thus, if we thought that some type of wave was traveling through this space in the absence of observation, we find instead a true particle upon observation -- a particle which behaves just like a particle is supposed to behave, to the point even of traveling in straight lines like a billiard ball.
Results if electrons are detected at the slots.
To further increase the mystery, it appears that the change from wave to particle takes place not upon mechanical interaction with the detecting device, but upon a conscious being's acquiring the knowledge of the results of the attempt at detection. Although not entirely free from doubt, experiment seems to indicate that the same experimental set up will yield different results (clumping pattern or interference pattern at the projection screen) depending entirely on whether the experimenter chooses to learn the results of the detection at the slits or not. This inexplicable change in behavior has been called the central mystery of quantum mechanics.
Results if electrons are NOT detected at the slots.
At the scientific level, the question is "how?" The conventional way of describing the discrepancy between analysis and observation is to say that the "wave function" is somehow "collapsed" during observation, yielding a "particle" with measurable properties. The mechanism of this transformation is completely unknown and, because the scientifically indispensable act of observation itself changes the result, it appears to be intrinsically and literally unknowable.
At the philosophical level, the question is "why?" Why should our acquisition of knowledge affect something which, to our way of thinking, should exist in whatever form it exists whether or not it is observed? Is there something special about consciousness that relates directly to the things of which we are conscious? If so, why should that be?
The computer analogy. As John Gribbin puts it, "nature seems to 'make the calculation' and then present us with an observed event." Both the "how" and the "why" of this process can be addressed through the metaphor of a computer which is programmed to project images to create an experience for the user, who is a conscious being.
The "how" is described structurally by a computer which runs a program. The program provides an algorithm for determining the position (in this example) of every part of the image, which is to say, every pixel that will be projected to the user. The mechanism for transforming the programming into the projection is the user interface. This can be analogized to the computer monitor, and the mouse or joystick or other device for viewing one part of the image or another. When the user chooses to view one part of the image, those pixels must be calculated and displayed; all other parts of the image remain stored in the computer as programming. Thus, the pixels being viewed must follow the logic of the projection, which is that they should move like particles across the screen. The programming representing the parts of the image not being displayed need not follow this logic, and may remain as formulas. Calculating and displaying any particular pixel is entirely a function of conveying information to the user, and it necessarily involves a "change" from the inchoate mathematical relationships represented by the formula to the specific pixel generated according to those relationships. The user can never "see" the programming, but by analysis can deduce its mathematical operation by careful observation of the manner in which the pixels are displayed. The algorithm does not collapse into a pixel; rather, the algorithm tells the monitor where and how to produce the pixel for display to the user according to which part of the image the user is viewing.
The "why" is problematical in the cosmic sense, but is easily stated within the limits of our computer metaphor. The programming produces images for the user because the entire set up was designed to do just that: to present images to a user (viewer) as needed by the user. The ultimate "why" depends on the motivation of the designer. In our experience, the maker of a video game seeks to engage the attention of the user to the end that the user will spend money for the product and generate profits for the designer. This seems an unlikely motivation for designing the universe simulation in
which we work and play.
B. Uncertainty and complementary properties, as though variables were being redefined and results calculated and recalculated according to an underlying formula
We have seen one aspect of the measurement effect, which is that measurement (or observation) appears to determine whether a quantum unit is displayed or projected to the user (as a "particle"), or whether instead the phenomenon remains inchoate, unobserved, behaving according to a mathematical algorithm (as a "wave"). There is another aspect of measurement that relates to the observed properties of the particle-like phenomenon as it is detected. This is the famous Heisenberg uncertainty principle.
As with all aspects of quantum mechanics, the uncertainty principle is not a statement of philosophy, but rather a mathematical model which is exacting and precise. That is, we can be certain of many quantum measurements in many situations, and we can be completely certain that our results will conform to quantum mechanical principles. In quantum mechanics, the "uncertainty principle" has a specific meaning, and it describes the relationship between two properties which are "complementary," that is, which are linked in a quantum mechanical sense (they "complement" each other, i.e., they are counterparts, each of which makes the other "complete").
The original example of complementary properties was the relationship between position and momentum. According to classical Newtonian physics and to common sense, if an object simply exists we should be able to measure both where it is and how fast it is moving. Measuring these two properties would allow us to predict where the object will be in the future. In practice, it turns out that both position and momentum cannot be exactly determined at the same moment -- a discovery that threw a monkey wrench into the clockwork predictability of the universe. Put simply, the uncertainty relationship is this: for any two complementary properties, any increase in the certainty of knowledge of one property will necessarily lead to a decrease in the certainty of knowledge of the other property.
The uncertainty principle was originally thought to be more statement of experimental error than an actual principle of any great importance. When scientists were measuring the location and the speed (or, more precisely, the momentum) of a quantum unit -- two properties which turn out to be complementary -- they found that they could not pin down both at once. That is, after measuring momentum, they would determine position; but then they found that the momentum had changed. The obvious explanation was that, in determining position, they had bumped the quantum unit and thereby changed its momentum. What they needed (so they thought) were better, less intrusive instruments. On closer inspection, however, this did not turn out to be the case. The measurements did not so much change the momentum, as they made the momentum less certain, less predictable. On remeasurement, the momentum might be the same, faster, or slower. What is more, the range of uncertainty of momentum increased in direct proportion to the accuracy of the measurement of location.
In 1925, Werner Heisenberg conducted a mathematical analysis of the position and momentum of quantum units. His results were surprising, in that they showed a mathematical incompatibility between the two properties. Heisenberg was able to state that there was a mathematical relationship between the properties p (position) and m (momentum), such that the more precise your knowledge of the one, the less precise your knowledge of the other. This "uncertainty" followed a formula which, itself, was quite certain: in modern notation, the product of the two uncertainties can never fall below a fixed constant, Δposition · Δmomentum ≥ ħ/2, where ħ is Planck's constant divided by 2π. Heisenberg's mathematical formula accounted for the experimental results far, far more accurately than any notion of needing better equipment in the laboratory. It seems, then, that uncertainty in the knowledge of two complementary properties is more than a laboratory phenomenon -- it is a law of nature which can be expressed mathematically.
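The relation can be checked numerically for a concrete wave shape. The sketch below builds a Gaussian wave packet on a grid (in units where ħ = 1), computes the spread in position directly and the spread in momentum via a Fourier transform, and confirms that their product sits at the Heisenberg limit of ħ/2; the grid size and packet width are arbitrary choices.

```python
import numpy as np

hbar = 1.0                                   # work in units where hbar = 1
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.5                                   # width of the Gaussian wave packet (arbitrary)
psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian amplitude
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# Position spread from the probability density |psi|^2
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum-space distribution via FFT; p = hbar * k
phi = np.fft.fftshift(np.fft.fft(psi))
p = hbar * np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi
dp = p[1] - p[0]
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(delta_x, delta_p, delta_x * delta_p, hbar / 2)
# For a Gaussian packet the product comes out numerically at the minimum, hbar/2.
```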
A good way to understand the uncertainty principle is to take the extreme cases. As we will discuss later on, a distinguishing feature of quantum units is that many of their properties come in whole units and whole units only. That is, many quantum properties have an either/or quality such that there is no in between: the quantum unit must be either one way or the other. We say that these properties are "quantized," meaning that the property must be one specific value (quantity) or another, but never anything else. When the uncertainty principle is applied to two complementary properties which are themselves quantized, the result is stark. Think about it. If a property is quantized, it can only be one way or the other; therefore, if we know anything about this property, we know everything about this property.
There are few, if any, properties in our day to day lives that can be only one way or the other, never in between. If we leave aside all quibbling, we might suggest the folk wisdom that "you can't be a little bit pregnant." A woman either is pregnant, or she is not pregnant. Therefore, if you know that the results of a reliable pregnancy test are positive, you know everything there is to know about her pregnancy property: she is pregnant. For a "complementary" property to pregnancy, let us use marital status. (In law, you are either married or not married, with important consequences for bigamy prosecutions.)
The logical consequence of knowing everything about one complementary property is that, as a law of nature, we then would know nothing about the other complementary property. For our example, we must imagine that, by learning whether a married woman is pregnant, we thereby no longer know whether she is married. We don't forget what we once knew; we just can no longer be certain that we will get any particular answer the next time we check on her marital status. The mathematical statement is that, by knowing pregnancy, you do not know whether she is married; and, by knowing marital status, you do not know whether she is pregnant. In order to make this statement true, if you once know her marital status, and you then learn her pregnancy status (without having you forget your prior knowledge of marital status), the very fact of her marital status must become random yes or no. A definite maybe.
What is controlling is your state of certainty about one property or the other. In just this way, the experimentalist sees an electron or some other quantum unit whose properties depend on the experimentalist's knowledge or certainty of some other complementary property.
A computer's data. If we cease to think of the quantum unit as a "thing," and begin to imagine it as a pixel, that is, as a display of information in graphic (or other sensory) form, it is far easier to conceive of how the uncertainty principle might work. The "properties" we measure are variables which are computed for the purpose of display, which is to say, for the purpose of giving the user knowledge via the interface. A computed variable will display according to the underlying algorithm each time it is computed, and while the algorithm remains stable, the results of a particular calculation can be made to depend on some other factor, including another variable.
It would be far easier to understand our changing impressions of the hypothetical woman if we knew that, although she appeared to be a person like ourselves, in fact she was a computer projection. As a computer projection, she could be pregnant or not pregnant, married or single, according to whatever rules the computer might be using to create her image.
Complementary properties are simply paired variables, the calculation of which depends on the state of the other. Perhaps they share a memory location, so that when one variable is calculated and stored, it displaces whatever value formerly occupied that location; then the other variable would have to be calculated anew the next time it was called for. In this way, or in some analogous way, we can see that the appearance of a property does not need to be related to the previously displayed value of the property, but only to the underlying algorithm.
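To make the shared-memory-location analogy concrete, here is a toy sketch in Python. It is purely illustrative: the class name, the two property names, and the coin-flip "recomputation" are inventions for this example and are not taken from the essay (or from quantum mechanics itself).

```python
import random

class ComplementaryPair:
    """Toy model of two 'complementary' quantized properties that share a
    single storage slot. Reading one property stores its value there,
    displacing the other, which must then be recomputed (here: at random)
    the next time it is asked for."""

    def __init__(self):
        self._slot = None                      # (property_name, value) or None

    def _measure(self, name):
        if self._slot is not None and self._slot[0] == name:
            return self._slot[1]               # already stored: repeat the same answer
        value = random.choice([True, False])   # recomputed afresh
        self._slot = (name, value)             # displaces the complementary property
        return value

    def pregnant(self):
        return self._measure("pregnant")

    def married(self):
        return self._measure("married")

pair = ComplementaryPair()
print(pair.married(), pair.married())   # repeated readings agree with each other...
print(pair.pregnant())                  # ...until the complementary property is read,
print(pair.married())                   # after which 'married' is effectively random again
```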
III. The Identical/Interchangeable Nature of "Particles" and Measured Properties.
As though the "particles" were merely pictures of particles, like computer icons.
Quantum units of the same type are identical. Every electron is exactly the same as every other electron; every photon the same as every other photon; etc. How identical are they? So identical that Feynman was able seriously to propose that all the electrons and positrons in the universe actually are the same electron/positron, which merely has zipped back and forth in time so often that we observe it once for each of the billions of times it crosses our own time, so it seems like we are seeing billions of electrons. If you were to study an individual quantum unit from a collection, you would find nothing to distinguish it from any other quantum unit of the same type. Nothing whatsoever. Upon regrouping the quantum units, you could not, even in principle, distinguish which was the unit you had been studying and which was another.
The complete and utter sameness of each electron (or other quantum unit) has a number of consequences in physics. If the mathematical formula describing one electron is the same as that describing another electron, then there is no method, even in principle, of telling which is which. This means, for example, that if you begin with two quantum electrons at positions A and B, and move them to positions C and D, you cannot state whether they traveled the paths A to C and B to D, or A to D and B to C. In such a situation, there is no way to identify the electron at an end position with one or the other of the electrons at a beginning position; therefore, you must allow for the possibility that each electron at A and B arrived at either C or D. This impacts on the math predicting what will happen in any given quantum situation and, as it turns out, the final probabilities agree with this interchangeable state of affairs.
The computer analogy. Roger Penrose has likened this sameness to the images produced by a computer. Imagine the letter "t." On the page you are viewing, the letter "t" appears many times. Every letter t is exactly like every other letter t. That is because on a computer, the letter t is produced by displaying a particular set of pixels on the screen. You could not, even in principle, tell one from the other because each is the identical image of a letter t. The formula for this image is buried in many layers of subroutines for displaying pixels, and the image does not change regardless of whether it is called upon to form part of the word "mathematical" or "marital".
Similarly, an electron does not change regardless of whether it is one of the two electrons associated with the helium atom, or one of the ninety-two electrons associated with the uranium atom. You could not, even in principle, tell one from another. The only way in this world to create such identical images is to use the same formula to produce the same image, over and over again whenever a display of the image is called for.
IV. Continuity and Discontinuity in Observed Behaviors
A. "Quantum leaps," as though there was
no time or space between quantum
In our experience, things move from one end to the other by going through the middle; they get from cold to hot by going through warm; they get from slow to fast by going through medium; and so on. Phenomena move from a lower state to a higher state in a ramp-like fashion -- continuously increasing until they reach the higher state. Even if the transition is quick, it still goes through all of the intermediate states before reaching the new, higher state.
In quantum mechanics, however, there is no transition at all. Electrons are in a low energy state on one observation, and in a higher energy state on the next; they spin one way at first, and in the opposite direction next. The processes proceed step-wise; but more than step-wise, there is no time or space in which the process exists in any intermediate state.
It is a difficult intellectual challenge to imagine a physical object that can change from one form into another form, or move from one place to another place, without going through any transition between the two states. Zeno's paradoxes offer a rigorously logical examination of this concept, with results that have frustrated analysts for millennia. In brief, Zeno appears to have "proved" that motion is not possible, because continuity (smooth transitions) between one state and the next implies an infinite number of transitions to accomplish any change whatsoever. Zeno's paradoxes imply that space and time are discontinuous -- discrete points and discrete instants with nothing in between, not even nothing. Yet the mind reels to imagine space and time as disconnected, always seeking to understand what lies between two points or two instants which are said to be separate.
The pre-computer analogy. Before computer animation there was the motion picture. Imagine that you are watching a movie. The motion on the screen appears to be smooth and continuous. Now, the projectionist begins to slow the projection rate. At some point, you begin to notice a certain jerkiness in the picture. As the projection rate slows, the jerkiness increases, and you are able to focus on one frame of the movie, followed by a blanking of the screen, followed by the next frame of the movie. Eventually, you see that the motion which seemed so smooth and continuous when projected at 30 frames per second or so is really only a series of still shots. There is no motion in any of the pictures, yet by rapidly flashing a series of pictures depicting intermediate positions of an actor or object, the effective illusion is one of motion.
The computer analogy. Computers create images in the same manner. First, they compose a still image and project it; then they compose the next still image and project that one. If the computer is quick enough, you do not notice any transition. Nevertheless, the computer's "time" is completely discrete, discontinuous, and digital. One step at a time.
Similarly, the computer's "space" is discrete, discontinuous, and digital. If you look closely at a computer monitor, you notice that it consists of millions of tiny dots, nothing more. A beautifully rendered image is made up of these dots.
The theory and architecture of computers lend themselves to a step-by-step approach to any and all problems. It appears that there is no presently conceived computer architecture that would allow anything but such a discrete, digitized time and space, controlled by the computer's internal clock ticking one operation at a time. Accordingly, it seems that this lack of continuity, so bizarre and puzzling as a feature of our natural world, is an inherent characteristic of a computer simulation.
B. The breakdown at zero, yielding infinities, as though the universe was being run by a computer clock on a coordinate grid.
Quantum theory assumes that space and time are continuous. This is simply an assumption, not a necessary part of the theory. However, this assumption has raised some difficulties when performing calculations of quantum mechanical phenomena. Chief among these is the recurring problem of infinities.
In quantum theory, all quantum units which appear for the purpose of measurement are conceived of as dimensionless points. These are assigned a place on the coordinate grid, described by the three numbers of height, depth, and width as we have seen, but they are assigned only these three numbers. By contrast, if you consider any physical object, it will have some size, which is to say it will have its own height, width, and depth. If you were to exactly place such a physical object, you would have to take into account its own size, and to do so you would have to assign coordinates to each edge of the object.
When physicists consider quantum units as particles, there does not seem to be any easy way to determine their outer edges, if, in fact, they have any outer edges. Accordingly, quantum "particles" are designated as simple points, without size and, therefore, without edges. The three coordinate numbers are then sufficient to locate such a pointlike particle at a single point in space.
The difficulty arises when the highly precise quantum calculations are carried out all the way down to an actual zero distance (which is the size of a dimensionless point -- zero height, zero width, zero depth). At that point [sic], the quantum equations return a result of infinity, which is as meaningless to the physicist as it is to the philosopher. This result gave physicists fits for some twenty years (which is not really so long when you consider that the same problem had been giving philosophers fits for some twenty-odd centuries). The quantum mechanical solution was made possible when it was discovered that the infinities disappeared if one stopped at some arbitrarily small distance -- say, a billionth-of-a-billionth-of-a-billionth of an inch -- instead of proceeding all the way to an actual zero. One problem remained, however, and that was that there was no principled way to determine where one should stop. One physicist might stop at a billionth-of-a-billionth-of-a-billionth of an inch, and another might stop at only a thousandth-of-a-billionth of-a-billionth of an inch. The infinities disappeared either way. The only requirement was to stop somewhere short of the actual zero point. It seemed much too arbitrary. Nevertheless, this mathematical quirk eventually gave physicists a method for doing their calculations according to a process called "renormalization," which allowed them to keep their assumption that an actual zero point exists, while balancing one positive infinity with another negative infinity in such a way that all of the infinities cancel each other out, leaving a definite, useful number.
In a strictly philosophical mode, we might suggest that all of this is nothing more than a revisitation of Zeno's Achilles paradox of dividing space down to infinity. The philosophers couldn't do it, and neither can the physicists. For the philosopher, the solution of an arbitrarily small unit of distance -- any arbitrarily small unit of distance -- is sufficient for the resolution of the paradox. For the physicist, however, there should appear some reason for choosing one small distance over another. None of the theoretical models have presented any compelling reason for choosing any particular model as the "quantum of length." Because no such reason appears, the physicist resorts to the "renormalization" process, which is profoundly dissatisfying to both philosopher and physicist. Richard Feynman, who won a Nobel prize for developing the renormalization process, himself describes the procedure as "dippy" and "hocus-pocus." The need to resort to such a mathematical sleight-of-hand to obtain meaningful results in quantum calculations is frequently cited as the most convincing piece of evidence that quantum theory -- for all its precision and ubiquitous application -- is somehow lacking, somehow missing something. It may be that one missing element is quantized space -- a shortest distance below which there is no space, and below which one need not calculate. The arbitrariness of choosing the distance would be no more of a theoretical problem than the arbitrariness of the other fundamental constants of nature -- the speed of light, the quantum of action, and the gravitational constant. None of these can be derived from theory, but are simply observed to be constant values. Alas, this argument will not be settled until we can make far more accurate measurements than are possible today.
Quantum time. If space is quantized, then time almost surely must be quantized also. This relationship is implied by the theory of relativity, which supposes that time and space are so interrelated as to be practically the same thing. Thus, relativity is most commonly understood to imply that space and time cannot be thought of in isolation from each other; rather, we must analyze our world in terms of a single concept -- "space-time." Although the theory of relativity is largely outside the scope of this essay, the reader can see from Zeno's paradoxes how space and time are intimately related in the analysis of motion. For the moment, I will only note that the theory of relativity significantly extends this view, to the point where space and time may be considered two sides of the same coin.
The idea of "quantized" time has the intellectual virtue of consistency within the framework of quantum mechanics. That is, if the energies of electron units are quantized, and the wavelengths of light are quantized, and so many other phenomena are quantized, why not space and time? Isn't it easier to imagine how the "spin" of an electron unit can change from up to down without going through anything in the middle if we assume a quantized time? With quantized time, we may imagine that the change in such an either/or property takes place in one unit of time, and that, therefore, there is no "time" at which the spin is anywhere in the middle. Without quantized time, it is far more difficult to eliminate the intervening spin directions.
Nevertheless, the idea that time (as well as space) is "quantized," i.e., that time comes in individual units, is still controversial. The concept has been seriously proposed on many occasions, but most current scientific theories do not depend on the nature of time in this sense. About all scientists can say is that if time is not continuous, then the changes are taking place too rapidly to measure, and too rapidly to make any detectable difference in any experiment that they have dreamed up. The theoretical work that has been done on the assumption that time may consist of discontinuous jumps often focuses on the most plausible scale, which is related to the three fundamental constants of nature -- the speed of light, the quantum of action, and the gravitational constant. This is sometimes called the "Planck scale," involving the "Planck time," after the German physicist Max Planck, who laid much of the foundation of quantum mechanics through his study of minimum units in nature. On this theoretical basis, the pace of time would be around 10^-44 seconds. That is roughly a hundred-millionth of a billionth of a billionth of a billionth of a billionth of a second. And that is much too quick to measure by today's methods, or by any method that today's scientists are able to conceive of, or even hope for.
Mixing philosophy, science, time, and space. We see that the branch of physics known as relativity has been remarkably successful in its conclusion that space and time are two sides of the same coin, and should properly be thought of as a single entity: space-time. We see also that the philosophical logic of Zeno's paradoxes has always strongly implied that both space and time are quantized at some smallest, irreducible level, but that this conclusion has long been resisted because it did not seem to agree with human experience in the "real world." Further, we see that quantum mechanics has both discovered the ancient paradoxes anew in its mathematics, and provided some evidence of quantized space and time in its essential experimental results showing that "physical" processes jump from one state to the next without transition. The most plausible conclusion to be drawn from all of this is that space and time are, indeed, quantized. That is, there is some unit of distance or length which can be called "1," and which admits no fractions; and, similarly, there is some unit of time which can be called "1," and which also admits no fractions.
Although most of the foregoing is mere argument, it is compelling in its totality, and it is elegant in its power to resolve riddles both ancient and modern. Moreover, if we accept the quantization of space and time as a basic fact of the structure of our universe, then we may go on to consider how both of these properties happen to be intrinsic to the operations of a computer, as discussed above at Point IV(A).
V. Non-Locality
As though all calculations were in the CPU, regardless of the location of the pixels on the screen.
A second key issue in quantum mechanics is the phenomenon of connectedness -- the ancient concept that all things are one -- because science has come increasingly to espouse theories that are uncannily related to this notion. In physics, this phenomenon is referred to as non-locality.
The essence of a local interaction is direct contact -- as basic as a punch in the nose. Body A affects body B locally when it either touches B or touches something else that touches B. A gear train is a typical local mechanism. Motion passes from one gear wheel to another in an unbroken chain. Break the chain by taking out a single gear and the movement cannot continue. Without something there to mediate it, a local interaction cannot cross a gap.
On the other hand, the essence of non locality is unmediated action-at-a-distance. A non-local interaction jumps from body A to body B without touching anything in between. Voodoo injury is an example of a non-local interaction. When a voodoo practitioner sticks a pin in her doll, the distant target is (supposedly) instantly wounded, although nothing actually travels from doll to victim. Believers in voodoo claim that an action here causes an effect there; that's all there is to it. Without benefit of mediation, a non-local interaction effortlessly flashes across the void.
Even "flashes across the void" is a bit misleading, because "flashing" implies movement, however quick, and "across" implies distance traveled, however empty. In fact, non-locality simply does away with speed and distance, so that the cause and effect simply happen. Contrary to common sense or scientific sensibility, it appears that under certain circumstances an action here on earth can have immediate consequences across the world, or on another star, or clear across the universe. There is no apparent transfer of energy at any speed, only an action here and a consequence there.
Non-locality for certain quantum events was theorized in the 1930s as a result of the math. Many years were wasted (by Einstein, among others) arguing that such a result was absurd and could not happen regardless of what the math said. In the 1960s, the theory was given a rigorous mathematical treatment by John S. Bell, who showed that if quantum effects were "local" they would result in one statistical distribution, and if "non-local" in another distribution. In the 1970s and '80s, the phenomenon was demonstrated, based on Bell's theorem, by the actual statistical distribution of experiments. For those die-hard skeptics who distrust statistical proofs, the phenomenon appears recently to have been demonstrated directly at the University of Innsbruck.
More than any of the bizarre quantum phenomena observed since 1900, the phenomenon of non-locality caused some serious thought to be given to the question, "What is reality?" The question had been nagging since the 1920s, when the Copenhagen school asserted, essentially, that our conception of reality had to stop with what we could observe; deeper than that we could not delve and, therefore, we could never determine experimentally why we observe what we observe. The experimental proof of non-locality added nothing to this strange statement, but seemed to force the issue. The feeling was that if our side of the universe could affect the other side of the universe, then those two widely separated places must somehow be connected. Alternative explanations necessarily involved signals traveling backward in time so that the effect "causes the cause," which seemed far too contrived for most scientists' tastes. Accordingly, it was fair to ask whether apparent separations in space and time -- I'm in the living room, you're in the den -- are fundamentally "real"; or whether, instead, they are somehow an illusion, masking a deeper reality in which all things are one, sitting right on top of each other, always connected one to another and to all. This sounds suspiciously like mysticism, and the similarity of scientific and mystical concepts led to some attempts to import Eastern philosophy into Western science. Zukav, in particular, wants desperately to find a direct connection between science and Buddhism, but he would concede that the link remains to be discovered.
Note that the experimental results had been predicted on the basis of the mathematical formalism of quantum mechanics, and not from any prior experiments. That is, the formal mathematical description of two quantum units in certain circumstances implied that their properties thereafter would be connected regardless of separation in space or time (just as x + 2 = 4 implies that x = 2). It then turned out that these properties are connected regardless of separation in space or time. The experimentalists in the laboratory had confirmed that where the math can be manipulated to produce an absurd result, the matter and energy all around us obligingly will be found to behave in exactly that absurd manner. In the case of non-locality, the behavior is uncomfortably close to magic.
The computer analogy. The non-locality which appears to be a basic feature of our world also finds an analogy in the same metaphor of a computer simulation. In terms of cosmology, the scientific question is, "How can two particles separated by half a universe be understood as connected such that they interact as though they were right on top of each other?" If we analogize to a computer simulation, the question would be, "How can two pictures at the far corners of the screen be understood as connected such that the distance between them is irrelevant?"
In fact, the measured distance between any two pixels (dots) on the monitor's display turns out to be entirely irrelevant, since both are merely the products of calculations carried out in the bowels of the computer as directed by the programming. The pixels may be as widely separated as you like, but the programming generating them is forever embedded in the computer's memory in such a way that -- again speaking quite literally -- the very concept of separation in space and time of the pixels has no meaning whatsoever for the stored information.
VI. The Relationship of Observed Phenomena to the Mathematical Formalism
As though physical manifestations themselves were being produced by a mathematical formula.
Perhaps the most striking aspect of quantum theory is the relationship of all things to the math, as with the phenomenon of non-locality discussed above, which occurs in nature, so it seems, because that is the way the equations calculate. Even though the mathematical formulas were initially developed to describe the behavior of the universe, these formulas turn out to govern the behavior of the universe with an exactitude that defies our concept of mathematics. As Nick Herbert puts it, "Whatever the math does on paper, the quantumstuff does in the outside world." That is, if the math can be manipulated to produce some absurd result, it will always turn out that the matter and energy around us actually behave in exactly that absurd manner when we look closely enough. It is as though our universe is being produced by the mathematical formulas. The backwards logic implied by quantum mechanics, where the mathematical formalism seems to be more "real" than the things and objects of nature, is unavoidable. In any conceptual conflict between what a mathematical equation can obtain for a result, and what a real object actually could do, the quantum mechanical experimental results always will conform to the mathematical prediction.
Quantum theory is rooted in statistics, and such reality conflicts often arise in statistics. For example, the math might show that a "statistically average" American family has 2.13 children, even though we know that a family of real human beings must have a whole number of children. In our experience, we would never find such a statistically average family regardless of the math, because there simply is no such thing as 13/100ths of a child. The math is entirely valid, but it must yield to the census-taker's whole-child count when we get down to examining individual families. In quantum mechanics, however, the math will prevail -- as though the statistics were drawn up in advance and all American families were created equally with exactly 2.13 children, never mind that we cannot begin to conceive of such a family. To the mathematician, these two situations are equivalent, because either way the average American family ends up with 2.13 children. But the quantum mechanical relationship of the math to the observation does not make any sense to us because in our world view, numbers are just symbols representing something with independent existence.
Mr. Herbert states that, "Quantum theory is a method of representing quantumstuff mathematically: a model of the world executed in symbols." Since quantum theory describes the world perfectly -- so perfectly that its symbolic, mathematical predictions always prevail over physical insight -- the equivalence between quantum symbolism and universal reality must be more than an oddity: it must be the very nature of reality.
This is the point at which we lose our nerve; yet the task for the Western rationalist is to find a mechanical model from our experience corresponding to a "world executed in symbols."
The final computer analogy. An example which literally fits this description is the computer simulation, which is a graphic representation created by executing programming code. The programming code itself consists of nothing but symbols, such as 0 and 1. Numbers, text, graphics and anything else you please are coded by unique series of numbers. These symbolic codes have no meaning in themselves, but arbitrarily are assigned values which have significance according to the operations of the computer. The symbols are manipulated according to the various step-by-step sequences (algorithms) by which the programming instructs the computer how to create the graphic representation. The picture presented on-screen to the user is a world executed in colored dots; the computer's programming is a world (the same world) executed in symbols. Anyone who has experienced a computer crash knows that the programming (good or bad) governs the picture, and not vice versa. All of this forms a remarkably tight analogy to the relationship between the quantum math on paper, and the behavior of the "quantumstuff" in the outside world.
Great Neck, New York
May 2, 1999
Notes
1. M. Kaku, Hyperspace, at 8n.
2. J. Gribbin, In Search of Schrödinger's Cat, 111.
3. J. Gleick, Genius, 122.
4. R. Penrose, The Emperor's New Mind, 25-26. See also D. Eck, The Most Complex
5. N. Herbert, Quantum Reality, 212-13.
6. "Entangled Trio to Put Nonlocality to the Test," Science 283, 1429 (Mar. 5, 1999).
7. N. Herbert at 41.
8. N. Herbert at 41.
Doppler effect
The Doppler effect (or Doppler shift), named after the Austrian physicist Christian Doppler, who proposed it in 1842 in Prague, is the change in frequency of a wave (or other periodic event) for an observer moving relative to its source. It is commonly heard when a vehicle sounding a siren or horn approaches, passes, and recedes from an observer. The received frequency is higher (compared to the emitted frequency) during the approach, it is identical at the instant of passing by, and it is lower during the recession.
The relative changes in frequency can be explained as follows. When the source of the waves is moving toward the observer, each successive wave crest is emitted from a position closer to the observer than the previous wave. Therefore each wave takes slightly less time to reach the observer than the previous wave. Therefore the time between the arrival of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are travelling, the distance between successive wave fronts is reduced; so the waves "bunch together". Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the frequency. The distance between successive wave fronts is increased, so the waves "spread out".
For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect may therefore result from motion of the source, motion of the observer, or motion of the medium. Each of these effects is analyzed separately. For waves which do not require a medium, such as light or gravity in general relativity, only the relative difference in velocity between the observer and the source needs to be considered.
Doppler first proposed the effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens). The hypothesis was tested for sound waves by Buys Ballot in 1845. He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848 (in France, the effect is sometimes called "effet Doppler-Fizeau" but that name was not adopted by the rest of the world as Fizeau's discovery was six years after Doppler's proposal). In Britain, John Scott Russell made an experimental study of the Doppler effect (1848).
In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the velocity of waves in the medium, the relationship between observed frequency f and emitted frequency f0 is given by:

    f = ((c + vr) / (c + vs)) · f0

where
- c is the velocity of waves in the medium;
- vr is the velocity of the receiver relative to the medium; positive if the receiver is moving towards the source (and negative in the other direction);
- vs is the velocity of the source relative to the medium; positive if the source is moving away from the receiver (and negative in the other direction).
The frequency is decreased if either is moving away from the other.
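As a quick numerical check of the general formula, here is a minimal Python sketch. The function name and the example numbers (a 440 Hz source approaching at 30 m/s in air at 343 m/s) are chosen purely for illustration.

```python
def observed_frequency(f0, c, v_receiver=0.0, v_source=0.0):
    """Classical Doppler formula: f = f0 * (c + v_r) / (c + v_s).
    v_receiver is positive when the receiver moves towards the source;
    v_source is positive when the source moves away from the receiver."""
    return f0 * (c + v_receiver) / (c + v_source)

# A 440 Hz source approaching a stationary observer at 30 m/s
# (v_source = -30 m/s under the sign convention above), with c = 343 m/s:
print(observed_frequency(440.0, 343.0, v_source=-30.0))   # ~482 Hz, higher than emitted
```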
The above formula assumes that the source is either directly approaching or receding from the observer. If the source approaches the observer at an angle (but still with a constant velocity), the observed frequency that is first heard is higher than the object's emitted frequency. Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is closest to the observer, and a continued monotonic decrease as it recedes from the observer. When the observer is very close to the path of the object, the transition from high to low frequency is very abrupt. When the observer is far from the path of the object, the transition from high to low frequency is gradual.
If the speeds vs and vr are small compared to the speed of the wave, the relationship between observed frequency f and emitted frequency f0 is approximately:

    Observed frequency:   f = (1 + Δv/c) · f0
    Change in frequency:  Δf = (Δv/c) · f0

where
- Δv is the velocity of the receiver relative to the source: it is positive when the source and the receiver are moving towards each other.
The frequency of the sounds that the source emits does not actually change. To understand what happens, consider the following analogy. Someone throws one ball every second in a man's direction. Assume that balls travel with constant velocity. If the thrower is stationary, the man will receive one ball every second. However, if the thrower is moving towards the man, he will receive balls more frequently because the balls will be less spaced out. The inverse is true if the thrower is moving away from the man. So it is actually the wavelength which is affected; as a consequence, the received frequency is also affected. It may also be said that the velocity of the wave remains constant whereas wavelength changes; hence frequency also changes.
If a moving source is emitting waves with an actual frequency f0, then an observer stationary relative to the medium detects waves with a frequency f given by

    f = (c / (c + vs)) · f0
A similar analysis for a moving observer and a stationary source yields the observed frequency:

    f = ((c + vr) / c) · f0
These can be generalized into the equation that was presented in the previous section.
An interesting effect was predicted by Lord Rayleigh in his classic book on sound: if the source is moving at twice the speed of sound, a musical piece emitted by that source would be heard in correct time and tune, but backwards.
The siren on a passing emergency vehicle will start out higher than its stationary pitch, slide down as it passes, and continue lower than its stationary pitch as it recedes from the observer. Astronomer John Dobson explained the effect thus:
- "The reason the siren slides is because it doesn't hit you."
In other words, if the siren approached the observer directly, the pitch would remain constant until the vehicle hit him, and then immediately jump to a new lower pitch. Because the vehicle passes by the observer, the radial velocity does not remain constant, but instead varies as a function of the angle between his line of sight and the siren's velocity:

    vradial = vs · cos(θ)

where θ is the angle between the object's forward velocity and the line of sight from the object to the observer.
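The following short Python sketch shows how the received pitch slides as the vehicle passes, using the radial-velocity expression above. The geometry (a straight road with the observer standing 10 m from it) and all the numbers are assumptions made up for this example.

```python
import math

def siren_pitch(f0, c, v, x, d):
    """Observed frequency of a siren on a vehicle moving at speed v along a
    straight road. x is the along-road distance still to go to the point of
    closest approach (negative once the vehicle has passed); d is the
    observer's perpendicular distance from the road. Only the radial
    component v*cos(theta) of the velocity matters."""
    r = math.hypot(x, d)
    cos_theta = x / r                  # ~+1 far before passing, 0 abreast, ~-1 after
    v_source_away = -v * cos_theta     # negative while approaching, positive after
    return f0 * c / (c + v_source_away)

f0, c, v, d = 700.0, 343.0, 25.0, 10.0
for x in (100.0, 30.0, 0.0, -30.0, -100.0):
    print(x, round(siren_pitch(f0, c, v, x, d), 1))
# The pitch slides smoothly from ~755 Hz (approaching) through 700 Hz (abreast) to ~653 Hz (receding).
```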
The Doppler effect for electromagnetic waves such as light is of great use in astronomy and results in either a so-called redshift or blueshift. It has been used to measure the speed at which stars and galaxies are approaching or receding from us, that is, the radial velocity. This is used to detect if an apparently single star is, in reality, a close binary and even to measure the rotational speed of stars and galaxies.
The use of the Doppler effect for light in astronomy depends on our knowledge that the spectra of stars are not continuous. They exhibit absorption lines at well defined frequencies that are correlated with the energies required to excite electrons in various elements from one level to another. The Doppler effect is recognizable in the fact that the absorption lines are not always at the frequencies that are obtained from the spectrum of a stationary light source. Since blue light has a higher frequency than red light, the spectral lines of an approaching astronomical light source exhibit a blueshift and those of a receding astronomical light source exhibit a redshift.
Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s (BD-15°4041, also known as LHS 52, 81.7 light-years away) and -260 km/s (Woolley 9722, also known as Wolf 1106 and LHS 64, 78.2 light-years away). Positive radial velocity means the star is receding from the Sun, negative that it is approaching.
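As a rough illustration of how a measured line shift translates into a radial velocity, here is a small Python sketch using the non-relativistic approximation (adequate at the speeds quoted above). The observed wavelength below is not a real measurement; it is constructed so that the result comes out near the +308 km/s figure mentioned in the text.

```python
C = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_observed, lambda_rest):
    """Non-relativistic estimate: v = c * (lambda_obs - lambda_rest) / lambda_rest.
    Positive (redshift) means the source is receding, negative (blueshift)
    means it is approaching."""
    return C * (lambda_observed - lambda_rest) / lambda_rest

# Hydrogen-alpha line: rest wavelength 656.281 nm, hypothetically observed at 656.955 nm
print(round(radial_velocity(656.955, 656.281), 1), "km/s")   # ~ +308 km/s, i.e. receding
```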
Temperature measurement
Another use of the Doppler effect, which is found mostly in plasma physics and astronomy, is the estimation of the temperature of a gas (or ion temperature in a plasma) which is emitting a spectral line. Due to the thermal motion of the emitters, the light emitted by each particle can be slightly red- or blue-shifted, and the net effect is a broadening of the line. This line shape is called a Doppler profile and the width of the line is proportional to the square root of the temperature of the emitting species, allowing a spectral line (with the width dominated by the Doppler broadening) to be used to infer the temperature.
The Doppler effect is used in some types of radar, to measure the velocity of detected objects. A radar beam is fired at a moving target — e.g. a motor car, as police use radar to detect speeding motorists — as it approaches or recedes from the radar source. Each successive radar wave has to travel farther to reach the car, before being reflected and re-detected near the source. As each wave has to move farther, the gap between each wave increases, increasing the wavelength. In some situations, the radar beam is fired at the moving car as it approaches, in which case each successive wave travels a lesser distance, decreasing the wavelength. In either situation, calculations from the Doppler effect accurately determine the car's velocity. Moreover, the proximity fuze, developed during World War II, relies upon Doppler radar to detonate explosives at the correct time, height, distance, etc.
Medical imaging and blood flow measurement
An echocardiogram can, within certain limits, produce accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, any abnormal communications between the left and right side of the heart, any leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output. Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements.
Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift (when the received signal arrives).
Velocity measurements of blood flow are also used in other fields of medical ultrasonography, such as obstetric ultrasonography and neurology. Velocity measurement of blood flow in arteries and veins based on Doppler effect is an effective tool for diagnosis of vascular problems like stenosis.
Flow measurement
Instruments such as the laser Doppler velocimeter (LDV), and acoustic Doppler velocimeter (ADV) have been developed to measure velocities in a fluid flow. The LDV emits a light beam and the ADV emits an ultrasonic acoustic burst, and measure the Doppler shift in wavelengths of reflections from particles moving with the flow. The actual flow is computed as a function of the water velocity and phase. This technique allows non-intrusive flow measurements, at high precision and high frequency.
Velocity profile measurement
Developed originally for velocity measurements in medical applications (blood flow), Ultrasonic Doppler Velocimetry (UDV) can measure in real time complete velocity profile in almost any liquids containing particles in suspension such as dust, gas bubbles, emulsions. Flows can be pulsating, oscillating, laminar or turbulent, stationary or transient. This technique is fully non-invasive.
Satellite communication
Fast-moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, and thus the magnitude of the Doppler effect, changes due to the curvature of the Earth. Dynamic Doppler compensation, where the frequency of a signal is changed multiple times during transmission, is used so that the satellite receives a constant-frequency signal.
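A back-of-the-envelope sketch of the shift seen from a fast-moving satellite follows; the carrier frequency (an S-band downlink) and the radial speed are assumptions chosen for illustration only.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(carrier_hz, radial_speed_ms):
    """First-order Doppler shift; positive radial speed means the satellite
    is approaching the ground station."""
    return carrier_hz * radial_speed_ms / C

# A 2.2 GHz downlink with a radial speed of 7 km/s towards the station:
print(round(doppler_shift(2.2e9, 7000.0)))   # ~ 51368 Hz, i.e. tens of kilohertz
```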
Underwater acoustics
In military applications the Doppler shift of a target is used to ascertain the speed of a submarine using both passive and active sonar systems. As a submarine passes by a passive sonobuoy, the stable frequencies undergo a Doppler shift, and the speed and range from the sonobuoy can be calculated. If the sonar system is mounted on a moving ship or another submarine, then the relative velocity can be calculated.
The Leslie speaker, associated with and predominantly used with the Hammond B-3 organ, takes advantage of the Doppler effect by using an electric motor to rotate an acoustic horn around a loudspeaker, sending its sound in a circle. At the listener's ear, this results in rapidly fluctuating frequencies of a keyboard note.
Vibration measurement
A laser Doppler vibrometer (LDV) is a non-contact method for measuring vibration. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.
See also
- Relativistic Doppler effect
- Fizeau experiment
- Inverse Doppler effect
- Photoacoustic Doppler effect
- Differential Doppler effect
- Rayleigh fading
- Doppler, C. J. (1842). Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels (About the coloured light of the binary stars and some other stars of the heavens). Publisher: Abhandlungen der Königl. Böhm. Gesellschaft der Wissenschaften (V. Folge, Bd. 2, S. 465-482) [Proceedings of the Royal Bohemian Society of Sciences (Part V, Vol 2)]; Prague: 1842 (Reissued 1903). Some sources mention 1843 as year of publication because in that year the article was published in the Proceedings of the Bohemian Society of Sciences. Doppler himself referred to the publication as "Prag 1842 bei Borrosch und André", because in 1842 he had a preliminary edition printed that he distributed independently.
- Alec Eden The search for Christian Doppler,Springer-Verlag, Wien 1992. Contains a facsimile edition with an English translation.
- Buys Ballot (1845). "Akustische Versuche auf der Niederländischen Eisenbahn, nebst gelegentlichen Bemerkungen zur Theorie des Hrn. Prof. Doppler (in German)". Annalen der Physik und Chemie 11: 321–351.
- Fizeau: "Acoustique et optique". Lecture, Société Philomathique de Paris, 29 December 1848. According to Becker(pg. 109), this was never published, but recounted by M. Moigno(1850): "Répertoire d'optique moderne" (in French), vol 3. pp 1165-1203 and later in full by Fizeau, "Des effets du mouvement sur le ton des vibrations sonores et sur la longeur d'onde des rayons de lumière"; [Paris, 1870]. Annales de Chimie et de Physique, 19, 211-221.
- Scott Russell, John (1848). "On certain effects produced on sound by the rapid motion of the observer". Report of the Eighteenth Meeting of the British Association for the Advancement of Science (John Murray, London in 1849) 18 (7): 37–38. Retrieved 2008-07-08.
- Rosen, Joe; Gothard, Lisa Quinn (2009). Encyclopedia of Physical Science. Infobase Publishing. p. 155. ISBN 0-8160-7011-3., Extract of page 155
- Strutt (Lord Rayleigh), John William (1896). In MacMillan & Co. The Theory of Sound 2 (2 ed.). p. 154.
- Evans, D. H.; McDicken, W. N. (2000). Doppler Ultrasound (Second ed.). New York: John Wiley and Sons. ISBN 0-471-97001-8.
- Qingchong, Liu (1999), "Doppler measurement and compensation in mobile satellite communications systems", Military Communications Conference Proceedings / MILCOM 1: 316–320
Further reading
- "Doppler and the Doppler effect", E. N. da C. Andrade, Endeavour Vol. XVIII No. 69, January 1959 (published by ICI London). Historical account of Doppler's original paper and subsequent developments.
- Adrian, Eleni (24 June 1995). "Doppler Effect". NCSA. Retrieved 2008-07-13.
- Doppler Effect, ScienceWorld
- Java simulation of Doppler effect
- Doppler Shift for Sound and Light at MathPages
- The Doppler Effect and Sonic Booms (D.A. Russell, Kettering University)
- Video Mashup with Doppler Effect videos
- Wave Propagation from John de Pillis. An animation showing that the speed of a moving wave source does not affect the speed of the wave.
- EM Wave Animation from John de Pillis. How an electromagnetic wave propagates through a vacuum
- Doppler Shift Demo - Interactive flash simulation for demonstrating Doppler shift.
- Interactive applets at Physics 2000 | http://en.wikipedia.org/wiki/Doppler_shift | 13 |
The CORDIC Algorithm
What it is
CORDIC (for COordinate Rotation DIgital Computer) is a simple and efficient algorithm to calculate trigonometric functions. The only operations it requires are
- Additions, subtractions, and multiplications or divisions by two (bit shifts), and
- Table lookup (a table with 64 numbers in it is enough for all the cosines and sines that a handheld calculator can calculate).
Because computers use binary arithmetic internally, multiplications and divisions by two are quick and easy to do. Consequently the CORDIC algorithm allows trigonometric functions to be calculated efficiently with a relatively simple CPU.
CORDIC is particularly well-suited for handheld calculators, an application in which cost (e.g., the chip gate count, which has to be minimized) is much more important than speed. Also, the CORDIC subroutines for trigonometric and hyperbolic functions (described in Trigonometry Book 2) can share most of their code.
Mode of operation
CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in rotation mode to calculate the sine and cosine of an angle, and assumes the desired angle β is given in radians and represented in a fixed point format. To determine the sine or cosine of β, the y or x coordinate of a point on the unit circle corresponding to the desired angle must be found. Using CORDIC, we start with the vector v0 = (1, 0).

In the first iteration, this vector is rotated 45° counterclockwise to get the vector v1. Successive iterations rotate the vector in one or the other direction by steps of decreasing size, until the desired angle has been reached. The size of step i is arctan(1/2^(i−1)) for i = 1, 2, 3, ….

More formally, every iteration calculates a rotation, which is performed by multiplying the vector vi by the rotation matrix Ri (iterations are indexed from i = 0 below, so that γ0 = arctan(1) = 45°):

    vi+1 = Ri vi

The rotation matrix Ri is given by:

    Ri = [ cos(γi)   −sin(γi) ]
         [ sin(γi)    cos(γi) ]

Using the following two trigonometric identities

    cos(γ) = 1 / √(1 + tan²(γ))    and    sin(γ) = tan(γ) / √(1 + tan²(γ)),

the rotation matrix becomes:

    Ri = (1 / √(1 + tan²(γi))) [ 1          −tan(γi) ]
                               [ tan(γi)     1       ]

The expression for the rotated vector then becomes:

    xi+1 = (xi − yi·tan(γi)) / √(1 + tan²(γi))
    yi+1 = (yi + xi·tan(γi)) / √(1 + tan²(γi))

where xi and yi are the components of vi. Restricting the angles γi so that tan(γi) takes only the values ±2^(−i), the multiplication by the tangent can be replaced by a division by a power of two, which is efficiently done in digital computer hardware using a bit shift. The expression then becomes:

    xi+1 = Ki·(xi − σi·2^(−i)·yi)
    yi+1 = Ki·(yi + σi·2^(−i)·xi)

where Ki = 1 / √(1 + 2^(−2i)), and σi can have the value −1 or +1; it is used to determine the direction of the rotation: if the remaining angle is positive then σi is +1, otherwise it is −1.

We can ignore Ki in the iterative process and then apply it afterward as a single scaling factor:

    K(n) = ∏ (i = 0 … n−1) Ki = ∏ (i = 0 … n−1) 1 / √(1 + 2^(−2i)),

which is calculated in advance and stored in a table, or as a single constant if the number of iterations is fixed. This correction could also be made in advance, by scaling v0 and hence saving a multiplication. Additionally it can be noted that

    K = lim (n → ∞) K(n) ≈ 0.6072529350088812561694,

which allows further reduction of the algorithm's complexity. After a sufficient number of iterations, the vector's angle will be close to the wanted angle β. For most ordinary purposes, 40 iterations (n = 40) are sufficient to obtain the result correct to the 10th decimal place.

The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (that is, choosing the value of σi). This is done by keeping track of how much we have rotated so far and subtracting that from the wanted angle; the remaining angle is updated as

    βi+1 = βi − σi·γi,    with β0 = β and γi = arctan(2^(−i)).

If the remaining angle βi is positive we rotate counterclockwise (σi = +1); if it is negative we rotate clockwise (σi = −1), so as to get closer to the wanted angle β.

The values of γi = arctan(2^(−i)) must also be precomputed and stored. For small angles arctan(2^(−i)) ≈ 2^(−i) in a fixed point representation, which further reduces the table size.

As can be seen from the construction above, the sine of the angle is the y coordinate of the final vector vn, while the x coordinate is the cosine value.
- See CORDIC on Wikipedia for more information, including a software implementation.
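As a rough illustration of the rotation-mode iteration described above, here is a minimal sketch in Python. It is not taken from the text: the function name, the choice of 40 iterations, and the use of floating point (where real fixed-point hardware would replace the multiplications by 2**-i with bit shifts) are assumptions made for the example.

```python
import math

def cordic_sin_cos(beta, n=40):
    """Approximate (cos(beta), sin(beta)) by rotation-mode CORDIC.
    Converges for |beta| up to about 1.74 rad (the sum of all step angles)."""
    # Precomputed table of rotation angles gamma_i = arctan(2^-i)
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # Deferred scaling factor K(n) = product of 1/sqrt(1 + 2^-2i)
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y = 1.0, 0.0                            # start with v0 = (1, 0)
    for i in range(n):
        sigma = 1.0 if beta >= 0.0 else -1.0   # rotate towards the remaining angle
        t = sigma * 2.0 ** -i                  # tan(gamma_i) = +/- 2^-i (a shift in hardware)
        x, y = x - y * t, y + x * t            # un-normalized rotation
        beta -= sigma * angles[i]              # angle still left to rotate

    return x * K, y * K                        # apply the scaling factor once at the end

print(cordic_sin_cos(0.5))          # ~ (0.8775825619, 0.4794255386)
print(math.cos(0.5), math.sin(0.5)) # reference values for comparison
```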
The primary use of the CORDIC algorithms in a hardware implementation is to avoid time-consuming complex multipliers. The computation of phase for a complex number can be easily implemented in a hardware description language using only adder and shifter circuits bypassing the bulky complex number multipliers. Fabrication techniques have steadily improved, and complex numbers can now be handled directly without too high a cost in time, power consumption, or excessive die space, so the use of CORDIC techniques is not as critical in many applications as they once were.
CORDIC is part of the class of "shift-and-add" algorithms, as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm, which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle (in radians) by computing the exponential of which is The BKM algorithm is slightly more complex than CORDIC, but has the advantage that it does not need a scaling factor (K).
The modern CORDIC algorithm was first described in 1959 by Jack E. Volder. It was developed at the aeroelectronics department of Convair to replace the analog resolver in the B-58 bomber's navigation computer.
Although CORDIC is similar to mathematical techniques published by Henry Briggs as early as 1624, it is optimized for low complexity finite state CPUs.
John Stephen Walther at Hewlett-Packard further generalized the algorithm, allowing it to calculate hyperbolic and exponential functions, logarithms, multiplications, divisions, and square roots.
Originally, CORDIC was implemented using the binary numeral system. In the 1970s, decimal CORDIC became widely used in pocket calculators, most of which operate in binary-coded-decimal (BCD) rather than binary.
CORDIC is generally faster than other approaches when a hardware multiplier is unavailable (e.g., in a microcontroller based system), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA).
On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, CORDIC algorithm is used extensively for various biomedical applications, especially in FPGA implementations.
Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating point libraries. As most modern general purpose CPUs have floating point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log10, and natural log, the need to implement CORDIC in software on them is nearly non-existent. Only microcontrollers or special safety- and time-constrained software applications would need to consider using CORDIC.
Volder was inspired by the following formula in the 1946 edition of the CRC Handbook of Chemistry and Physics:

    Kn·R·sin(θ ± φ) = R·sin(θ) ± 2^(−n)·R·cos(θ)
    Kn·R·cos(θ ± φ) = R·cos(θ) ∓ 2^(−n)·R·sin(θ)

with Kn = √(1 + 2^(−2n)).
Some of the prominent early applications of CORDIC were in the Convair navigation computers CORDIC I to CORDIC III, the Hewlett-Packard HP-9100 and HP-35 calculators, the Intel 80x87 coprocessor series until Intel 80486, and Motorola 68881.
Decimal CORDIC was first suggested by Hermann Schmid and Anthony Bogacki.
- J.-M. Muller, Elementary Functions: Algorithms and Implementation, 2nd Edition (Birkhäuser, Boston, 2006), p. 134.
- J. E. Volder, "The Birth of CORDIC", J. VLSI Signal Processing 25, 101 (2000).
- J. S. Walther, "The Story of Unified CORDIC", J. VLSI Signal Processing 25, 107 (2000).
- D. Cochran, "Algorithms and Accuracy in the HP 35", Hewlett Packard J. 23, 10 (1972).
- R. Nave, "Implementation of Transcendental Functions on a Numerics Processor", Microprocessing and Microprogramming 11, 221 (1983).
- H. Schmid and A. Bogacki, "Use Decimal CORDIC for Generation of Many Transcendental Functions", EDN Magazine, February 20, 1973, p. 64.
- Jack E. Volder, The CORDIC Trigonometric Computing Technique, IRE Transactions on Electronic Computers, pp330-334, September 1959
- Daggett, D. H., Decimal-Binary conversions in CORDIC, IRE Transactions on Electronic Computers, Vol. EC-8 #5, pp335-339, IRE, September 1959
- John S. Walther, A Unified Algorithm for Elementary Functions, Proc. of Spring Joint Computer Conference, pp379-385, May 1971
- J. E. Meggitt, Pseudo Division and Pseudo Multiplication Processes, IBM Journal, April 1962
- Vladimir Baykov, Problems of Elementary Functions Evaluation Based on Digit by Digit (CORDIC) Technique, PhD thesis, Leningrad State Univ. of Electrical Eng., 1972
- Schmid, Hermann, Decimal computation. New York, Wiley, 1974
- V.D.Baykov, V.B.Smolov, Hardware implementation of elementary functions in computers, Leningrad State University, 1975, 96p.
- Senzig, Don, Calculator Algorithms, IEEE Compcon Reader Digest, IEEE Catalog No. 75 CH 0920-9C, pp139-141, IEEE, 1975.
- V.D.Baykov,S.A.Seljutin, Elementary functions evaluation in microcalculators, Moscow, Radio & svjaz,1982,64p.
- Vladimir D.Baykov, Vladimir B.Smolov, Special-purpose processors: iterative algorithms and structures, Moscow, Radio & svjaz, 1985, 288 pages
- M. E. Frerking, Digital Signal Processing in Communication Systems, 1994
- Vitit Kantabutra, On hardware for computing exponential and trigonometric functions, IEEE Trans. Computers 45 (3), 328-339 (1996)
- Andraka, Ray, A survey of CORDIC algorithms for FPGA based computers
- Henry Briggs, Arithmetica Logarithmica. London, 1624, folio
- CORDIC Bibliography Site
- The secret of the algorithms, Jacques Laporte, Paris 1981
- Digit by digit methods, Jacques Laporte, Paris 2006
- Ayan Banerjee, FPGA realization of a CORDIC based FFT processor for biomedical signal processing, Kharagpur, 2001
- This page was originally created from The CORDIC Algorithm at Wikipedia. | http://en.m.wikibooks.org/wiki/Trigonometry/For_Enthusiasts/The_CORDIC_Algorithm | 13 |
71 | In physics and fluid mechanics, a boundary layer is the layer of fluid in the immediate vicinity of a bounding surface where the effects of viscosity are significant. In the Earth's atmosphere, the planetary boundary layer is the air layer near the ground affected by diurnal heat, moisture or momentum transfer to or from the surface. On an aircraft wing the boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. See Reynolds number.
Laminar boundary layers can be loosely classified according to their structure and the circumstances under which they are created. The thin shear layer which develops on an oscillating body is an example of a Stokes boundary layer, while the Blasius boundary layer refers to the well-known similarity solution near an attached flat plate held in an oncoming unidirectional flow. When a fluid rotates and viscous forces are balanced by the Coriolis effect (rather than convective inertia), an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously.
The aerodynamic boundary layer was first defined by Ludwig Prandtl in a paper presented on August 12, 1904 at the third International Congress of Mathematicians in Heidelberg, Germany. It simplifies the equations of fluid flow by dividing the flow field into two areas: one inside the boundary layer, dominated by viscosity and creating the majority of drag experienced by the boundary body; and one outside the boundary layer, where viscosity can be neglected without significant effects on the solution. This allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations. The majority of the heat transfer to and from a body also takes place within the boundary layer, again allowing the equations to be simplified in the flow field outside the boundary layer. The pressure distribution throughout the boundary layer in the direction normal to the surface (such as an airfoil) remains constant throughout the boundary layer, and is the same as on the surface itself.
The thickness of the velocity boundary layer is normally defined as the distance from the solid body at which the viscous flow velocity is 99% of the freestream velocity (the surface velocity of an inviscid flow). An alternative definition, the displacement thickness, recognizes that the boundary layer represents a deficit in mass flow compared to inviscid flow with slip at the wall. It is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of a solid object be zero and the fluid temperature be equal to the temperature of the surface. The flow velocity will then increase rapidly within the boundary layer, governed by the boundary layer equations, below.
The thermal boundary layer thickness is similarly the distance from the body at which the temperature is 99% of the temperature found from an inviscid solution. The ratio of the two thicknesses is governed by the Prandtl number. If the Prandtl number is 1, the two boundary layers are the same thickness. If the Prandtl number is greater than 1, the thermal boundary layer is thinner than the velocity boundary layer. If the Prandtl number is less than 1, which is the case for air at standard conditions, the thermal boundary layer is thicker than the velocity boundary layer.
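As a quick illustration of how the Prandtl number decides which layer is thicker, the sketch below computes Pr = μc_p/k for two common fluids; the property values are rough room-temperature numbers assumed for illustration, not values taken from this article.

```python
# Which boundary layer is thicker? Compare Pr = mu * c_p / k with 1.
# Property values are rough room-temperature figures (assumed for illustration).
def prandtl(mu, c_p, k):
    return mu * c_p / k

fluids = {
    "air":   dict(mu=1.8e-5, c_p=1005.0, k=0.026),   # Pr ~ 0.7 -> thermal layer thicker
    "water": dict(mu=1.0e-3, c_p=4182.0, k=0.60),    # Pr ~ 7   -> thermal layer thinner
}
for name, props in fluids.items():
    pr = prandtl(**props)
    if pr < 1:
        verdict = "thermal boundary layer is thicker than the velocity layer"
    elif pr > 1:
        verdict = "thermal boundary layer is thinner than the velocity layer"
    else:
        verdict = "the two boundary layers have the same thickness"
    print(f"{name}: Pr = {pr:.2f} -> {verdict}")
```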
In high-performance designs, such as gliders and commercial aircraft, much attention is paid to controlling the behavior of the boundary layer to minimize drag. Two effects have to be considered. First, the boundary layer adds to the effective thickness of the body, through the displacement thickness, hence increasing the pressure drag. Secondly, the shear forces at the surface of the wing create skin friction drag.
At high Reynolds numbers, typical of full-sized aircraft, it is desirable to have a laminar boundary layer. This results in a lower skin friction due to the characteristic velocity profile of laminar flow. However, the boundary layer inevitably thickens and becomes less stable as the flow develops along the body, and eventually becomes turbulent, the process known as boundary layer transition. One way of dealing with this problem is to suck the boundary layer away through a porous surface (see Boundary layer suction). This can reduce drag, but is usually impractical due to its mechanical complexity and the power required to move the air and dispose of it. Natural laminar flow techniques push the boundary layer transition aft by reshaping the aerofoil or fuselage so that its thickest point is more aft and less thick. This reduces the velocities in the leading part and the same Reynolds number is achieved with a greater length.
At lower Reynolds numbers, such as those seen with model aircraft, it is relatively easy to maintain laminar flow. This gives low skin friction, which is desirable. However, the same velocity profile which gives the laminar boundary layer its low skin friction also causes it to be badly affected by adverse pressure gradients. As the pressure begins to recover over the rear part of the wing chord, a laminar boundary layer will tend to separate from the surface. Such flow separation causes a large increase in the pressure drag, since it greatly increases the effective size of the wing section. In these cases, it can be advantageous to deliberately trip the boundary layer into turbulence at a point prior to the location of laminar separation, using a turbulator. The fuller velocity profile of the turbulent boundary layer allows it to sustain the adverse pressure gradient without separating. Thus, although the skin friction is increased, overall drag is decreased. This is the principle behind the dimpling on golf balls, as well as vortex generators on aircraft. Special wing sections have also been designed which tailor the pressure recovery so laminar separation is reduced or even eliminated. This represents an optimum compromise between the pressure drag from flow separation and skin friction from induced turbulence.
When using half-models in wind tunnels, a peniche is sometimes used to reduce or eliminate the effect of the boundary layer.
Many of the principles that apply to aircraft also apply to ships, submarines, and offshore platforms.
For ships, unlike aircraft, one deals with incompressible flows, where the change in water density is negligible (a pressure rise of close to 1,000 kPa leads to a change of only 2–3 kg/m³). This field of fluid dynamics is called hydrodynamics. A ship engineer designs for hydrodynamics first, and for strength only later. The boundary layer development, breakdown, and separation become critical because the high viscosity of water produces high shear stresses. Another consequence of high viscosity is the slip stream effect, in which the ship moves like a spear tearing through a sponge at high velocity.
Boundary layer equations
The deduction of the boundary layer equations was one of the most important advances in fluid dynamics (Anderson, 2005). Using an order of magnitude analysis, the well-known governing Navier–Stokes equations of viscous fluid flow can be greatly simplified within the boundary layer. Notably, the characteristic of the partial differential equations (PDE) becomes parabolic, rather than the elliptical form of the full Navier–Stokes equations. This greatly simplifies the solution of the equations. By making the boundary layer approximation, the flow is divided into an inviscid portion (which is easy to solve by a number of methods) and the boundary layer, which is governed by an easier to solve PDE. The continuity and Navier–Stokes equations for a two-dimensional steady incompressible flow in Cartesian coordinates are given by

∂u/∂x + ∂v/∂y = 0
u ∂u/∂x + v ∂u/∂y = −(1/ρ) ∂p/∂x + ν (∂²u/∂x² + ∂²u/∂y²)
u ∂v/∂x + v ∂v/∂y = −(1/ρ) ∂p/∂y + ν (∂²v/∂x² + ∂²v/∂y²)
where u and v are the velocity components, ρ is the density, p is the pressure, and ν is the kinematic viscosity of the fluid at a point.
The approximation states that, for a sufficiently high Reynolds number, the flow over a surface can be divided into an outer region of inviscid flow unaffected by viscosity (the majority of the flow), and a region close to the surface where viscosity is important (the boundary layer). Let u and v be the streamwise and transverse (wall normal) velocities respectively inside the boundary layer. Using scale analysis, it can be shown that the above equations of motion reduce within the boundary layer to become

u ∂u/∂x + v ∂u/∂y = −(1/ρ) ∂p/∂x + ν ∂²u/∂y²
∂p/∂y = 0
and if the fluid is incompressible (as liquids are under standard conditions):

∂u/∂x + ∂v/∂y = 0
The asymptotic analysis also shows that v, the wall normal velocity, is small compared with the streamwise velocity u, and that variations in properties in the streamwise direction are generally much lower than those in the wall normal direction.
Since the static pressure p is independent of y, the pressure at the edge of the boundary layer is the pressure throughout the boundary layer at a given streamwise position. The external pressure may be obtained through an application of Bernoulli's equation. Let U be the fluid velocity outside the boundary layer, where u and U are both parallel. This gives, upon substituting for p, the following result:

u ∂u/∂x + v ∂u/∂y = U dU/dx + ν ∂²u/∂y²
with the boundary condition
For a flow in which the static pressure p also does not change in the direction of the flow, ∂p/∂x = 0, and so the external velocity U remains constant.
Therefore, the equation of motion simplifies to become

u ∂u/∂x + v ∂u/∂y = ν ∂²u/∂y²
These approximations are used in a variety of practical flow problems of scientific and engineering interest. The above analysis is for any instantaneous laminar or turbulent boundary layer, but is used mainly in laminar flow studies since the mean flow is also the instantaneous flow because there are no velocity fluctuations present.
Turbulent boundary layers
The treatment of turbulent boundary layers is far more difficult due to the time-dependent variation of the flow properties. One of the most widely used techniques in which turbulent flows are tackled is to apply Reynolds decomposition. Here the instantaneous flow properties are decomposed into a mean and fluctuating component. Applying this technique to the boundary layer equations gives the full turbulent boundary layer equations not often given in literature:
Using the same order-of-magnitude analysis as for the instantaneous equations, these turbulent boundary layer equations generally reduce to become in their classical form:
The additional term in the turbulent boundary layer equations is known as the Reynolds shear stress and is unknown a priori. The solution of the turbulent boundary layer equations therefore necessitates the use of a turbulence model, which aims to express the Reynolds shear stress in terms of known flow variables or derivatives. The lack of accuracy and generality of such models is a major obstacle in the successful prediction of turbulent flow properties in modern fluid dynamics.
A laminar sub-layer exists within the turbulent zone; it is formed by the fluid immediately adjacent to the surface, where the shear stress is maximum and the fluid velocity falls to zero at the wall.
Heat and mass transfer
In 1928, the French engineer André Lévêque observed that convective heat transfer in a flowing fluid is affected only by the velocity values very close to the surface. For flows of large Prandtl number, the temperature/mass transition from surface to freestream temperature takes place across a very thin region close to the surface. Therefore, the most important fluid velocities are those inside this very thin region, in which the change in velocity can be considered linear with normal distance from the surface. In this way, for small distances y from the surface the velocity can be approximated by

u ≈ θ y,
where θ is the tangent of the Poiseuille parabola intersecting the wall. Although Lévêque's solution was specific to heat transfer into a Poiseuille flow, his insight helped lead other scientists to an exact solution of the thermal boundary-layer problem. Schuh observed that in a boundary-layer, u is again a linear function of y, but that in this case, the wall tangent is a function of x. He expressed this with a modified version of Lévêque's profile, u ≈ θ(x) y.
This results in a very good approximation, even for low Prandtl numbers, so that only liquid metals with Prandtl numbers much less than 1 cannot be treated this way. In 1962, Kestin and Persen published a paper describing solutions for heat transfer when the thermal boundary layer is contained entirely within the momentum layer and for various wall temperature distributions. For the problem of a flat plate with a temperature jump at a position x = x₀, they propose a substitution that reduces the parabolic thermal boundary-layer equation to an ordinary differential equation. The solution to this equation, the temperature at any point in the fluid, can be expressed as an incomplete gamma function. Schlichting proposed an equivalent substitution that reduces the thermal boundary-layer equation to an ordinary differential equation whose solution is the same incomplete gamma function.
Convective Transfer Constants from Boundary Layer Analysis
Here δ ≈ 5.0x/√Re_x is the thickness of the boundary layer: the region of flow where the velocity is less than 99% of the far field velocity v∞; x is the position along the semi-infinite plate, and Re_x is the Reynolds number given by ρv∞x/μ (ρ is the density and μ the dynamic viscosity).
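A minimal numerical sketch of this scaling is given below; the air-like fluid properties and the free-stream speed are assumptions chosen so that the flow stays laminar, not values taken from the text.

```python
# Laminar boundary-layer thickness along a flat plate: delta ≈ 5.0 * x / sqrt(Re_x),
# with Re_x = rho * v_inf * x / mu.  Property values are air-like assumptions.
import math

def boundary_layer_thickness(x, v_inf, rho=1.2, mu=1.8e-5):
    re_x = rho * v_inf * x / mu
    return 5.0 * x / math.sqrt(re_x)

for x in (0.1, 0.5, 1.0):   # metres from the leading edge
    d = boundary_layer_thickness(x, v_inf=5.0)
    print(f"x = {x:3.1f} m -> delta ≈ {d * 1000:.1f} mm")
```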
The Blasius solution uses boundary conditions in a dimensionless form: u/v∞ = 0 at y = 0, and u/v∞ → 1 as y → ∞ (for a plate at rest).
Note that in many cases, the no-slip boundary condition holds that u, the fluid velocity at the surface of the plate, equals the velocity of the plate at all locations. If the plate is not moving, then u = 0 there. A much more complicated derivation is required if fluid slip is allowed.
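For concreteness, the sketch below solves the Blasius similarity equation f''' + ½ f f'' = 0 with f(0) = 0, f'(0) = 0 and f'(∞) = 1 by a simple shooting method. It assumes SciPy is available; the cutoff η_max = 10 and the bracketing interval for f''(0) are illustrative choices. The recovered wall value f''(0) ≈ 0.332 is the coefficient that shows up in the shear-stress and transfer-coefficient results quoted below.

```python
# Shooting-method sketch for the Blasius similarity equation
#   f''' + 0.5 * f * f'' = 0,   f(0) = 0, f'(0) = 0, f'(inf) = 1
# eta_max stands in for infinity; the bracket [0.1, 1.0] is assumed to contain f''(0).
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def far_field_error(fpp0, eta_max=10.0):
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0          # want f'(eta_max) = 1

fpp_wall = brentq(far_field_error, 0.1, 1.0)
print(f"f''(0) ≈ {fpp_wall:.4f}")      # ≈ 0.3321, the 0.332 coefficient
```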
In fact, the Blasius solution for the laminar velocity profile in the boundary layer above a semi-infinite plate can be easily extended to describe thermal and concentration boundary layers for heat and mass transfer respectively. Rather than the differential x-momentum balance (equation of motion), this uses a similarly derived energy and mass balance:

u ∂T/∂x + v ∂T/∂y = α ∂²T/∂y²
u ∂c_A/∂x + v ∂c_A/∂y = D_AB ∂²c_A/∂y²
For the momentum balance, the kinematic viscosity ν can be considered to be the momentum diffusivity. In the energy balance this is replaced by the thermal diffusivity α = k/(ρc_P), and by the mass diffusivity D_AB in the mass balance. In the thermal diffusivity of a substance, k is its thermal conductivity, ρ is its density and c_P is its heat capacity. Subscript AB denotes diffusivity of species A diffusing into species B.
Under the assumption that α = D_AB = ν, these equations become equivalent to the momentum balance. Thus, for Prandtl number Pr = ν/α = 1 and Schmidt number Sc = ν/D_AB = 1 the Blasius solution applies directly.
Accordingly, this derivation uses a related form of the boundary conditions, replacing the velocity u with T or c_A (the absolute temperature or the concentration of species A). The subscript S denotes a surface condition.
Using the stream function, Blasius obtained the following solution for the shear stress at the surface of the plate:

τ_w = 0.332 ρ v∞² / √Re_x
And via the boundary conditions, it is known that the dimensionless velocity, temperature and concentration profiles coincide, so that the velocity, thermal and concentration boundary layers all have the same thickness when Pr = Sc = 1.
We are given the following relations for the heat/mass flux out of the surface of the plate:

q″ = −k (∂T/∂y) at y = 0
N_A = −D_AB (∂c_A/∂y) at y = 0
where δ_T and δ_c are the regions of flow where T and c_A are less than 99% of their far field values.
Because the Prandtl number of a particular fluid is not often unity, the German engineer E. Pohlhausen, who worked with Ludwig Prandtl, attempted to empirically extend these equations to apply for Pr ≠ 1. His results can be applied to the Schmidt number Sc as well. He found that for Prandtl numbers greater than 0.6, the thermal boundary layer thickness was approximately given by δ/δ_T ≈ Pr^(1/3), that is, δ_T ≈ δ·Pr^(−1/3).
From this solution, it is possible to characterize the convective heat/mass transfer constants based on the region of boundary layer flow. Fourier’s law of conduction and Newton’s Law of Cooling are combined with the flux term derived above and the boundary layer thickness.
This gives the local convective constant h_x = 0.332 (k/x) Re_x^(1/2) Pr^(1/3) at one point on the semi-infinite plane. Integrating over the length of the plate from 0 to L gives an average h_L = 0.664 (k/L) Re_L^(1/2) Pr^(1/3), i.e. twice the local value at x = L.
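A numerical sketch of these relations is given below; the air properties, free-stream speed, and plate length are illustrative assumptions rather than values from the text, and the correlations are used only in their laminar, Pr > 0.6 range.

```python
# Local and average convective coefficients on a flat plate in laminar flow, using
# Nu_x = 0.332 Re_x^0.5 Pr^(1/3) and h_avg = 2 * h_local(L).
import math

def local_h(x, v_inf, rho, mu, k, c_p):
    re_x = rho * v_inf * x / mu
    pr = mu * c_p / k
    nu_x = 0.332 * math.sqrt(re_x) * pr ** (1 / 3)
    return nu_x * k / x                  # W/(m^2 K)

def average_h(L, v_inf, rho, mu, k, c_p):
    # h_x ~ x^(-1/2) in laminar flow, so the average over [0, L] is twice the local value at L.
    return 2.0 * local_h(L, v_inf, rho, mu, k, c_p)

# Example: air-like properties at ~300 K, 5 m/s over a 0.5 m plate (assumed values)
props = dict(v_inf=5.0, rho=1.16, mu=1.85e-5, k=0.026, c_p=1007.0)
print(f"h_local(0.5 m) = {local_h(0.5, **props):.1f} W/m^2K")
print(f"h_avg(0-0.5 m) = {average_h(0.5, **props):.1f} W/m^2K")
```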
Following the derivation with mass transfer terms (k_c = convective mass transfer constant, D_AB = diffusivity of species A into species B, Sc = ν/D_AB), the following solutions are obtained: k_c,x = 0.332 (D_AB/x) Re_x^(1/2) Sc^(1/3) locally, with the average over a plate of length L equal to twice the local value at x = L.
These solutions apply for laminar flow with a Prandtl/Schmidt number greater than 0.6.
Boundary layer turbine
This effect was exploited in the Tesla turbine, patented by Nikola Tesla in 1913. It is referred to as a bladeless turbine because it uses the boundary layer effect and not a fluid impinging upon the blades as in a conventional turbine. Boundary layer turbines are also known as cohesion-type turbine, bladeless turbine, and Prandtl layer turbine (after Ludwig Prandtl).
See also
- Boundary layer separation
- Boundary-layer thickness
- Boundary layer suction
- Boundary layer control
- Coandă effect
- Facility for Airborne Atmospheric Measurements
- Logarithmic law of the wall
- Planetary boundary layer
- Shape factor (boundary layer flow)
- Shear stress
- Lévêque, A. (1928). "Les lois de la transmission de chaleur par convection". Annales des Mines ou Recueil de Mémoires sur l'Exploitation des Mines et sur les Sciences et les Arts qui s'y Rattachent, Mémoires (in French) XIII (13): 201–239.
- Niall McMahon. "André Lévêque p285, a review of his velocity profile approximation".
- Martin, H. (2002). "The generalized Lévêque equation and its practical use for the prediction of heat and mass transfer rates from pressure drop". Chemical Engineering Science 57 (16). pp. 3217–3223. doi:10.1016/S0009-2509(02)00194-X.
- Schuh, H. (1953). "On asymptotic solutions for the heat transfer at varying wall temperatures in a laminar boundary layer with Hartree's velocity profiles". Jour. Aero. Sci. 20 (2). pp. 146–147.
- Kestin, J. and Persen, L.N. (1962). "The transfer of heat across a turbulent boundary layer at very high prandtl numbers". Int. J. Heat Mass Transfer 5: 355–371.
- Schlichting, H. (1979). Boundary-Layer Theory (7 ed.). New York (USA): McGraw-Hill.
- Blasius, H. (1908). "Grenzschichten in Flüssigkeiten mit kleiner Reibung". Z. Math. Phys. 56: 1–37. (English translation)
- Martin, Michael J. Blasius boundary layer solution with slip flow conditions. AIP conference proceedings 585.1 2001: 518-523. American Institute of Physics. 24 Apr 2013.
- Geankoplis, Christie J. Transport Processes and Separation Process Principles: (includes Unit Operations). Fourth ed. Upper Saddle River, NJ: Prentice Hall Professional Technical Reference, 2003. Print.
- Pohlhausen, E. (1921), Der Wärmeaustausch zwischen festen Körpern und Flüssigkeiten mit kleiner reibung und kleiner Wärmeleitung. Z. angew. Math. Mech., 1: 115–121. doi: 10.1002/zamm.19210010205
- Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages. ISBN 978-0-415-49271-3.
- A.D. Polyanin and V.F. Zaitsev, Handbook of Nonlinear Partial Differential Equations, Chapman & Hall/CRC Press, Boca Raton – London, 2004. ISBN 1-58488-355-3
- A.D. Polyanin, A.M. Kutepov, A.V. Vyazmin, and D.A. Kazenin, Hydrodynamics, Mass and Heat Transfer in Chemical Engineering, Taylor & Francis, London, 2002. ISBN 0-415-27237-8
- Hermann Schlichting, Klaus Gersten, E. Krause, H. Jr. Oertel, C. Mayes "Boundary-Layer Theory" 8th edition Springer 2004 ISBN 3-540-66270-7
- John D. Anderson, Jr., "Ludwig Prandtl's Boundary Layer", Physics Today, December 2005
- Anderson, John (1992). Fundamentals of Aerodynamics (2nd edition ed.). Toronto: S.S.CHAND. pp. 711–714. ISBN 0-07-001679-8.
- H. Tennekes and J. L. Lumley, "A First Course in Turbulence", The MIT Press, (1972).
- National Science Digital Library – Boundary Layer
- Moore, Franklin K., "Displacement effect of a three-dimensional boundary layer". NACA Report 1124, 1953.
- Benson, Tom, "Boundary layer". NASA Glenn Learning Technologies.
- Boundary layer separation
- Boundary layer equations: Exact Solutions – from EqWorld
- Jones, T.V. BOUNDARY LAYER HEAT TRANSFER
The Pythagorean Theorem is one of the most remarkable theorems in all of mathematics. It has a treasure trove of ramifications up its sleeve, any one of which could provide you with invaluable help on the GMAT Quantitative section. For example, consider this practice problem.
1) Consider the following three triangles
I. a triangle with sides 6-9-10
II. a triangle with sides 8-14-17
III. a triangle with sides 5-12-14
Which of the following gives a complete set of the triangles that have at least one obtuse angle, that is, an angle greater than 90°?
- I & II
- II & III
The basic theorem
Everyone knows the basic formula: a² + b² = c².
To distinguish this formula from the theorem itself (something not often done), I will refer to that as the “Pythagorean formula.” Many folks don’t realize that the Theorem is something different from this single formula. Some people realize that, in order for this formula to work, the triangle must be a right triangle, and some people even remember that “c” has to be the hypotenuse of a right triangle. This leads to the basic statement of the theorem:
If a triangle is a right triangle with sides a < b < c, then the Pythagorean formula is true of the sides.
One important thing to appreciate about Mr. Pythagoras’ famous theorem is that it goes both ways: in logical parlance, it is “biconditional.” In other words
(A) If you know the triangle is a right triangle, that is, if you are given that fact, then you can conclude that the Pythagorean formula works for its sides.
(B) If you are given the three sides of a triangle, and you know (or can verify) that these three sides satisfy the Pythagorean formula, then that triangle absolutely must be a right triangle.
In other words, if you consider these two qualities (i) being a right triangle, and (ii) sides satisfying the Pythagorean formula, then, those two qualities always come together, and either one necessitates the other. To use highly dramatic language, God Himself could not create a triangle that has one of those qualities and not the other.
There are common three-number sets known as Pythagorean triplets: these sets, such as (3, 4, 5), are sets of numbers that satisfy the Pythagorean formula, which necessarily means they would also be the sides of a right triangle. The reader familiar with the common Pythagorean triplets discussed in that post will find the numbers in the above problem evocative of, but not equal to, these sets of triplets.
Most triangles in the world are not right triangles. The Pythagorean Theorem applies only to right triangles, but with a little reconfiguring, we can also use it to deduce facts about other triangles. We know that if there's an equals sign in the Pythagorean formula, it means the triangle is a right triangle. What if there's either a greater-than or a less-than sign instead of an equals sign?
Case one: Bigger big side or shorter legs
For this case, we will consider this inequality: c² > a² + b².
This would be true if we started with a right triangle, and then made the hypotenuse bigger while leaving the two legs the same size. That pushes the two legs apart.
This would also be true if we made either one of the legs smaller, while leaving the other two sides the same size. This pulls the vertex that previously had a right angle closer to the former hypotenuse, which has the effect of pushing the legs apart.
In either case, notice that the right angle becomes an obtuse angle. This allows us to state variation #1 on the Pythagorean Theorem: if c² > a² + b², then the angle opposite side c is obtuse, and the triangle is an obtuse triangle.
Notice: an obtuse triangle is a triangle in which one angle is obtuse. It is impossible for more than one angle to be obtuse, because then the angles in the triangle would add up to more than 180°.
Case two: Smaller big side or bigger legs
For this case, we will consider this inequality: c² < a² + b².
This would be true if we started with a right triangle, and then made the hypotenuse smaller while leaving the two legs the same size. That pulls the two legs toward each other.
This would also be true if we made either one of the legs bigger, while leaving the other two sides the same size. This pushes the vertex that previously had a right angle further from the former hypotenuse, which has the effect of pulling the legs toward each other.
In either case, notice that the right angle becomes an acute angle. This allows us to state variation #2 on the Pythagorean Theorem: if c² < a² + b², then the angle opposite side c is acute, and the triangle is an acute triangle.
Notice: an acute triangle is a triangle in which all three angles are acute. In any triangle, at least two of the angles must be acute. In an acute triangle, even the largest angle is acute.
We can combine all this information in one place. Suppose a triangle has three sides such that a < b < c. Then:
- if c² < a² + b², the triangle is acute;
- if c² = a² + b², the triangle is right;
- if c² > a² + b², the triangle is obtuse.
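As a quick illustration of this summary, here is a small helper that classifies a triangle from its three side lengths and checks the three triangles in the practice problem; the function name and the triangle-inequality guard are my own additions.

```python
# Classify a triangle as acute, right, or obtuse by comparing c^2 with a^2 + b^2.
def classify_triangle(x, y, z):
    a, b, c = sorted((x, y, z))          # ensure c is the longest side
    if a + b <= c:
        return "not a valid triangle"
    c2, ab2 = c * c, a * a + b * b
    if c2 > ab2:
        return "obtuse"
    if c2 < ab2:
        return "acute"
    return "right"

# The three triangles from the practice problem above
for sides in [(6, 9, 10), (8, 14, 17), (5, 12, 14)]:
    print(sides, "->", classify_triangle(*sides))
# (6, 9, 10) -> acute; (8, 14, 17) -> obtuse; (5, 12, 14) -> obtuse
```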
Remembering how to morph the Pythagorean Theorem could help you in a rare GMAT problem such as the one at the top. More importantly, perhaps this discussion gave you some insight into the interrelationship between math equations and spatial reasoning, and insights along those lines certainly could help you in challenging GMAT questions. Now that you have seen all of this, take another look at the practice problem before reading the solutions below.
Practice problem explanation
1) Remembering the common Pythagorean Triplets is very helpful in this problem. Triangle I is very close to the (6, 8, 10) right triangle, but we made one of the legs longer. This makes the sum of the squares of the legs greater than the longest side squared, so Triangle I is acute. Triangle II is very close to the (8, 15, 17) right triangle, but we made one of the legs shorter. This makes the sum of the squares of the legs less than the longest side squared, so Triangle II is obtuse. Triangle III is very close to the (5, 12, 13) right triangle, but we made the longest side bigger. This makes the sum of the squares of the legs less than the longest side squared, so Triangle III is obtuse. Answer = E
In astronomy the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. These color-magnitude plots are known as Hertzsprung–Russell diagrams after their co-developers, Ejnar Hertzsprung and Henry Norris Russell. Stars on this band are known as main-sequence stars or "dwarf" stars.
After a star has formed, it creates energy at the hot, dense core region through the nuclear fusion of hydrogen atoms into helium. During this stage of the star's lifetime, it is located along the main sequence at a position determined primarily by its mass, but also based upon its chemical composition and other factors. All main-sequence stars are in hydrostatic equilibrium, where outward thermal pressure from the hot core is balanced by the inward gravitational pressure from the overlying layers. The strong dependence of the rate of energy generation in the core on the temperature and pressure helps to sustain this balance. Energy generated at the core makes its way to the surface and is radiated away at the photosphere. The energy is carried by either radiation or convection, with the latter occurring in regions with steeper temperature gradients, higher opacity or both.
The main sequence is sometimes divided into upper and lower parts, based on the dominant process that a star uses to generate energy. Stars below about 1.5 times the mass of the Sun (or 1.5 solar masses) primarily fuse hydrogen atoms together in a series of stages to form helium, a sequence called the proton–proton chain. Above this mass, in the upper main sequence, the nuclear fusion process mainly uses atoms of carbon, nitrogen and oxygen as intermediaries in the CNO cycle that produces helium from hydrogen atoms. Main-sequence stars with more than two solar masses undergo convection in their core regions, which acts to stir up the newly created helium and maintain the proportion of fuel needed for fusion to occur. Below this mass, stars have cores that are entirely radiative with convective zones near the surface. With decreasing stellar mass, the proportion of the star forming a convective envelope steadily increases, while main-sequence stars below 0.4 solar masses undergo convection throughout their mass. When core convection does not occur, a helium-rich core develops surrounded by an outer layer of hydrogen.
In general, the more massive the star the shorter its lifespan on the main sequence. After the hydrogen fuel at the core has been consumed, the star evolves away from the main sequence on the HR diagram. The behavior of a star now depends on its mass, with stars below 0.23 solar masses becoming white dwarfs directly, while stars with up to ten solar masses pass through a red giant stage. More massive stars can explode as a supernova, or collapse directly into a black hole.
In the early part of the 20th century, information about the types and distances of stars became more readily available. The spectra of stars were shown to have distinctive features, which allowed them to be categorized. Annie Jump Cannon and Edward C. Pickering at Harvard College Observatory developed a method of categorization that became known as the Harvard Classification Scheme, published in the Harvard Annals in 1901.
In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung noticed that the reddest stars—classified as K and M in the Harvard scheme—could be divided into two distinct groups. These stars are either much brighter than the Sun, or much fainter. To distinguish these groups, he called them "giant" and "dwarf" stars. The following year he began studying star clusters; large groupings of stars that are co-located at approximately the same distance. He published the first plots of color versus luminosity for these stars. These plots showed a prominent and continuous sequence of stars, which he named the Main Sequence.
At Princeton University, Henry Norris Russell was following a similar course of research. He was studying the relationship between the spectral classification of stars and their actual brightness as corrected for distance—their absolute magnitude. For this purpose he used a set of stars that had reliable parallaxes and many of which had been categorized at Harvard. When he plotted the spectral types of these stars against their absolute magnitude, he found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be predicted with reasonable accuracy.
Of the red stars observed by Hertzsprung, the dwarf stars also followed the spectra-luminosity relationship discovered by Russell. However, the giant stars are much brighter than dwarfs and so, do not follow the same relationship. Russell proposed that the "giant stars must have low density or great surface-brightness, and the reverse is true of dwarf stars". The same curve also showed that there were very few faint white stars.
In 1933, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity-spectral class diagram. This name reflected the parallel development of this technique by both Hertzsprung and Russell earlier in the century.
As evolutionary models of stars were developed during the 1930s, it was shown that, for stars of a uniform chemical composition, a relationship exists between a star's mass and its luminosity and radius. That is, for a given mass and composition, there is a unique solution for determining the star's radius and luminosity. This became known as the Vogt-Russell theorem; named after Heinrich Vogt and Henry Norris Russell. By this theorem, once a star's chemical composition and its position on the main sequence is known, so too is the star's mass and radius. (However, it was subsequently discovered that the theorem breaks down somewhat for stars of non-uniform composition.)
A refined scheme for stellar classification was published in 1943 by W. W. Morgan and P. C. Keenan. The MK classification assigned each star a spectral type—based on the Harvard classification—and a luminosity class. The Harvard classification had been developed by assigning a different letter to each star based on the strength of the hydrogen spectra line, before the relationship between spectra and temperature was known. When ordered by temperature and when duplicate classes were removed, the spectral types of stars followed, in order of decreasing temperature with colors ranging from blue to red, the sequence O, B, A, F, G, K and M. (A popular mnemonic for memorizing this sequence of stellar classes is "Oh Be A Fine Girl/Guy, Kiss Me".) The luminosity class ranged from I to V, in order of decreasing luminosity. Stars of luminosity class V belonged to the main sequence.
When a protostar is formed from the collapse of a giant molecular cloud of gas and dust in the local interstellar medium, the initial composition is homogeneous throughout, consisting of about 70% hydrogen, 28% helium and trace amounts of other elements, by mass. The initial mass of the star depends on the local conditions within the cloud. (The mass distribution of newly formed stars is described empirically by the initial mass function.) During the initial collapse, this pre-main-sequence star generates energy through gravitational contraction. Upon reaching a suitable density, energy generation is begun at the core using an exothermic nuclear fusion process that converts hydrogen into helium.
Once nuclear fusion of hydrogen becomes the dominant energy production process and the excess energy gained from gravitational contraction has been lost, the star lies along a curve on the Hertzsprung–Russell diagram (or HR diagram) called the standard main sequence. Astronomers will sometimes refer to this stage as "zero age main sequence", or ZAMS. The ZAMS curve can be calculated using computer models of stellar properties at the point when stars begin hydrogen fusion. From this point, the brightness and surface temperature of stars typically increase with age.
A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core has been consumed, then begins to evolve into a more luminous star. (On the HR diagram, the evolving star moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime.
The majority of stars on a typical HR diagram lie along the main sequence curve. This line is pronounced because both the spectral type and the luminosity depend only on a star's mass, at least to zeroth order approximation, as long as it is fusing hydrogen at its core—and that is what almost all stars spend most of their "active" lives doing.
The temperature of a star determines its spectral type via its effect on the physical properties of plasma in its photosphere. A star's energy emission as a function of wavelength is influenced by both its temperature and composition. A key indicator of this energy distribution is given by the color index, B − V, which measures the star's magnitude in blue (B) and green-yellow (V) light by means of filters.[note 1] This difference in magnitude provides a measure of a star's temperature.
Dwarf terminology
Main-sequence stars are called dwarf stars, but this terminology is partly historical and can be somewhat confusing. For the cooler stars, dwarfs such as red dwarfs, orange dwarfs, and yellow dwarfs are indeed much smaller and dimmer than other stars of those colors. However, for hotter blue and white stars, the size and brightness difference between so-called dwarf stars that are on the main sequence and the so-called giant stars that are not becomes smaller; for the hottest stars it is not directly observable. For those stars the terms dwarf and giant refer to differences in spectral lines which indicate if a star is on the main sequence or off it. Nevertheless, very hot main-sequence stars are still sometimes called dwarfs, even though they have roughly the same size and brightness as the "giant" stars of that temperature.
The common use of dwarf to mean main sequence is confusing in another way, because there are dwarf stars which are not main-sequence stars. For example, white dwarfs are a different kind of star that is much smaller than main-sequence stars—being roughly the size of the Earth. These represent the final evolutionary stage of many main-sequence stars.
- L = 4πσR²Teff⁴
The mass, radius and luminosity of a star are closely interlinked, and their respective values can be approximated by three relations. First is the Stefan–Boltzmann law, which relates the luminosity L, the radius R and the surface temperature Teff. Second is the mass–luminosity relation, which relates the luminosity L and the mass M. Finally, the relationship between M and R is close to linear. The ratio of M to R increases by a factor of only three over 2.5 orders of magnitude of M. This relation is roughly proportional to the star's inner temperature TI, and its extremely slow increase reflects the fact that the rate of energy generation in the core strongly depends on this temperature, while it has to fit the mass–luminosity relation. Thus, a too high or too low temperature will result in stellar instability.
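As a quick sanity check of the first relation, the sketch below evaluates L = 4πσR²Teff⁴ with standard solar reference values for R and Teff (the numerical constants are assumptions used for illustration, not values from this article) and compares the result with the nominal solar luminosity.

```python
# Stefan-Boltzmann check: L = 4 * pi * sigma * R^2 * T_eff^4
# Constants below are standard reference values, used here only for illustration.
import math

SIGMA = 5.670374e-8     # W m^-2 K^-4
R_SUN = 6.957e8         # m
T_EFF_SUN = 5772.0      # K
L_SUN = 3.828e26        # W (nominal solar luminosity, for comparison)

def luminosity(radius_m, t_eff_k):
    return 4.0 * math.pi * SIGMA * radius_m**2 * t_eff_k**4

L = luminosity(R_SUN, T_EFF_SUN)
print(f"L = {L:.3e} W  (= {L / L_SUN:.3f} L_sun)")   # close to 1.00 L_sun
```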
A better approximation is to take ε, the energy generation rate per unit mass, with ε proportional to TI^15, where TI is the core temperature. This is suitable for stars at least as massive as the Sun, exhibiting the CNO cycle, and gives the better fit R ∝ M^0.78.
Sample parameters
The table below shows typical values for stars along the main sequence. The values of luminosity (L), radius (R) and mass (M) are relative to the Sun—a dwarf star with a spectral classification of G2 V. The actual values for a star may vary by as much as 20–30% from the values listed below.
Table of main-sequence stellar parameters

| Stellar class | Radius (R/R☉) | Mass (M/M☉) | Luminosity (L/L☉) | Temperature (K) | Examples |
|---|---|---|---|---|---|
| O6 | 18 | 40 | 500,000 | 38,000 | Theta1 Orionis C |
| B0 | 7.4 | 18 | 20,000 | 30,000 | Phi1 Orionis |
| B5 | 3.8 | 6.5 | 800 | 16,400 | Pi Andromedae A |
| A0 | 2.5 | 3.2 | 80 | 10,800 | Alpha Coronae Borealis A |
| A5 | 1.7 | 2.1 | 20 | 8,620 | Beta Pictoris |
| F0 | 1.3 | 1.7 | 6 | 7,240 | Gamma Virginis |
| F5 | 1.2 | 1.3 | 2.5 | 6,540 | Eta Arietis |
| G0 | 1.05 | 1.10 | 1.26 | 5,920 | Beta Comae Berenices |
| G2 | 1.00 | 1.00 | 1.00 | 5,780 | Sun[note 2] |
| G5 | 0.93 | 0.93 | 0.79 | 5,610 | Alpha Mensae |
| K0 | 0.85 | 0.78 | 0.40 | 5,240 | 70 Ophiuchi A |
| K5 | 0.74 | 0.69 | 0.16 | 4,410 | 61 Cygni A |
| M0 | 0.63 | 0.47 | 0.063 | 3,920 | Gliese 185 |
| M5 | 0.32 | 0.21 | 0.0079 | 3,120 | EZ Aquarii A |
| M8 | 0.13 | 0.10 | 0.0008 | 2,660 | Van Biesbroeck's star |
Energy generation
All main-sequence stars have a core region where energy is generated by nuclear fusion. The temperature and density of this core are at the levels necessary to sustain the energy production that will support the remainder of the star. A reduction of energy production would cause the overlaying mass to compress the core, resulting in an increase in the fusion rate because of higher temperature and pressure. Likewise an increase in energy production would cause the star to expand, lowering the pressure at the core. Thus the star forms a self-regulating system in hydrostatic equilibrium that is stable over the course of its main sequence lifetime.
Main-sequence stars employ two types of hydrogen fusion processes, and the rate of energy generation from each type depends on the temperature in the core region. Astronomers divide the main sequence into upper and lower parts, based on which of the two is the dominant fusion process. In the lower main sequence, energy is primarily generated as the result of the proton-proton chain, which directly fuses hydrogen together in a series of stages to produce helium. Stars in the upper main sequence have sufficiently high core temperatures to efficiently use the CNO cycle. (See the chart.) This process uses atoms of carbon, nitrogen and oxygen as intermediaries in the process of fusing hydrogen into helium.
At a stellar core temperature of 18 million kelvins, the PP process and CNO cycle are equally efficient, and each type generates half of the star's net luminosity. As this is the core temperature of a star with about 1.5 solar masses, the upper main sequence consists of stars above this mass. Thus, roughly speaking, stars of spectral class F or cooler belong to the lower main sequence, while class A stars or hotter are upper main-sequence stars. The transition in primary energy production from one form to the other spans a range difference of less than a single solar mass. In the Sun, a one solar mass star, only 1.5% of the energy is generated by the CNO cycle. By contrast, stars with 1.8 solar masses or above generate almost their entire energy output through the CNO cycle.
The observed upper limit for a main-sequence star is 120–200 solar masses. The theoretical explanation for this limit is that stars above this mass can not radiate energy fast enough to remain stable, so any additional mass will be ejected in a series of pulsations until the star reaches a stable limit. The lower limit for sustained proton-proton nuclear fusion is about 0.08 solar masses. Below this threshold are sub-stellar objects that can not sustain hydrogen fusion, known as brown dwarfs.
Because there is a temperature difference between the core and the surface, or photosphere, energy is transported outward. The two modes for transporting this energy are radiation and convection. A radiation zone, where energy is transported by radiation, is stable against convection and there is very little mixing of the plasma. By contrast, in a convection zone the energy is transported by bulk movement of plasma, with hotter material rising and cooler material descending. Convection is a more efficient mode for carrying energy than radiation, but it will only occur under conditions that create a steep temperature gradient.
In massive stars (above 10 solar masses) the rate of energy generation by the CNO cycle is very sensitive to temperature, so the fusion is highly concentrated at the core. Consequently, there is a high temperature gradient in the core region, which results in a convection zone for more efficient energy transport. This mixing of material around the core removes the helium ash from the hydrogen-burning region, allowing more of the hydrogen in the star to be consumed during the main-sequence lifetime. The outer regions of a massive star transport energy by radiation, with little or no convection.
Intermediate mass stars such as Sirius may transport energy primarily by radiation, with a small core convection region. Medium-sized, low mass stars like the Sun have a core region that is stable against convection, with a convection zone near the surface that mixes the outer layers. This results in a steady buildup of a helium-rich core, surrounded by a hydrogen-rich outer region. By contrast, cool, very low-mass stars (below 0.4 solar masses) are convective throughout. Thus the helium produced at the core is distributed across the star, producing a relatively uniform atmosphere and a proportionately longer main sequence lifespan.
Luminosity-color variation
As non-fusing helium ash accumulates in the core of a main-sequence star, the reduction in the abundance of hydrogen per unit mass results in a gradual lowering of the fusion rate within that mass. Since it is the outflow of fusion-supplied energy that supports the higher layers of the star, the core is compressed, producing higher temperatures and pressures. Both factors increase the rate of fusion thus moving the equilibrium towards a smaller, denser, hotter core producing more energy whose increased outflow pushes the higher layers further out. Thus there is a steady increase in the luminosity and radius of the star over time. For example, the luminosity of the early Sun was only about 70% of its current value. As a star ages this luminosity increase changes its position on the HR diagram. This effect results in a broadening of the main sequence band because stars are observed at random stages in their lifetime. That is, the main sequence band develops a thickness on the HR diagram; it is not simply a narrow line.
Other factors that broaden the main sequence band on the HR diagram include uncertainty in the distance to stars and the presence of unresolved binary stars that can alter the observed stellar parameters. However, even perfect observation would show a fuzzy main sequence because mass is not the only parameter that affects a star's color and luminosity. Variations in chemical composition caused by the initial abundances, the star's evolutionary status, interaction with a close companion, rapid rotation, or a magnetic field can all slightly change a main-sequence star's HR diagram position, to name just a few factors. As an example, there are metal-poor stars (with a very low abundance of elements with higher atomic numbers than helium) that lie just below the main sequence and are known as subdwarfs. These stars are fusing hydrogen in their cores and so they mark the lower edge of main sequence fuzziness caused by variance in chemical composition.
A nearly vertical region of the HR diagram, known as the instability strip, is occupied by pulsating variable stars known as Cepheid variables. These stars vary in magnitude at regular intervals, giving them a pulsating appearance. The strip intersects the upper part of the main sequence in the region of class A and F stars, which are between one and two solar masses. Pulsating stars in this part of the instability strip that intersects the upper part of the main sequence are called Delta Scuti variables. Main-sequence stars in this region experience only small changes in magnitude and so this variation is difficult to detect. Other classes of unstable main-sequence stars, like Beta Cephei variables, are unrelated to this instability strip.
The total amount of energy that a star can generate through nuclear fusion of hydrogen is limited by the amount of hydrogen fuel that can be consumed at the core. For a star in equilibrium, the energy generated at the core must be at least equal to the energy radiated at the surface. Since the luminosity gives the amount of energy radiated per unit time, the total life span can be estimated, to first approximation, as the total energy produced divided by the star's luminosity.
For a star with at least 0.5 solar masses, once the hydrogen supply in its core is exhausted and it expands to become a red giant, it can start to fuse helium atoms to form carbon. The energy output of the helium fusion process per unit mass is only about a tenth the energy output of the hydrogen process, and the luminosity of the star increases. This results in a much shorter length of time in this stage compared to the main sequence lifetime. (For example, the Sun is predicted to spend 130 million years burning helium, compared to about 12 billion years burning hydrogen.) Thus, about 90% of the observed stars above 0.5 solar masses will be on the main sequence. On average, main-sequence stars are known to follow an empirical mass-luminosity relationship. The luminosity (L) of the star is roughly proportional to the total mass (M) as the following power law:

L/L☉ ≈ (M/M☉)^3.5
This relationship applies to main-sequence stars in the range 0.1–50 solar masses.
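As a rough cross-check, the snippet below compares this power law against a few rows of the parameter table earlier in the article (all values in solar units); the agreement is only approximate, consistent with the 20–30% scatter quoted for the table.

```python
# Rough check of L ~ M^3.5 against a few table rows (solar units).
table_rows = {
    "B0": (18.0, 20000.0),
    "A0": (3.2, 80.0),
    "G2": (1.0, 1.0),
    "M0": (0.47, 0.063),
}
for cls, (mass, lum_table) in table_rows.items():
    lum_predicted = mass ** 3.5
    print(f"{cls}: table L = {lum_table:g} L_sun, M^3.5 = {lum_predicted:.3g} L_sun")
```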
The amount of fuel available for nuclear fusion is proportional to the mass of the star. Thus, the lifetime of a star on the main sequence can be estimated by comparing it to solar evolutionary models. The Sun has been a main-sequence star for about 4.5 billion years and it will become a red giant in 6.5 billion years, for a total main sequence lifetime of roughly 10^10 years. Hence:

τ_MS ≈ 10^10 years × (M/M☉)(L☉/L)
where M and L are the mass and luminosity of the star, respectively, M☉ is a solar mass, L☉ is the solar luminosity and τ_MS is the star's estimated main sequence lifetime.
Although more massive stars have more fuel to burn and might be expected to last longer, they also must radiate a proportionately greater amount with increased mass. Thus, the most massive stars may remain on the main sequence for only a few million years, while stars with less than a tenth of a solar mass may last for over a trillion years.
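The two relations above can be combined into a one-line estimate: with L ∝ M^3.5, the main-sequence lifetime falls off roughly as M^−2.5. The sketch below is only an order-of-magnitude illustration of that scaling.

```python
# Order-of-magnitude main-sequence lifetime: tau ≈ 1e10 yr * (M/M_sun) / (L/L_sun),
# with L/L_sun ≈ (M/M_sun)**3.5, so tau ≈ 1e10 * (M/M_sun)**-2.5 years.
def ms_lifetime_years(mass_solar):
    luminosity_solar = mass_solar ** 3.5
    return 1e10 * mass_solar / luminosity_solar

for m in (0.5, 1.0, 2.0, 10.0, 40.0):
    print(f"M = {m:5.1f} M_sun -> tau ≈ {ms_lifetime_years(m):.2e} yr")
# Massive stars last only millions of years; the lightest last far longer than 1e10 yr.
```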
The exact mass-luminosity relationship depends on how efficiently energy can be transported from the core to the surface. A higher opacity has an insulating effect that retains more energy at the core, so the star does not need to produce as much energy to remain in hydrostatic equilibrium. By contrast, a lower opacity means energy escapes more rapidly and the star must burn more fuel to remain in equilibrium. Note, however, that a sufficiently high opacity can result in energy transport via convection, which changes the conditions needed to remain in equilibrium.
In high-mass main-sequence stars, the opacity is dominated by electron scattering, which is nearly constant with increasing temperature. Thus the luminosity only increases as the cube of the star's mass. For stars below 10 times the solar mass, the opacity becomes dependent on temperature, resulting in the luminosity varying approximately as the fourth power of the star's mass. For very low mass stars, molecules in the atmosphere also contribute to the opacity. Below about 0.5 solar masses, the luminosity of the star varies as the mass to the power of 2.3, producing a flattening of the slope on a graph of mass versus luminosity. Even these refinements are only an approximation, however, and the mass-luminosity relation can vary depending on a star's composition.
Evolutionary tracks
Once a main-sequence star consumes the hydrogen at its core, the loss of energy generation causes gravitational collapse to resume. For stars with less than 0.23 solar masses, they are predicted to become white dwarfs once energy generation by nuclear fusion of hydrogen at the core comes to a halt. For stars above this threshold with up to 10 solar masses, the hydrogen surrounding the helium core reaches sufficient temperature and pressure to undergo fusion, forming a hydrogen-burning shell. In consequence of this change, the outer envelope of the star expands and decreases in temperature, turning it into a red giant. At this point the star is evolving off the main sequence and entering the giant branch. The path the star now follows across the HR diagram, to the upper right of the main sequence, is called an evolutionary track.
The helium core of a red giant continues to collapse until it is entirely supported by electron degeneracy pressure—a quantum mechanical effect that restricts how closely matter can be compacted. For stars of more than about 0.5 solar masses, the core can reach a temperature where it becomes hot enough to burn helium into carbon via the triple alpha process. Stars with more than 5–7.5 solar masses can also fuse elements with higher atomic numbers. For stars with ten or more solar masses, this process can lead to an increasingly dense core that finally collapses, ejecting the star's overlying layers in a Type II supernova explosion, Type Ib supernova or Type Ic supernova.
When a cluster of stars is formed at about the same time, the life span of these stars will depend on their individual masses. The most massive stars will leave the main sequence first, followed steadily in sequence by stars of ever lower masses. Thus the stars will evolve in order of their position on the main sequence, proceeding from the most massive at the left toward the right of the HR diagram. The current position where stars in this cluster are leaving the main sequence is known as the turn-off point. By knowing the main sequence lifespan of stars at this point, it becomes possible to estimate the age of the cluster.
See also
- By measuring the difference between these values, this eliminates the need to correct the magnitudes for distance. However, see extinction.
- The Sun is a typical type G2V star.
- Harding E. Smith (1999-04-21). "The Hertzsprung-Russell Diagram". Gene Smith's Astronomy Tutorial. Center for Astrophysics & Space Sciences, University of California, San Diego. Retrieved 2009-10-29.
- Richard Powell (2006). "The Hertzsprung Russell Diagram". An Atlas of the Universe. Retrieved 2009-10-29.
- Adams, Fred C.; Laughlin, Gregory (April 1997). "A Dying Universe: The Long Term Fate and Evolution of Astrophysical Objects". Reviews of Modern Physics 69 (2): 337–372. arXiv:astro-ph/9701131. Bibcode:1997RvMP...69..337A. doi:10.1103/RevModPhys.69.337.
- Gilmore, Gerry (2004). "The Short Spectacular Life of a Superstar". Science 304 (5697): 1915–1916. doi:10.1126/science.1100370. PMID 15218132. Retrieved 2007-05-01.
- "The Brightest Stars Don't Live Alone". ESO Press Release. Retrieved 27 July 2012.
- Longair, Malcolm S. (2006). The Cosmic Century: A History of Astrophysics and Cosmology. Cambridge University Press. pp. 25–26. ISBN 0-521-47436-1.
- Brown, Laurie M.; Pais, Abraham; Pippard, A. B., eds. (1995). Twentieth Century Physics. Bristol; New York: Institute of Physics, American Institute of Physics. p. 1696. ISBN 0-7503-0310-7. OCLC 33102501.
- Russell, H. N. (1913). ""Giant" and "dwarf" stars". The Observatory 36: 324–329. Bibcode:1913Obs....36..324R.
- Strömgren, Bengt (1933). "On the Interpretation of the Hertzsprung-Russell-Diagram". Zeitschrift für Astrophysik 7: 222–248. Bibcode:1933ZA......7..222S.
- Schatzman, Evry L.; Praderie, Francoise (1993). The Stars. Springer. pp. 96–97. ISBN 3-540-54196-9.
- Morgan, W. W.; Keenan, P. C.; Kellman, E. (1943). An atlas of stellar spectra, with an outline of spectral classification. Chicago, Illinois: The University of Chicago press. Retrieved 2008-08-12.
- Unsöld, Albrecht (1969). The New Cosmos. Springer-Verlag New York Inc. p. 268. ISBN 0-387-90886-2.
- Gloeckler, George; Geiss, Johannes (2004). "Composition of the local interstellar medium as diagnosed with pickup ions". Advances in Space Research 34 (1): 53–60. Bibcode:2004AdSpR..34...53G. doi:10.1016/j.asr.2003.02.054.
- Kroupa, Pavel (2002-01-04). "The Initial Mass Function of Stars: Evidence for Uniformity in Variable Systems". Science 295 (5552): 82–91. arXiv:astro-ph/0201098. Bibcode:2002Sci...295...82K. doi:10.1126/science.1067524. PMID 11778039. Retrieved 2008-12-08.
- Schilling, Govert (2001). "New Model Shows Sun Was a Hot Young Star". Science 293 (5538): 2188–2189. doi:10.1126/science.293.5538.2188. PMID 11567116. Retrieved 2007-02-04.
- "Zero Age Main Sequence". The SAO Encyclopedia of Astronomy. Swinburne University. Retrieved 2007-12-09.
- Clayton, Donald D. (1983). Principles of Stellar Evolution and Nucleosynthesis. University of Chicago Press. ISBN 0-226-10953-4.
- "Main Sequence Stars". Australia Telescope Outreach and Education. Retrieved 2007-12-04.
- Moore, Patrick (2006). The Amateur Astronomer. Springer. ISBN 1-85233-878-4.
- "White Dwarf". COSMOS—The SAO Encyclopedia of Astronomy. Swinburne University. Retrieved 2007-12-04.
- "Origin of the Hertzsprung-Russell Diagram". University of Nebraska. Retrieved 2007-12-06.
- "A course on stars' physical properties, formation and evolution". University of St. Andrews. Retrieved 2010-05-18.
- Siess, Lionel (2000). "Computation of Isochrones". Institut d'Astronomie et d'Astrophysique, Université libre de Bruxelles. Retrieved 2007-12-06.[dead link]—Compare, for example, the model isochrones generated for a ZAMS of 1.1 solar masses. This is listed in the table as 1.26 times the solar luminosity. At metallicity Z=0.01 the luminosity is 1.34 times solar luminosity. At metallicity Z=0.04 the luminosity is 0.89 times the solar luminosity.
| http://en.wikipedia.org/wiki/Main_sequence | 13
51 | Bernoulli vs. Newton
Some teachers are adamant that airplanes fly because the pressure above the wing is reduced due to the Bernoulli effect.
Others are equally adamant that airplanes fly because wings deflect air downward so that in reaction the plane is forced upward.
Many of the supporters of one of these points of view believe that the other point of view is wrong.
Yet both points of view are correct.
Bernoulli's theorem is just a statement of the law of conservation of energy and so it is true for airplane wings.
Newton's law of action and reaction is also true in every case including airplane wings.
It is interesting that while both of these theories are true, neither is used by airplane wing designers to compute the lift of a wing. They use a theory in which lift is due to the circulation of air about the wing.
This is Bernoulli's theorem in its simplest form:
When the speed of a fluid increases the pressure decreases.
Notice that this is a statement about the relationship between the change in speed of a fluid parcel and the change in its pressure.
Measurements made by attaching pressure sensors to the top and bottom of a wing show that the pressure is reduced at the top of the wing, while the pressure on the bottom of the wing either remains at atmospheric pressure or rises above it, depending on the angle of the wing with respect to its direction of motion through the air, called the angle of attack.
See the Bernoulli Bottle exploration. (Coming Soon.)
Here is a case study using Bernoulli's equation.
Air flows over an airplane wing that is convex on the top and flat below.
The flat bottom of the wing is parallel to the velocity of the airplane.
The air a great distance in front of the airplane is at atmospheric pressure.
The air speeds up as it flows over the top of the wing.
Why? The same amount of air must flow through every cross-sectional area, so when the air flows through a smaller area over the top of the wing it must speed up. (This is due to the conservation of mass flow: no air is created or destroyed.)
When the air in front of the wing speeds up as it passes over the wing, its pressure must drop. Thus the air flowing over the wing has lower than atmospheric pressure.
The air flowing along the bottom of the wing travels at the same speed and so remains at atmospheric pressure.
The combination of atmospheric pressure below the wing and lower pressure above leads to a net upward force called lift.
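To put rough numbers on the speed-up described above, here is a minimal Python sketch of the mass-flow (continuity) argument. The areas and speeds are assumed values for illustration only, not measurements of any real wing.

```python
# Continuity of mass flow for (nearly) incompressible air: A1 * v1 = A2 * v2.
# All numbers below are illustrative assumptions, not data for a real wing.

def speed_over_wing(v1, a1, a2):
    """Air speed after its effective flow area shrinks from a1 to a2 (m/s)."""
    return v1 * a1 / a2

v1 = 70.0   # airspeed well ahead of the wing, m/s (assumed)
a1 = 1.00   # effective cross-sectional area of the approaching air stream, m^2 (assumed)
a2 = 0.85   # smaller effective area over the curved top of the wing, m^2 (assumed)

v2 = speed_over_wing(v1, a1, a2)
print(f"Air speeds up from {v1:.0f} m/s to {v2:.1f} m/s over the top of the wing")
```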
Bernoulli Math Root
Start with the law of conservation of energy:
The sum of the kinetic and potential energies of a mass at one place is equal to the sum at another place plus the work done on the mass between the two points.
KE1 + PE1 = W + KE2 + PE2
(1/2) m v1² + m g h1 = (integral of F dx) + (1/2) m v2² + m g h2
Where m is mass, h is height, v is velocity, g is the acceleration of gravity, F is the external net force on the mass, and dx is the distance moved.
Let's keep everything at the same height so that h1 = h2 and we can drop the potential energy terms.
We'll also talk about a parcel of air with volume V and divide every term by V.
(1/2) (m/V) v1² = (F2 - F1) dx/V + (1/2) (m/V) v2²
Now m/V is the density ρ,
and the volume V is the cross-sectional area, A, of the air parcel times its length dx,
so (F2 - F1) dx/V is P2 - P1,
where P is the pressure F/A.
So we have
(1/2) ρ v1² = (P2 - P1) + (1/2) ρ v2²
P1 + (1/2) ρ v1² = P2 + (1/2) ρ v2²
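As a worked example of the result just derived, here is a short Python sketch that turns an assumed speed difference between the top and bottom of a wing into a pressure difference and a rough lift figure. The speeds, wing area and air density are illustrative assumptions.

```python
RHO_AIR = 1.2  # approximate density of air near sea level, kg/m^3

def bernoulli_pressure_difference(v_top, v_bottom, rho=RHO_AIR):
    """P_bottom - P_top from P1 + 1/2*rho*v1^2 = P2 + 1/2*rho*v2^2 (Pa)."""
    return 0.5 * rho * (v_top**2 - v_bottom**2)

v_bottom = 70.0   # m/s along the flat bottom of the wing (assumed)
v_top = 80.0      # m/s over the curved top of the wing (assumed)
wing_area = 15.0  # m^2 of wing planform (assumed)

dp = bernoulli_pressure_difference(v_top, v_bottom)
print(f"Pressure difference: {dp:.0f} Pa")
print(f"Rough lift estimate: {dp * wing_area:.0f} N over {wing_area:.0f} m^2 of wing")
```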
Photographs show that when a flying wing collides with air, the air is deflected downward. The downward force on the air is associated with an upward reaction force on the wing called lift.
A flat plate wing like that on a paper airplane will produce lift if it has an angle of attack. That is, if it collides with the air so that the air hits the bottom of the wing. The collision of the air with the bottom of the wing deflects the air down and produces lift.
A wing with an airfoil shape curve on top and flat surface on the bottom will produce more lift than a flat wing. Indeed, more air is deflected down by the top of the wing than by the bottom.
You can see that more air is deflected by the top of the wing than by the bottom particularly well when the wing has zero angle of attack. In this case the bottom of the wing deflects no air, and yet the wing still produces lift because the top is deflecting air downward.
The downward force on the air is equal and opposite to the upward force on the wing.
Newton Math Root
The downward force on the air is given by Newton's law F = ma
which is also F = dp/dt
where p is momentum, mv, and t is time.
From the point of view of the wing the air approaches the wing parallel to the ground, with a momentum parallel to the ground. After colliding with the wing the air travels downward, with a momentum toward the ground. The change in momentum of the air is directed downward. The force on the air is downward.
The force is thus:
F = dp/dt = d(mv)/dt
where the mass of air is the density, ρ, of air times the volume, V, deflected:
m = ρ*V
The volume of air deflected is the cross-sectional area of the air deflected times the length of the air parcel deflected in the time interval dt. This is:
m = ρ*A*v*dt
so F = ρ*A*v*v = ρ*A*v²
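Here is a minimal Python sketch of the same kind of estimate from the Newton picture, using the F = ρAv² result above. The deflected area and airspeed are assumed values for illustration only.

```python
RHO_AIR = 1.2  # approximate density of air near sea level, kg/m^3

def deflection_force(rho, area, v):
    """Force needed to deflect a stream of air downward: F = rho * A * v^2 (N)."""
    return rho * area * v**2

area = 2.0  # m^2, effective cross-section of the air deflected by the wing (assumed)
v = 70.0    # m/s airspeed (assumed)

print(f"Deflection force at {v:.0f} m/s:  {deflection_force(RHO_AIR, area, v):.0f} N")
print(f"Deflection force at {2*v:.0f} m/s: {deflection_force(RHO_AIR, area, 2*v):.0f} N (four times as much)")
```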
In both cases the lifting force increases as the speed of the air squared.
How to use the Bernoulli equation.
To use the Bernoulli equation you must:
Point to two different positions along a streamline.
A streamline is the path followed by a parcel of air.
Once you have pointed to the two positions you can comment on the change in the pressure between the two positions if you know the differences in the speed of the parcel at the two positions.
You can alternatively learn the change in speed if you know the change in pressure.
Consider a pitot-static tube otherwise known as a pitot tube. This is used to measure the speed of an airplane through the air. It is a tube that sticks out in front of the airplane with a hole in its end and several holes along its sides.
In the frame of reference of the airplane, the airplane is stationary and the air is moving past it.
Far ahead of the airplane the air is at atmospheric pressure.
The air collides with the front of the pitot tube and comes to rest. Since the air has slowed down its pressure increases. The increase in air pressure at the front of the pitot tube is proportional to the speed of the air relative to the plane squared!
The air flows past the sides of the pitot tube at the same speed as the distant air. It has no change in speed and so no change in pressure. It is at atmospheric pressure.
The difference in these two pressures can be used to calculate the speed of the plane.
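A short Python sketch of that calculation, using the simplest (incompressible) form of Bernoulli's equation, (1/2) ρ v² = P_front - P_side. The two pressure readings below are assumed values for illustration.

```python
import math

RHO_AIR = 1.2  # approximate density of air, kg/m^3

def airspeed_from_pitot(p_front, p_side, rho=RHO_AIR):
    """Airspeed from a pitot tube: 1/2*rho*v^2 = p_front - p_side, so v = sqrt(2*dp/rho)."""
    return math.sqrt(2.0 * (p_front - p_side) / rho)

p_side = 101325.0   # Pa at the side holes, ordinary atmospheric pressure (assumed)
p_front = 104325.0  # Pa at the front hole, where the air is brought to rest (assumed)

v = airspeed_from_pitot(p_front, p_side)
print(f"Indicated airspeed: {v:.0f} m/s ({v * 3.6:.0f} km/h)")
```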
A common mistake: The air rushing by the sides of the pitot tube does not have less pressure because of its speed relative to the surface of the tube! The Bernoulli equation gives the change in pressure due to changes in velocity along a streamline not due to relative velocity with respect to a surface.
The Curve Ball that Curves the Wrong Way!
Consider a spinning baseball moving from the pitcher to the batter. We have a right handed pitcher and a right handed batter. Viewed from above the ball is rotating counterclockwise.
This means that as the ball crosses the plate the side of the ball near the batter has a higher airspeed than the side opposite the batter.
Yet the pressure on the baseball is lower on the side away from the batter and the ball is curving away from the batter. This is because the lowering of pressure due to Bernoulli's equation is not due to the relative velocity of the air and the surface of the ball!
It is due to the change in the speed of the air along a streamline.
NASA has a good site about the lift of airplane wings. (Site) | http://www.exo.net/~pauld/physics/bernoulli.html | 13 |
134 | Introduction to special relativity
In physics, special relativity is a fundamental theory concerning space and time, developed by Albert Einstein in 1905 as a modification of Galilean relativity. (See "History of special relativity" for a detailed account and the contributions of Hendrik Lorentz and Henri Poincaré.) The theory was able to explain some pressing theoretical and experimental issues in the physics of the time involving light and electrodynamics, such as the failure of the 1887 Michelson–Morley experiment, which aimed to measure differences in the relative speed of light due to the Earth's motion through the hypothetical, and now discredited, luminiferous aether. The aether was then considered to be the medium of propagation of electromagnetic waves such as light.
Einstein postulated that the speed of light in free space is the same for all observers, regardless of their motion relative to the light source, where we may think of an observer as an imaginary entity with a sophisticated set of measurement devices, at rest with respect to itself, that perfectly record the positions and times of all events in space and time. This postulate stemmed from the assumption that Maxwell's equations of electromagnetism, which predict a specific speed of light in a vacuum, hold in any inertial frame of reference rather than, as was previously believed, just in the frame of the aether. This prediction contradicted the laws of classical mechanics, which had been accepted for centuries, by arguing that time and space are not fixed and in fact change to maintain a constant speed of light regardless of the relative motions of sources and observers. Einstein's approach was based on thought experiments, calculations, and the principle of relativity, which is the notion that all physical laws should appear the same (that is, take the same basic form) to all inertial observers. Today, scientists are so comfortable with the idea that the speed of light is always the same that the metre is now defined as "the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second." This means that the speed of light is by convention 299 792 458 m/s (approximately 1.079 billion kilometres per hour, or 671 million miles per hour).
The predictions of special relativity are almost identical to those of Galilean relativity for most everyday phenomena, in which speeds are much lower than the speed of light, but it makes different, non-obvious predictions for objects moving at very high speeds. These predictions have been experimentally tested on numerous occasions since the theory's inception and were confirmed by those experiments. The major predictions of special relativity are:
- Relativity of simultaneity: Observers who are in motion with respect to each other may disagree on whether two events occurred at the same time or one occurred before the other.
- Time dilation (An observer watching two identical clocks, one moving and one at rest, will measure the moving clock to tick more slowly)
- Length contraction (Along the direction of motion, a rod moving with respect to an observer will be measured to be shorter than an identical rod at rest), and
- The equivalence of mass and energy (written as E = mc²).
Special relativity predicts a non-linear velocity addition formula which prevents speeds greater than that of light from being observed. In 1908, Hermann Minkowski reformulated the theory based on different postulates of a more geometrical nature. This approach considers space and time as being different components of a single entity, the spacetime, which is "divided" in different ways by observers in relative motion. Likewise, energy and momentum are the components of the four-momentum, and the electric and magnetic field are the components of the electromagnetic tensor.
As Galilean relativity is now considered an approximation of special relativity valid for low speeds, special relativity is considered an approximation of the theory of general relativity valid for weak gravitational fields. General relativity postulates that physical laws should appear the same to all observers (an accelerating frame of reference being equivalent to one in which a gravitational field acts), and that gravitation is the effect of the curvature of spacetime caused by energy (including mass).
Reference frames and Galilean relativity: a classical prelude
A reference frame is simply a selection of what constitutes a stationary object. Once the velocity of a certain object is arbitrarily defined to be zero, the velocity of everything else in the universe can be measured relative to that object.[Note 1]
One oft-used example is the difference in measurements of objects on a train as made by an observer on the train compared to those made by one standing on a nearby platform as it passes.
Consider the seats on the train car in which the passenger observer is sitting.
The distances between these objects and the passenger observer do not change. Therefore, this observer measures all of the seats to be at rest, since he is stationary from his own perspective.
The observer standing on the platform would see exactly the same objects but interpret them very differently. The distances between this observer and the seats on the train car are changing, and so the observer concludes that the seats, and the whole train, are moving forward. Thus for one observer the seats are at rest, while for the other the seats are moving, and both are correct, since they are using different definitions of "at rest" and "moving". Each observer has a distinct "frame of reference" in which velocities are measured, the rest frame of the platform and the rest frame of the train – or simply the platform frame and the train frame.
Why can't we select one of these frames to be the "correct" one? Or more generally, why is there not a frame we can select to be the basis for all measurements, an "absolutely stationary" frame?
Aristotle imagined the Earth lying at the centre of the universe (the geocentric model), unmoving as other objects moved about it. In this worldview, one could select the surface of the Earth as the absolute frame. However, as the geocentric model was challenged in the 1500s and finally abandoned, it was realised that the Earth was not stationary at all, but both rotating on its axis and orbiting the Sun. In this case the Earth is clearly not the absolute frame. But perhaps there is some other frame one could select, perhaps the Sun's?
Galileo challenged this idea and argued that the concept of an absolute frame, and thus absolute velocity, was unreal; all motion was relative. Galileo gave the common-sense "formula" for adding velocities: if
- particle P is moving at velocity v with respect to reference frame A and
- reference frame A is moving at velocity u with respect to reference frame B, then
- the velocity of P with respect to B is given by v + u.
In modern terms, we expand the application of this concept from velocity to all physical measurements – according to what we now call the Galilean transformation, there is no absolute frame of reference. An observer on the train has no measurement that distinguishes whether the train is moving forward at a constant speed, or the platform is moving backwards at that same speed. The only meaningful statement is that the train and platform are moving relative to each other, and any observer can choose to define what constitutes a speed equal to zero. When considering trains moving by platforms it is generally convenient to select the frame of reference of the platform, but such a selection would not be convenient when considering planetary motion and is not intrinsically more valid.
One can use this formula to explore whether or not any possible measurement would remain the same in different reference frames. For instance, if the passenger on the train threw a ball forward, he would measure one velocity for the ball, and the observer on the platform another. After applying the formula above, though, both would agree that the velocity of the ball is the same once corrected for a different choice of what speed is considered zero. This means that motion is "invariant". Laws of classical mechanics, like Newton's second law of motion, all obey this principle because they have the same form after applying the transformation. As Newton's law involves the derivative of velocity, any constant velocity added in a Galilean transformation to a different reference frame contributes nothing (the derivative of a constant is zero).
This means that the Galilean transformation and the addition of velocities only apply to frames that are moving at a constant velocity. Since objects tend to retain their current velocity due to a property we call inertia, frames that move at a constant velocity are known as inertial reference frames. The Galilean transformation, then, does not apply to accelerations, only velocities, and classical mechanics is not invariant under acceleration. This mirrors the real world, where acceleration is easily distinguishable from smooth motion in any number of ways. For example, if an observer on a train saw a ball roll backward off a table, he would be able to infer that the train was accelerating forward, since the ball remains at rest unless acted upon by an external force. Therefore, the only explanation is that the train has moved underneath the ball, resulting in an apparent motion of the ball. Addition of a time-varying velocity, corresponding to an accelerated reference frame, changes the formula (see pseudo-force).
Both the Aristotelian and Galilean views of motion contain an important assumption. Motion is defined as the change of position over time, but both of these quantities, position and time, are not defined within the system. It is assumed, explicitly in the Greek worldview, that space and time lie outside physical existence and are absolute even if the objects within them are measured relative to each other. The Galilean transformations can only be applied because both observers are assumed to be able to measure the same time and space, regardless of their frames' relative motions. So in spite of there being no absolute motion, it is assumed there is some, perhaps unknowable, absolute space and time.
Classical physics and electromagnetism
Through the era between Newton and around the start of the 20th century, the development of classical physics had made great strides. Newton's application of the inverse square law to gravity was the key to unlocking a wide variety of physical events, from heat to light, and calculus made the direct calculation of these effects tractable. Over time, new mathematical techniques, notably the Lagrangian, greatly simplified the application of these physical laws to more complex problems.
As electricity and magnetism were better explored, it became clear that the two concepts were related. Over time, this work culminated in Maxwell's equations, a set of four equations that could be used to calculate the entirety of electromagnetism. One of the most interesting results of the application of these equations was that it was possible to construct a self-sustaining wave of electrical and magnetic fields that could propagate through space. When reduced, the math demonstrated that the speed of propagation was set by a combination of two universal constants, and that this speed equalled the speed of light. Light was an electromagnetic wave.
Under the classic model, waves are displacements within a medium. In the case of light, the waves were thought to be displacements of a special medium known as the luminiferous aether, which extended through all space. This being the case, light travels in its own frame of reference, the frame of the aether. According to the Galilean transform, we should be able to measure the difference in velocities between the aether's frame and any other – a universal frame at last.
Designing an experiment to actually carry out this measurement proved very difficult, however, as the speeds and timing involved made accurate measurement difficult. The measurement problem was eventually solved with the Michelson–Morley experiment. To everyone's surprise, no relative motion was seen. Either the aether was travelling at the same velocity as the Earth, difficult to imagine given the Earth's complex motion, or there was no aether. Follow-up experiments tested various possibilities, and by the start of the 20th century it was becoming increasingly difficult to escape the conclusion that the aether did not exist.
These experiments all showed that light simply did not follow the Galilean transformation. And yet it was clear that physical objects emitted light, which led to unsolved problems. If one were to carry out the experiment on the train by "throwing light" instead of balls, if light does not follow the Galilean transformation then the observers should not agree on the results. Yet it was apparent that the universe disagreed; physical systems known to be at great speeds, like distant stars, had physics that were as similar to our own as measurements allowed. Some sort of transformation had to be acting on light, or better, a single transformation for both light and matter.
The development of a suitable transformation to replace the Galilean transformation is the basis of special relativity.
Invariance of length: the Euclidean picture
In special relativity, space and time are joined into a unified four-dimensional continuum called spacetime. To gain a sense of what spacetime is like, we must first look at the Euclidean space of classical Newtonian physics. This approach to explaining the theory of special relativity begins with the concept of "length".
In everyday experience, it seems that the length of objects remains the same no matter how they are rotated or moved from place to place; as a result the simple length of an object doesn't appear to change or is invariant. However, as is shown in the illustrations below, what is actually being suggested is that length seems to be invariant in a three-dimensional coordinate system.
One of the basic theorems of vector algebra is that the length of a vector does not change when it is rotated. However, a closer inspection tells us that this is only true if we consider rotations confined to the plane. If we introduce rotation in the third dimension, then we can tilt the line out of the plane. In this case the projection of the line on the plane will get shorter. Does this mean the line's length changes? – obviously not. The world is three-dimensional and in a 3D Cartesian coordinate system the length is given by the three-dimensional version of Pythagoras's theorem:
L² = x² + y² + z²
This is invariant under all rotations. The apparent violation of invariance of length only happened because we were "missing" a dimension. It seems that, provided all the directions in which an object can be tilted or arranged are represented within a coordinate system, the length of an object does not change under rotations. With time and space considered to be outside the realm of physics itself, under classical mechanics a 3-dimensional coordinate system is enough to describe the world.
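The point can be checked numerically. The following Python sketch (using NumPy) rotates a vector out of the x-y plane and shows that its full 3D length is unchanged even though its projection onto the plane gets shorter; the particular vector and angles are arbitrary choices.

```python
import numpy as np

def rotation_about_z(theta):
    """3x3 matrix for a rotation by theta radians about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rotation_about_x(theta):
    """3x3 matrix for a rotation by theta radians about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

v = np.array([3.0, 4.0, 0.0])  # a vector lying in the x-y plane, length 5
v_tilted = rotation_about_x(0.7) @ rotation_about_z(1.2) @ v  # tilt it out of the plane

print(np.linalg.norm(v))                   # 5.0
print(np.linalg.norm(v_tilted))            # still 5.0: the 3D length is invariant
print(np.hypot(v_tilted[0], v_tilted[1]))  # less than 5.0: the 2D projection is shorter
```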
Note that invariance of length is not ordinarily considered a principle or law, not even a theorem. It is simply a statement about the fundamental nature of space itself. Space as we ordinarily conceive it is called a three-dimensional Euclidean space, because its geometrical structure is described by the principles of Euclidean geometry. The formula for distance between two points is a fundamental property of a Euclidean space, it is called the Euclidean metric tensor (or simply the Euclidean metric). In general, distance formulas are called metric tensors.
Note that rotations are fundamentally related to the concept of length. In fact, one may define length or distance to be that which stays the same (is invariant) under rotations, or define rotations to be that which keep the length invariant. Given any one, it is possible to find the other. If we know the distance formula, we can find out the formula for transforming coordinates in a rotation. If, on the other hand, we have the formula for rotations then we can find out the distance formula.
The Minkowski formulation: introduction of spacetime
After Einstein derived special relativity formally from the (at first sight counter-intuitive) assumption that the speed of light is the same to all observers, Hermann Minkowski built on mathematical approaches used in non-euclidean geometry and on the mathematical work of Lorentz and Poincaré. Minkowski showed in 1908 that Einstein's new theory could also be explained by replacing the concept of a separate space and time with a four-dimensional continuum called spacetime. This was a groundbreaking concept, and Roger Penrose has said that relativity was not truly complete until Minkowski reformulated Einstein's work.
The concept of a four-dimensional space is hard to visualise. It may help at the beginning to think simply in terms of coordinates. In three-dimensional space, one needs three real numbers to refer to a point. In the Minkowski space, one needs four real numbers (three space coordinates and one time coordinate) to refer to a point at a particular instant of time. This point, specified by the four coordinates, is called an event. The distance between two different events is called the spacetime interval.
A path through the four-dimensional spacetime (usually known as Minkowski space) is called a world line. Since it specifies both position and time, a particle having a known world line has a completely determined trajectory and velocity. This is just like graphing the displacement of a particle moving in a straight line against the time elapsed. The curve contains the complete motional information of the particle.
In the same way as the measurement of distance in 3D space needed all three coordinates, we must include time as well as the three space coordinates when calculating the distance in Minkowski space (henceforth called M). In a sense, the spacetime interval provides a combined estimate of how far apart two events occur in space as well as the time that elapses between their occurrence.
But there is a problem; time is related to the space coordinates, but they are not equivalent. Pythagoras' theorem treats all coordinates on an equal footing (see Euclidean space for more details). We can exchange two space coordinates without changing the length, but we can not simply exchange a space coordinate with time – they are fundamentally different. It is an entirely different thing for two events to be separated in space and to be separated in time. Minkowski proposed that the formula for distance needed a change. He found that the correct formula was actually quite simple, differing only by a sign from Pythagoras' theorem:
s² = x² + y² + z² - (ct)²
where c is a constant and t is the time coordinate.[Note 2] Multiplication by c, which has the dimensions L T⁻¹, converts the time to units of length and this constant has the same value as the speed of light. So the spacetime interval between two distinct events is given by
s² = (x1 - x2)² + (y1 - y2)² + (z1 - z2)² - (c t1 - c t2)²
There are two major points to be noted. Firstly, time is being measured in the same units as length by multiplying it by a constant conversion factor. Secondly, and more importantly, the time-coordinate has a different sign than the space coordinates. This means that in the four-dimensional spacetime, one coordinate is different from the others and influences the distance differently. This new "distance" may be zero or even negative. This new distance formula, called the metric of the spacetime, is at the heart of relativity. This distance formula is called the metric tensor of M. This minus sign means that a lot of our intuition about distances can not be directly carried over into spacetime intervals. For example, the spacetime interval between two events separated both in time and space may be zero (see below). From now on, the terms distance formula and metric tensor will be used interchangeably, as will be the terms Minkowski metric and spacetime interval.
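As a small illustration of this distance formula, here is a Python sketch that evaluates s² for a few events, using the sign convention given above (space terms positive, time term negative). The sample events are arbitrary choices.

```python
C = 299792458.0  # speed of light, m/s

def interval_squared(event_a, event_b):
    """Spacetime interval s^2 between events given as (t, x, y, z), t in s, x,y,z in m."""
    (t1, x1, y1, z1), (t2, x2, y2, z2) = event_a, event_b
    return (x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2 - (C * (t1 - t2))**2

origin = (0.0, 0.0, 0.0, 0.0)
one_second_later_here = (1.0, 0.0, 0.0, 0.0)   # same place, one second later
light_flash_after_1s = (1.0, C, 0.0, 0.0)      # where a light flash arrives after 1 s

print(interval_squared(origin, one_second_later_here))  # negative
print(interval_squared(origin, light_flash_after_1s))   # zero, for a path travelled at c
```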
In Minkowski spacetime the spacetime interval is the invariant length, the ordinary 3D length is not required to be invariant. The spacetime interval must stay the same under rotations, but ordinary lengths can change. Just like before, we were missing a dimension. Note that everything thus far is merely definitions. We define a four-dimensional mathematical construct which has a special formula for distance, where distance means that which stays the same under rotations (alternatively, one may define a rotation to be that which keeps the distance unchanged).
Now comes the physical part. Rotations in Minkowski space have a different interpretation than ordinary rotations. These rotations correspond to transformations of reference frames. Passing from one reference frame to another corresponds to rotating the Minkowski space. An intuitive justification for this is given below, but mathematically this is a dynamical postulate just like assuming that physical laws must stay the same under Galilean transformations (which seems so intuitive that we don't usually recognise it to be a postulate).
Since by definition rotations must keep the distance same, passing to a different reference frame must keep the spacetime interval between two events unchanged. This requirement can be used to derive an explicit mathematical form for the transformation that must be applied to the laws of physics (compare with the application of Galilean transformations to classical laws) when shifting reference frames. These transformations are called the Lorentz transformations. Just like the Galilean transformations are the mathematical statement of the principle of Galilean relativity in classical mechanics, the Lorentz transformations are the mathematical form of Einstein's principle of relativity. Laws of physics must stay the same under Lorentz transformations. Maxwell's equations and Dirac's equation satisfy this property, and hence they are relativistically correct laws (but classically incorrect, since they don't transform correctly under Galilean transformations).
With the statement of the Minkowski metric, the common name for the distance formula given above, the theoretical foundation of special relativity is complete. The entire basis for special relativity can be summed up by the geometric statement "changes of reference frame correspond to rotations in the 4D Minkowski spacetime, which is defined to have the distance formula given above". The unique dynamical predictions of SR stem from this geometrical property of spacetime. Special relativity may be said to be the physics of Minkowski spacetime. In this case of spacetime, there are six independent rotations to be considered. Three of them are the standard rotations on a plane in two directions of space. The other three are rotations in a plane of both space and time: These rotations correspond to a change of velocity, and the Minkowski diagrams devised by him describe such rotations.
As has been mentioned before, one can replace distance formulas with rotation formulas. Instead of starting with the invariance of the Minkowski metric as the fundamental property of spacetime, one may state (as was done in classical physics with Galilean relativity) the mathematical form of the Lorentz transformations and require that physical laws be invariant under these transformations. This makes no reference to the geometry of spacetime, but will produce the same result. This was in fact the traditional approach to SR, used originally by Einstein himself. However, this approach is often considered to offer less insight and be more cumbersome than the more natural Minkowski formalism.
Reference frames and Lorentz transformations: relativity revisited
Changes in reference frame, represented by velocity transformations in classical mechanics, are represented by rotations in Minkowski space. These rotations are called Lorentz transformations. They are different from the Galilean transformations because of the unique form of the Minkowski metric. The Lorentz transformations are the relativistic equivalent of Galilean transformations. Laws of physics, in order to be relativistically correct, must stay the same under Lorentz transformations. The physical statement that they must be the same in all inertial reference frames remains unchanged, but the mathematical transformation between different reference frames changes. Newton's laws of motion are invariant under Galilean rather than Lorentz transformations, so they are immediately recognisable as non-relativistic laws and must be discarded in relativistic physics. The Schrödinger equation is also non-relativistic.
Maxwell's equations are written using vectors and at first glance appear to transform correctly under Galilean transformations. But on closer inspection, several questions are apparent that can not be satisfactorily resolved within classical mechanics (see History of special relativity). They are indeed invariant under Lorentz transformations and are relativistic, even though they were formulated before the discovery of special relativity. Classical electrodynamics can be said to be the first relativistic theory in physics. To make the relativistic character of equations apparent, they are written using four-component vector-like quantities called four-vectors. Four-vectors transform correctly under Lorentz transformations, so equations written using four-vectors are inherently relativistic. This is called the manifestly covariant form of equations. Four-vectors form a very important part of the formalism of special relativity.
Einstein's postulate: the constancy of the speed of light
Einstein's postulate that the speed of light is a constant comes out as a natural consequence of the Minkowski formulation.
- When an object is travelling at c in a certain reference frame, the spacetime interval is zero.
- The spacetime interval between the origin-event (0,0,0,0) and an event (x,y,z,t) is s² = x² + y² + z² - (ct)².
- The distance travelled by an object moving at velocity v for t seconds is d = vt, so x² + y² + z² = (vt)².
- Since the velocity v equals c we have x² + y² + z² = (ct)².
- Hence the spacetime interval between the events of departure and arrival is given by s² = (ct)² - (ct)² = 0.
- An object travelling at c in one reference frame is travelling at c in all reference frames.
- Let the object move with velocity v when observed from a different reference frame. A change in reference frame corresponds to a rotation in M. Since the spacetime interval must be conserved under rotation, the spacetime interval must be the same in all reference frames. In proposition 1 we showed it to be zero in one reference frame, hence it must be zero in all other reference frames. We get that in the new frame s² = (vt)² - (ct)² = 0,
- which implies v = c.
The paths of light rays have a zero spacetime interval, and hence all observers will obtain the same value for the speed of light. Therefore, when assuming that the universe has four dimensions that are related by Minkowski's formula, the speed of light appears as a constant, and does not need to be assumed (postulated) to be constant as in Einstein's original approach to special relativity.
Clock delays and rod contractions: more on Lorentz transformations
Another consequence of the invariance of the spacetime interval is that clocks will appear to go slower on objects that are moving relative to the observer. This is very similar to how the 2D projection of a line rotated into the third dimension appears to get shorter. Length is not conserved simply because we are ignoring one of the dimensions. Let us return to the example of John and Bill: two observers in relative motion, with Bill moving at a constant speed v as measured by John.
John observes the length of Bill's spacetime interval as s² = (vt)² - (ct)², where t is the time elapsed on John's clock and vt is the distance Bill covers in that time,
whereas Bill doesn't think he has traveled in space, so he writes s² = 0² - (cT)², where T is the time elapsed on Bill's own clock.
The spacetime interval, s², is invariant. It has the same value for all observers, no matter who measures it or how they are moving in a straight line. This means that Bill's spacetime interval equals John's observation of Bill's spacetime interval, so (vt)² - (ct)² = -(cT)², which gives T = t √(1 - v²/c²).
So, if John sees a clock that is at rest in Bill's frame record one second, John will find that his own clock measures between these same ticks an interval t, called coordinate time, which is greater than one second. It is said that clocks in motion slow down, relative to those of observers at rest. This is known as "relativistic time dilation of a moving clock". The time that is measured in the rest frame of the clock (in Bill's frame) is called the proper time of the clock.
In special relativity, therefore, changes in reference frame affect time also. Time is no longer absolute. There is no universally correct clock; time runs at different rates for different observers.
Similarly it can be shown that John will also observe measuring rods at rest on Bill's planet to be shorter in the direction of motion than his own measuring rods.[Note 3] This is a prediction known as "relativistic length contraction of a moving rod". If the length of a rod at rest on Bill's planet is X, then we call this quantity the proper length of the rod. The length x of that same rod, as measured on John's planet, is called coordinate length, and is given by x = X √(1 - v²/c²).
These two equations can be combined to obtain the general form of the Lorentz transformation in one spatial dimension: x' = γ (x - vt) and t' = γ (t - vx/c²),
where the Lorentz factor is given by γ = 1/√(1 - v²/c²).
The above formulas for clock delays and length contractions are special cases of the general transformation.
Alternatively, these equations for time dilation and length contraction (here obtained from the invariance of the spacetime interval), can be obtained directly from the Lorentz transformation by setting X = 0 for time dilation, meaning that the clock is at rest in Bill's frame, or by setting t = 0 for length contraction, meaning that John must measure the distances to the end points of the moving rod at the same time.
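Here is a minimal Python sketch of these results for an assumed relative speed of 0.8c: it computes the Lorentz factor and applies the time-dilation and length-contraction formulas above to one tick of Bill's clock and one of his metre rods.

```python
import math

C = 299792458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2) for a speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.8 * C              # assumed relative speed between John and Bill
gamma = lorentz_factor(v)

proper_time = 1.0        # seconds between two ticks of Bill's clock, in Bill's frame
proper_length = 1.0      # metres, a rod at rest in Bill's frame

print(f"gamma = {gamma:.4f}")                                    # about 1.6667
print(f"coordinate time   t = {gamma * proper_time:.4f} s")      # the moving clock runs slow
print(f"coordinate length x = {proper_length / gamma:.4f} m")    # the moving rod is contracted
```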
A consequence of the Lorentz transformations is the modified velocity-addition formula w = (u + v) / (1 + uv/c²), which replaces the simple Galilean sum u + v.
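A short Python sketch of this velocity-addition rule, with assumed speeds, shows that the combined speed stays below c even when the Galilean sum would not:

```python
C = 299792458.0  # speed of light, m/s

def add_velocities(u, v):
    """Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1.0 + u * v / C**2)

# Assumed example: a ship moving at 0.9c launches a probe at 0.9c relative to itself.
u, v = 0.9 * C, 0.9 * C

print(f"Galilean sum:     {(u + v) / C:.3f} c")               # 1.800 c -- not observed
print(f"Relativistic sum: {add_velocities(u, v) / C:.3f} c")  # about 0.994 c, still below c
```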
Simultaneity and clock desynchronisation
The last consequence of Minkowski's spacetime is that clocks will appear to be out of phase with each other along the length of a moving object. This means that if one observer sets up a line of clocks that are all synchronised so they all read the same time, then another observer who is moving along the line at high speed will see the clocks all reading different times. This means that observers who are moving relative to each other see different events as simultaneous. This effect is known as "Relativistic Phase" or the "Relativity of Simultaneity". Relativistic phase is often overlooked by students of special relativity, but if it is understood, then phenomena such as the twin paradox are easier to understand.
Observers have a set of simultaneous events around them that they regard as composing the present instant. The relativity of simultaneity results in observers who are moving relative to each other having different sets of events in their present instant.
The net effect of the four-dimensional universe is that observers who are in motion relative to you seem to have time coordinates that lean over in the direction of motion, and consider things to be simultaneous that are not simultaneous for you. Spatial lengths in the direction of travel are shortened, because they tip upwards and downwards, relative to the time axis in the direction of travel, akin to a skew or shear of three-dimensional space.
Great care is needed when interpreting spacetime diagrams. Diagrams present data in two dimensions, and cannot show faithfully how, for instance, a zero length spacetime interval appears.
General relativity: a peek forward
Unlike Newton's laws of motion, relativity is not based upon dynamical postulates. It does not assume anything about motion or forces. Rather, it deals with the fundamental nature of spacetime. It is concerned with describing the geometry of the backdrop on which all dynamical phenomena take place. In a sense therefore, it is a meta-theory, a theory that lays out a structure that all other theories must follow. In truth, special relativity is only a special case. It assumes that spacetime is flat. That is, it assumes that the structure of Minkowski space and the Minkowski metric tensor is constant throughout. In general relativity, Einstein showed that this is not true. The structure of spacetime is modified by the presence of matter. Specifically, the distance formula given above is no longer generally valid except in space free from mass. However, just like a curved surface can be considered flat in the infinitesimal limit of calculus, a curved spacetime can be considered flat at a small scale. This means that the Minkowski metric written in the differential form is generally valid.
One says that the Minkowski metric is valid locally, but it fails to give a measure of distance over extended distances. It is not valid globally. In fact, in general relativity the global metric itself becomes dependent on the mass distribution and varies through space. The central problem of general relativity is to solve the famous Einstein field equations for a given mass distribution and find the distance formula that applies in that particular case. Minkowski's spacetime formulation was the conceptual stepping stone to general relativity. His fundamentally new outlook allowed not only the development of general relativity, but also to some extent quantum field theories.
Mass–energy equivalence: sunlight and atom bombs
Einstein showed that mass is simply another form of energy. The energy equivalent of rest mass m is mc². This equivalence implies that mass should be interconvertible with other forms of energy. This is the basic principle behind atom bombs and production of energy in nuclear reactors and stars (like the Sun).
There is a common perception that relativistic physics is not needed for practical purposes or in everyday life. This is not true. Without relativistic effects, gold would look silvery, rather than yellow. Many technologies are critically dependent on relativistic physics:
- Cathode ray tubes,
- Particle accelerators,
- Global Positioning System (GPS) – although this really requires the full theory of general relativity
The postulates of special relativity
Einstein developed special relativity on the basis of two postulates:
- First postulate – Special principle of relativity – The laws of physics are the same in all inertial frames of reference. In other words, there are no privileged inertial frames of reference.
- Second postulate – Invariance of c – The speed of light in a vacuum is independent of the motion of the light source.
Special relativity can be derived from these postulates, as was done by Einstein in 1905. Einstein's postulates are still applicable in the modern theory but the origin of the postulates is more explicit. It was shown above how the existence of a universally constant velocity (the speed of light) is a consequence of modeling the universe as a particular four-dimensional space having certain specific properties. The principle of relativity is a result of Minkowski structure being preserved under Lorentz transformations, which are postulated to be the physical transformations of inertial reference frames.
See also
- The mass of objects and systems of objects has a complex interpretation in special relativity, see relativistic mass.
- "Minkowski also shared Poincaré's view of the Lorentz transformation as a rotation in a four-dimensional space with one imaginary coordinate, and his five four-vector expressions." (Walter 1999).
- There exists a more technical but mathematically convenient description of reference frames. A reference frame may be considered to be an identification of points in space at different times. That is, it is the identification of space points at different times as being the same point. This concept, particularly useful in making the transition to relativistic spacetime, is described in the language of affine space by VI Arnold in Mathematical Methods in Classical Mechanics, and in the language of fibre bundles by Roger Penrose in The Road to Reality.
- Originally Minkowski tried to make his formula look like Pythagoras's theorem by introducing the concept of imaginary time and writing −1 as i2. But Wilson, Gilbert, Borel and others proposed that this was unnecessary and introduced real time with the assumption that, when comparing coordinate systems, the change of spatial displacements with displacements in time can be negative. This assumption is expressed in differential geometry using a metric tensor that has a negative coefficient.
- It should also be made clear that the length contraction result only applies to rods aligned in the direction of motion. At right angles to the direction of motion, there is no contraction.
- "On the Electrodynamics of Moving Bodies". (fourmilab.ch web site): Translation from the German article: "Zur Elektrodynamik bewegter Körper", Annalen der Physik. 17:891-921. (June 30, 1905)
- Peter Gabriel Bergmann (1976). Introduction to the Theory of Relativity. Reprint of first edition of 1942 with a forward by A. Einstein. Courier Dover Publications. pp. xi. ISBN 0-486-63282-2.
- "Définition du mètre[[Category:Articles containing French language text]]". Résolution 1 de la 17e réunion de la CGPM (in French). Sèvres: Bureau International des Poids et Mesures. 1983. Retrieved 2008-10-03. "Le mètre est la longueur du trajet parcouru dans le vide par la lumière pendant une durée de 1/299 792 458 de seconde." Wikilink embedded in URL title (help) English translation: "Definition of the metre". Resolution 1 of the 17th meeting of the CGPM. Retrieved 2008-10-03.
- Tom Roberts and Siegmar Schleif (October 2007). "What is the experimental basis of Special Relativity?". Usenet Physics FAQ. Retrieved 2008-09-17.
- Minkowski, Hermann (1909), "Raum und Zeit", Physikalische Zeitschrift 10: 75–88
- Various English translations on Wikisource: Space and Time.
- Walter, S.(1999) The non-Euclidean style of Minkowskian relativity. The Symbolic Universe, J. Gray (ed.), Oxford University Press, 1999 http://www.univ-nancy2.fr/DepPhilo/walter/papers/nes.pdf
- Penrose, Roger (2004). The Road to Reality. Vintage. p. 406. ISBN 978-0-09-944068-0.
- Einstein, Albert (2001). Relativity : the special and general theory. Authorised translation by Robert W. Lawson (Reprinted ed.). London: Routledge. p. 152. ISBN 978-0-415-25538-7. "It appears therefore more natural to think of physical reality as a four dimensional existence, instead of, as hitherto, the evolution of a three dimensional existence."
- Feynman, Richard P. (1999). Six not-so-easy pieces : Einstein's relativity, symmetry and space-time. London: Penguin Books. p. xiv. ISBN 978-0-14-027667-1. "The idea that the history of the universe should be viewed, physically, as a four-dimensional spacetime, rather than as a three dimensional space evolving with time is indeed fundamental to modern physics."
- Weyl, Hermann (1952) . Space, time, matter. (4th ed.). New York: Dover Books. ISBN 978-0-486-60267-7.: "The adequate mathematical formulation of Einstein's discovery was first given by Minkowski: to him we are indebted for the idea of four dimensional world-geometry, on which we based our argument from the outset."
- Thorne, Kip; Blandford, Roger. "Chapter 1: Physics in Euclidean Space and Flat Spacetime: Geometric Viewpoint" (pdf). Ph 136: Applications of classical physics. Caltech. "Special relativity is the limit of general relativity in the complete absence of gravity; its arena is flat, 4-dimensional Minkowski spacetime."
- Einstein, Albert (2001). Relativity : the special and general theory. Authorised translation by Robert W. Lawson (Reprinted ed.). London: Routledge. p. 152. ISBN 978-0-415-25538-7.
- "Relativity in Chemistry". Math.ucr.edu. Retrieved 2009-04-05.
Special relativity for a general audience (no math knowledge required)
- Einstein Light An award-winning, non-technical introduction (film clips and demonstrations) supported by dozens of pages of further explanations and animations, at levels with or without mathematics.
- Einstein Online Introduction to relativity theory, from the Max Planck Institute for Gravitational Physics.
Special relativity explained (using simple or more advanced math)
- Wikibooks: Special Relativity
- Albert Einstein. Relativity: The Special and General Theory. New York: Henry Holt 1920. BARTLEBY.COM, 2000
- Usenet Physics FAQ
- Greg Egan's Foundations
- A Primer on Special Relativity – MathPages
- Caltech Relativity Tutorial A basic introduction to concepts of Special and General Relativity, requiring only a knowledge of basic geometry.
- Special Relativity in film clips and animations from the University of New South Wales. | http://en.wikipedia.org/wiki/Introduction_to_special_relativity | 13 |
52 | Additional information available at Smoky Mountain Guide
HISTORY of the Great Smoky Mountains National Park
The first native peoples arrived in the Smokies in about AD 1000. They were believed to have been a breakaway group of the Iroquois, later to be called the Cherokee, who had moved south from Iroquois lands in New England. The Cherokee Nation stretched from the Ohio River into South Carolina and consisted of seven clans. The Eastern Band of the Cherokee lived in the Smokies, the sacred ancestral home of the Cherokee Nation.
When the first white settlers reached the Great Smoky Mountains in the late 1700s they found themselves in the land of the Cherokee Indians. The tribe, one of the most culturally advanced on the continent, had permanent towns, cultivated croplands, and networks of trails leading to all parts of their territory.
In the late eighteenth century, Scotch-Irish, German, English, and other settlers arrived in significant numbers. The Cherokee were friendly at first but fought with settlers when provoked. They battled Carolina settlers in the 1760's but eventually withdrew to the Blue Ridge Mountains.
To settle with the newcomers, the Cherokee nation attempted to make treaties and to adapt to European customs. They adopted a written legal code in 1808 and instituted a Supreme Court two years later.
White settlers continued to occupy Cherokee land, and by 1819 the Cherokee were forced to cede a portion of their territory, which included the Great Smoky Mountains, to the United States. The discovery of gold in Northern Georgia in 1828 sounded the death knell for the Cherokee Nation.
In 1830, President Andrew Jackson signed the Removal Act, calling for the removal of all native people east of the Mississippi River to Indian Territory, now Oklahoma. The Cherokee appealed their case to the Supreme Court, and Chief Justice Marshall ruled in their favor. President Andrew Jackson, however, disregarded the Supreme Court decree in the one instance in American history when a U.S. president overtly ignored a Supreme Court decision.
The Cherokees had adopted the ways of the whites to the extent of developing a written language, printing their own newspaper, and utilizing the white man's agriculture and architecture. Nevertheless most of them were forcibly removed in the 1830s in a tragic episode known as the "Trail of Tears." About one-third of the Cherokee died en route of malnutrition and disease. Altogether, about 100,000 natives, including Cherokee, Seminole, Chickasaw and Choctaw, survived the march to Oklahoma, but thousands died along the way.
A handful of Cherokee disobeyed the government edict, however. Hiding out in the hills between Clingmans Dome and Mount Guyot, they managed to survive. The few who remained are the ancestors of the Cherokees living near the park today.
Earlier settlers had lived off the land by hunting the animals, utilizing the timber for buildings and fences, and growing food and pasturing animals in the clearings. As the decades passed, many areas that had once been forest became fields and pastures. People farmed, attended church, and maintained community ties in a typically rural fashion.
The agricultural pattern of life in the Great Smoky Mountains changed with the arrival of lumbering in the early 1900s. Within twenty years, the largely self-sufficient economy of the people here was almost replaced, by dependence on manufactured items, store-bought food and cash. At the same time, loggers were rapidly cutting the great primeval forests that remained on these mountains. Unless the course of events could be quickly changed, there would be little left of the region's special character.
The forest, at least the 20% that remained uncut within park boundaries, was saved. The people, more than 1,200 landowners, left the park. Behind them there remained over 70 structures, among them farm buildings, schools, mills and churches. The Great Smoky Mountains National Park now preserves the largest collection of historic log buildings in the East. Congress established the Great Smoky Mountains National Park in 1934. Land acquisition continued, and in 1940 President Franklin Delano Roosevelt officially dedicated the park. Several major highways lead to the Park.
The Cherokee Indians called this land Shaconage - "The place of the blue smoke." We know it today as the Great Smoky Mountains National Park, and it is one of America's great natural treasures.
Welcome to the Smoky Mountains
The Great Smoky Mountains are true "Mountain Magic." Few of life's experiences uplift the spirit more than these beautiful peaks and valleys. The Cherokees called this the "Land of Ten Thousand Smokes" for the dancing wisps of clouds that populate the peaks and valleys. To get oriented, you'll want to head for the Sugarlands Visitors Center, located 2 miles south of Gatlinburg on Route 441 just inside the National Park. It's filled with information racks, wildlife exhibits, wall maps, and Park personnel to answer questions.
The Four Seasons of the Great Smoky Mountains
The Great Smoky Mountains are a unique place in all seasons; the waterfalls, wildlife, grassy balds, sparkling rivers, and forests add an undeniable quality to life in this beautiful area. Mountains, valleys, lakes, and rivers offer a variety of outdoor recreation opportunities in every season. The Smoky Mountains region is blessed with four national parks, one national forest, and eight parks.
Spring in the Great Smoky Mountains and its friendly towns starts with wildflowers blooming through the last snow in late February to early March, as daffodils appear in great profusion and forsythia blooms. By early April, redbuds and dogwood are everywhere, and the Dogwood Festival and Gatlinburg's Spring Wildflower Pilgrimage are in full swing. Then Dollywood opens the last weekend in April with a grand parade through Pigeon Forge.
Trout season opens in the springtime, too, and of course it's picnic time again. For those who love shopping, Gatlinburg craft shops are open, and Pigeon Forge's many factory outlet malls are in full swing for the season (not that they are ever closed).
Other Great Smoky Mountain spring weekend events and activities in every town include Valentine's Day here in the wedding capital of the USA, plus Easter Vacation, the spring Rod Run and finally Memorial Day weekend on the threshold of summer.
Summer in the Smokies is the magic everyone dreams about. From backcountry camping, hiking, fishing, and water sports of every sort to the gala array of amusements at Dollywood, Ogles Water Park, and dozens of go-kart tracks, slick rides, bumper cars, miniature golf courses, arcades, and bungee jumps, it's every recreation wish come true.
Where else could you tour a deep underground cavern, feed a deer, lunch beside a waterfall, view the mountains from the world's largest mountain cable tramway, or ice skate in summer? Horseback riding is a special summer treat in the Smokies, with a dozen stables in the towns, in the country, and in the National Park offering rides for everyone.
Every summer day, entertainment in the Smoky Mountains includes top-name celebrity concerts, country and bluegrass music, dinner theaters, rock'n'roll cafes, revivals, kids' shows, and live comedy.
For more memories, there are dozens of souvenir stores, art galleries, gift shops and country craft studios. And fantastic food at great restaurants. The Smokies country breakfast places are especially well known, and there's every kind of country cooking you can think of.
Fall may be the Smokies' most famous and romantic season, from the warm days and frosty nights of Indian summer to the brilliant shows of red, yellow, orange, and gold that splash the mountainsides and valleys as the leaves turn the region into a bright visual wonderland. Pumpkins and cornshucks in the fields and wood smoke tinting the air make fall in the Great Smoky Mountains a special season.
Fall is craft time in the Smokies, with special festivals including the National Crafts Festival at Dollywood, Sevierville's Apple Festival of arts and foods, and the traditional Gatlinburg Craftsmen's Show in the Convention Center. You can watch artisans at shows or in their country studios making handcrafted works. There's broom making, candle dipping, intricate wood carving, quilting, pewter, and pottery, plus blacksmithing, weaving, sculpture, and breathtaking watercolors by artists of the countryside. There are toys of every description, leather goods, blankets, stained glass, glass blowing, lamps, and even painted saws. Plus, homemade molasses, apple butter, bread mixes, smoked country hams, jams 'n' jellies, fudge, taffy, and much more.
Winter, from November through February, is when the cities of the Great Smoky Mountains now join together in a four-month celebration called Winterfest. This joyous season includes Thanksgiving, Christmas, and New Year's, plus special January and February winter celebrations.
Winterfest sees Sevierville and Pigeon Forge decked out in thousands of decorative lights, while Gatlinburg shines from end to end with "Smoky Mountain Lights." The Dollywood Theme Park glows nightly with more than two million sparkling lights in an old-fashioned atmosphere, including special Christmas shows and events.
The Smoky Mountains are the place for skiing and ice skating at Gatlinburg's famous Ober Gatlinburg Alpine Resort center at the top of the spectacular aerial tramway. Both natural and man-made snow on groomed slopes make for great fun and sports, and the large indoor ice rink is in use all year round.
The Great Smoky Mountains have truly become a four-season wonderland for all.
The Diversity of plant and animal life in the Smokies
The Smokies' wide diversity of flowering plants and trees makes for a colorful spring, summer, and fall. The spring bloom starts in the valleys around April and works upward to the peaks through July, while the changing colors of the foliage start on the peaks as early as mid-August and work downward to the valleys into October.
The Great Smoky Mountains National Park is one of the largest protected land areas east of the Rocky Mountains. The Smokies have a biological diversity that is unparalleled in the United States. Thanks to the many climates found in the Smokies, the park is home to 1,500 species of vascular plants, 2,000 species of mushrooms, and 125 different species of trees, 10 percent of which are considered rare. There are over 4,000 non-flowering plant species in the park. With abundant sunshine and frequent rainfall, it is no surprise that about 200 species of showy wildflowers bloom in the Smokies. They begin in March and last until late November. One fourth of the park's 500,000 acres is undisturbed old-growth forest. More tree species live in this small area of Tennessee and North Carolina than in all of northern Europe. The park is also home to some 60 native mammals and over 200 species of birds. In addition, the park is a haven for 27 species of salamanders, giving it the most diverse salamander population in the world.
The Smoky Mountains are known for springtime flowers, including trillium, phacelia, violets, lady's slippers, jack-in-the-pulpits, and snowy orchids. Dogwoods bloom in late April; spring flowers from late March to mid-May; mountain laurel and flame azalea in May and June; Catawba rhododendron in mid-June; and rosebay rhododendron in June and July. In August you may see wild clematis, yellow fringed orchis, bee balm, cardinal flower, monkshood, and blue gentian.
Goldenrod, ironweed, and asters bloom in late September to early October. Many flowers grow along park roadsides. Other good locations to see them are along quiet walkways and on designated nature trails throughout the park.
The Smokies' various ecological communities are most often identified by forest types called life zones. Elevation, soil conditions, moisture or dryness, and exposure to wind and sun all play roles in determining the location of life zones. Botanists usually identify the forests by the kinds of trees that predominate.
Cove hardwood forest
Below 4,500 feet, deciduous trees cover sheltered slopes and extend into low-elevation coves and valleys. Trees of record or near-record size are common. Typical trees include yellow buckeye, basswood, yellow poplar, mountain silverbell, white ash, sugar maple, yellow birch, and black cherry. Rhododendrons and lady's slippers are common flowering plants. You can see cove hardwood forests on the Cove Hardwood Nature Trail at the Chimney Tops picnic area, Albright Grove near the Cosby entrance, and along the Ramsey Cascades and Porters Flat trails near the Greenbrier entrance.
Pine and Oak Forest
Oak and pine trees predominate to about 3,000 feet on slopes and ridges that are dry compared to other parts of the park. Other trees include hickories, yellow poplar, and flowering dogwood. This kind of forest also contains thickets of mountain laurel and rhododendrons. You will find pine and oak forests around Cades Cove and the Laurel Falls Nature Trail.
Hemlock forest
Eastern hemlock forests grow along streams and on slopes and ridges up to about 5,000 feet. Maple, birch, cherry, and yellow poplar trees are also found here. Rosebay rhododendrons proliferate along streams, while Catawba rhododendrons survive in heath balds and on exposed ridge tops. Hemlock forests are located along trails from Roaring Fork Motor Nature Trail toward Grotto Falls, and Newfound Gap Road to Alum Cave Bluffs.
Northern hardwood forest
Yellow birch and American beech dominate this forest, occurring mostly above 4,500 feet. Maple, buckeye, and cherry trees are also in the mix. Shrubs include Catawba and rosebay rhododendrons, hydrangea, thornless blackberry, and hobblebush. Many flowering plants grow here: creeping bluets, trilliums, long-spurred violets, and trout lily. You can see northern hardwood forests at Newfound Gap and along Clingmans Dome Road.
Spruce-fir forest
Above 4,500 feet, you'll find red spruce and the few remaining Fraser firs (90 percent succumbed to an insect infestation). The many coniferous trees may remind hikers of Maine or Quebec. Above 6,000 feet, yellow birch, pin cherry, American mountain ash, and mountain maple occasionally appear. Plants here include dingleberry, blackberries, blueberries, Carolina and Catawba rhododendrons, and ferns such as hay-scented, lady, and common polypody. Spruce-fir forests grow along the Appalachian Trail and the Spruce-Fir Nature Trail along Clingmans Dome Road.
A total of 65 mammals live in the Park. Some, such as the coyote and bobcat, are reclusive, while deer are very common and obvious. Besides deer, people most often see red and gray squirrels, chipmunks, woodchucks, raccoons, opossums, red and gray foxes, skunks, and bats.
Deer are common throughout the Park. An exotic, the wild European boar, causes widespread damage, and as with other intrusive exotic species, the Park seeks means to control the boar population. Mammals native to the area but no longer living here include bison, elk, gray wolves, and fishers. Reintroduction efforts brought back the red wolf and river otter.
Reptiles and Amphibians
The Park has been designated an International Biosphere Reserve and has an international reputation for its variety and number of salamanders. The Smokies' 27 species of salamanders make the park the salamander capital of the world. Notable species include Jordan's salamander, one subspecies of which is found only in the Smokies, and the hellbender, which can grow up to a whopping two and one-half feet long. Other amphibians, such as frogs and toads, also thrive in the Great Smokies.
Reptiles include snakes, turtles, and lizards. The only two venomous species are the timber rattlesnake and the northern copperhead. Bites from either are rarely fatal, and death from snakebite in the Smokies is extremely rare. Other common reptiles include the eastern box turtle, common snapping turtle, and southeastern five-lined skink.
A favorite resident of the Smokies is the black bear. Presently over 700 black bears live in the Smokies. These wild creatures feast on the many berries, nuts, insect larvae, and animal carrion in the mountains. The park is actually one of the few remaining areas in the eastern United States where black bears can live in wild, natural surroundings.
The Smoky Mountains are famous for their black bear population. Bear sightings usually begin in early March, but weather conditions can delay this. Newborns and their mothers remain denned until May. Cubs remain with their mothers for a year and a half.
Park officials warn visitors that tamed bears lose their natural fear of people, and that violent bears must be destroyed. Also, bears that become overly aggressive are moved into the backcountry, which is open to hunting. Tame bears make easy targets for hunters.
Although there is no one best place to see bears in the Park, Cades Cove and the Roaring Fork Motor Nature Trail are among the best spots to look. Bears are most active early in the morning and late in the evening.
In the unlikely event that you encounter an aggressive black bear, the best action is to make a lot of noise (a whistle works well) and slowly retreat. Bears are dangerous mainly when you come between a mother and her cubs or when dealing with a hungry, human-fed bear. Bears are excellent climbers, so climbing a tree is ineffective. Playing dead does not work either, since dead animals are part of the black bear's diet. However, few dangerous bear situations occur.
With so much to do, it's hard to choose, so use these pages as a guide. With over 800 miles of trails and more than 100 backcountry campsites, rafters, horseback riders, bird watchers, and even those just taking a stroll use the extensive trail system to view the wonder of the Smokies up close. Trails are available for hikers of all experience levels, and visitors can also mountain bike, climb rocks, and view wildlife and waterfalls. The rolling hills and fertile valleys offer ever-changing views in Tennessee's Smoky Mountain region, and the surrounding mountain ranges have peaks rising higher than 6,000 feet. The landscapes gleam with expansive lakes, cool creek beds, flowing mountain streams, and tumbling rivers. One of the Nation's most famous trails, the 2,100-mile Appalachian Trail, stretches 70 miles along the crest line of the Smokies. In addition to the trails, the park has 77 historic buildings and 151 cemeteries, which are preserved to remember the human history of the park.
Just a short drive away, the sounds of music, rides, and laughter mingle in the family resort towns of Gatlinburg, Pigeon Forge, Sevierville, and Townsend; Gatlinburg itself is surrounded on three sides by the Great Smoky Mountains National Park. The region is also home to the bustling cities of Knoxville, Maryville, and Oak Ridge.
Located along the mountainous State border between Tennessee and North Carolina, the 514,885 acres of Great Smoky Mountains National Park include 477,670 acres recommended for inclusion into the National Wilderness System.
Several major highways lead to the Park and provide access to its three main entrances in Tennessee and North Carolina.
To Park : the nearest major airport, Tennessee's McGhee-Tyson (TYS) in Alcoa, is 45 miles west of Gatlinburg. North Carolina's Asheville Airport is 60 miles east of the park. No train or bus service accesses the Park.
In Park : personal vehicle, limited trolley service from Gatlinburg.
OPERATING HOURS AND SEASONS
The park is open year-round. Visitor centers at Sugarlands and Oconaluftee are open all year, except Christmas Day. Cades Cove Visitor Center has limited winter hours.
CLIMATE AND RECOMMENDED CLOTHING
Elevations in the park range from 800 feet to 6,643 feet, and topography affects local weather. Temperatures are 10 to 20 degrees cooler on the mountaintops. Annual precipitation ranges from 65 inches in the lowlands to 88 inches in the high country. Spring often brings unpredictable weather, particularly at higher elevations. Summer is hot and humid, but more pleasant at higher elevations. Fall has warm days and cool nights and is the driest period; frosts begin in late September. Winter is generally moderate, but extreme conditions occur with increasing elevation.
Sugarlands Visitor Center , near Gatlinburg, TN, is open year-round and offers nature exhibits, a short film, guidebooks, maps, and park rangers who give lectures, lead guided strolls, and answer questions. Pick up your camping, hiking, or fishing permits here.
Oconaluftee Visitor Center , near Cherokee, NC, is also open year-round and its exhibits focus on mountain life of the late 1800s. Adjacent to the visitor center is the Mountain Farm Museum, a collection of historic farm buildings. Cades Cove Visitor Center, near Townsend, TN, (closed in winter), sits among preserved historic buildings representing isolated farming communities of the 1800s.
Trails and Roads - More than 800 miles of trails provide opportunities ranging from ten-minute saunters on quiet walkways to weeklong adventures deep in the forest. There are about 170 miles of paved roads and over 100 miles of gravel roads. The "backroads" offer a chance to escape traffic and enjoy the more remote areas of the park.
During the summer and fall, the park provides regularly scheduled ranger-led interpretive walks and talks, slide presentations, and campfire programs at campgrounds and visitor centers.
LeConte Lodge - when night descends, you can wrap the silence around you like a cloak. The only noises disturbing the stillness are the sounds of nature...nocturnal creatures going their way, the rumble of thunder, or breezes rustling through the treetops. Accessible only by foot or horseback, the lodge sits atop 6,593-foot Mt. LeConte, the Park's third highest peak.
Mt. LeConte Lodge is a rustic set of cabins and lodges located at the top of Mt. LeConte in the Great Smoky Mountains National Park in eastern Tennessee.
Mt. LeConte is probably the most impressive peak in the Smoky Mountain National Park. While Clingmans Dome and Mt. Guyot are higher, they are both parts of high ridges, while Mt. LeConte seems to tower over its surroundings. It is the most prominent peak when approaching the park from the Tennessee side on Rt. 441.
The Lodge is only accessible by hiking one of five trails. The shortest and steepest is 5 1/2 miles long. The reservation includes dinner, a bed, breakfast, and a great view. There is no running water or electricity in the cabins, and all food is brought up the mountain by llamas. As rough as this sounds, it is VERY difficult to get a reservation. It's a fantastic experience.
Although the summit of LeConte is tree-covered and has no views, impressive views are available at Cliff Tops, and Myrtle Point on the other side of the summit. The summit can be reached via numerous trails, including Alum Caves Trail (4.5 miles), Rainbow Falls Trail (6.5 miles), Bullhead Trail (6.5 miles), Trillium Gap Trail (7 miles) and the Boulevard (8 miles). The Rainbow Falls - Bullhead combination makes one of the park's best loops.
Reservations are required and can be made by calling 429-5704. The lodge is open mid-March to mid-November. A variety of lodging facilities are available in the outlying communities.
Frontcountry Campgrounds : The National Park Service maintains developed campgrounds at ten locations in the park. Great Smoky Mountain camping is primitive by design. With sites nestled in the woods and along the rivers, all campgrounds provide running water and flush toilets, but no hook-ups are available in the Park. Pets must be restrained at all times and are not permitted on hiking trails.
Ten campgrounds operate in the Park. Most are open from early spring through the first weekend in November. Cades Cove in Tennessee and Smokemont in North Carolina are open year-round. Sites at Cades Cove, Elkmont, and Smokemont may be reserved for the period May 15 to October 31 through the National Park Service at 800-365-CAMP or online at http://reservations.nps.gov. The other campgrounds are generally open from late March or early April to early November. Camping fees range from $10.00 to $15.00 per night. For more information on all types of camping call (865) 436-1200.
Backcountry Campsites: Backcountry camping is free but requires a permit. Whether your planned hike is long or short, it is always a good idea to wear dependable hiking boots, dress in multiple layers, and carry rain gear. Temperatures are cooler under the trees, especially higher up, and the higher elevations also see more precipitation than the lower ones. Bring along drinking water, as water from the streams is not safe to drink. Most campsites use self-registration at visitor centers or ranger stations, but shelters and rationed sites require reservations. Reservations can be made 30 days in advance by calling (865) 436-1231, 8:00 a.m.-6:00 p.m. daily.
There are no food facilities in the park. Numerous convenience stores and restaurants are located in outlying communities.
Horse rentals are available in season at five horse stables in the park in Tennessee and North Carolina.
Wheelchair accessible facilities, including restrooms, are located at the three major campgrounds, Cades Cove and Elkmont in Tennessee and Smokemont in North Carolina, visitor centers, and many picnic areas. Campsite reservations can be made for the period May 15 to October 31 by calling Destinet at 1(800) 365-CAMP. A five-foot wide paved and level accessibility trail, Sugarlands Valley Nature Trail, is a quarter mile south of Sugarlands Visitor Center. Specially designed communications media, including tactile and wayside exhibits, large print brochures and a cassette version are part of the trail.
Twelve self-guided nature trails ranging in length from 1/4 mile to a mile roundtrip were selected and developed by Park naturalists for their interesting natural history, beauty, and accessibility. The new All-Access Nature Trail, 0.5 miles south of Sugarlands on Newfound Gap Road, was especially designed for the handicapped, parents with young children and older couples.
RECOMMENDED ACTIVITIES/PARK USE
Camping, hiking, picnicking, sightseeing, fishing, auto touring, horseback riding, nature viewing, and photographic opportunities abound.
BASIC VISIT RECOMMENDATIONS
Plan your visit to the park by stopping at one of the visitor centers or writing ahead to obtain information. Also be sure to acquire safety information/tips pertaining to your planned activity, especially if you are not familiar with the area.
SPECIAL EVENTS AND PROGRAMS
The park holds a variety of annual events, including Old Timers' Day, storytelling, a quilt show, Women's Work, Mountain Life Festival, sorghum molasses and apple butter making, as well as living history demonstrations.
In winter during hazardous weather conditions, the two main roads will close. Do not leave valuables in your car. Adhere to Park rules and regulations.
Auto Touring - the Great Smoky Mountains National Park encompasses over 800 square miles and is one of the most pristine natural areas in the East. An auto tour of the park offers a variety of experiences, including panoramic views, tumbling mountain streams, weathered historic buildings, and mature hardwood forests stretching to the horizon.
The roads are designed for scenic driving. There are numerous turnouts and parking areas at viewpoints or historic sites. Traffic, winding roads, and the scenery conspire to make driving time more important than distance here in the park. Figure about twice the time to drive a given distance that you would for normal highways. Be on the alert for unexpected driving behavior from others--they may be under the influence of the scenery! Gasoline is not sold in the park, so check your gauge. Remember that winter storms may close the Newfound Gap and Little River Roads.
Begin at the Sugarlands Visitors Center on Route 441 at the Gatlinburg entrance to the Park. The most popular drive through the park is the 26-mile Newfound Gap Road, which crosses the park from the Gatlinburg entrance to the North Carolina side. It begins at Sugarlands at an elevation of 1,436 feet, then rises to more than 5,000 feet above sea level at Newfound Gap. The road then descends 3,000 feet to the Oconaluftee Visitors Center at the main entrance to the park from North Carolina.
The main road in the park is the Newfound Gap Road (U.S. 441) between Gatlinburg and Cherokee. It is the only road across the mountains. Along it and at the Newfound Gap Parking Area you will get some of the best scenic high mountain vistas in the park - and on the East Coast, for that matter.
In southern Appalachian vernacular, a "gap" is a low point along a ridge or mountain range. The old road over the Smoky Mountains crossed at Indian Gap located about 1-1/2 miles west of the current site. When the lower, easier crossing was discovered, it became known as the "Newfound Gap."
There are scenic overlooks along the way, roadside exhibits, and trailheads for hikers. At Newfound Gap you can see for miles. The Appalachian Trail crosses the road here. There is also the memorial where Franklin D. Roosevelt stood to dedicate the national park in 1940.
The most popular stop is Clingmans Dome , accessible by a 7-mile side road. At 6,643 feet above sea level, Clingmans Dome is the highest point in the Smokies. One can drive almost to the top and then hike the last half-mile to the overlook tower. Take it slow because the high altitude means the air is thinner, but the fantastic, panoramic view is worth the effort. Clingmans Dome Road is a dead-end spur off the Newfound Gap Road at the crest of the Smokies.
Clingmans Dome is a popular Park destination. Located along the state-line ridge, it is half in North Carolina and half in Tennessee. The peak is accessible after driving Clingmans Dome Road from Newfound Gap, and then walking a steep half-mile trail. A paved trail leads to a 54-foot observation tower. The Appalachian Trail crosses Clingmans Dome, marking the highest point along its 2,144-mile journey.
Vistas from Clingmans Dome are spectacular. On clear, pollution-free days, views expand over 100 miles and into seven states. However, air pollution limits average viewing distances to 22 miles. Despite this handicap, breathtaking scenes delight those ascending the tower. It is a great place for sunrises and sunsets.
Cloudy days, precipitation, and cold temperatures reveal the hostile environment atop Clingmans Dome. Proper preparation is essential for a good visit. Weather conditions atop Clingmans Dome change quickly, and snow can fall anytime between September and May. Get a current weather forecast before heading to the tower. The cool, wet conditions on Clingmans Dome's summit make it a coniferous rainforest. Unfortunately, pests, disease, and environmental degradation threaten the unique and fragile spruce-fir forest. Dead trunks litter the area, and dying trees struggle to survive another year. Berries thrive in the open areas, and a young forest will replace the dying trees.
Although Clingmans Dome is open year-round, the road leading to it is closed from December 1 through April 1, and whenever weather conditions require. People can hike and cross-country ski on the road during the winter.
Other motor trails exist; the most famous is the Cades Cove Loop, which is also a historical tour of the valley and those who settled it. Northeast of Gatlinburg off Rte. 321 is the Roaring Fork Motor Nature Trail Loop, which takes you up the western flank of Mt. LeConte. This paved, narrow, winding jewel of a road fords streams and cuts across a deep gorge.
Roaring Fork Nature Trail
The trip begins on Cherokee Orchard Road. In the 1920s and 30s, this area was a 796-acre commercial orchard and nursery with over 6,000 fruit trees. A short three miles later stands Noah "Bud" Ogle's Place, located at the end of Cherokee Orchard Road and the beginning of the one-way motor loop, which takes you on a 5-mile winding drive through forest and past pioneer structures.
The Roaring Fork Motor Nature Trail is an intimate journey through the Smoky Mountains' lush mountain wilderness. In places it reveals some of nature's secrets, while in others it weaves the story of the people who once lived here. Water is a constant companion on this journey. Cascades, rapids, and falls adorn the roadside, and the sound of rushing water is never far away. The air feels damp and tropical throughout the summer months, yet the icy water rarely reaches 60 degrees F.
The Roaring Fork Motor Nature Trail is open to vehicle traffic from early spring until December 1 each year.
Blue Ridge Parkway
If you want to sample the Blue Ridge Parkway and also enjoy some beautiful mountain scenery, try the Balsam Mountain Road, which leaves the parkway between Oconaluftee and Soco Gap. It winds for 14.5 kilometers (9 miles) back into the national park's Balsam Mountain Campground. Incredible azalea displays will dazzle you, when in season. If you are adventurous and want to try a mountain dirt road, continue past the campground to the Heintooga Picnic Area and the start of the Round Bottom Road (closed during winter). This is a 22.5-kilometer (14-mile), partially one-way, unpaved road that descends the mountain to the river valley below and joins the Big Cove Road in the Cherokee Indian Reservation. You come out right below Oconaluftee at the edge of the park.
Little River Road
Another view of the Smokies awaits you along the Little River Road leading from Sugarlands to Cades Cove. The road lies on the old logging railroad bed for a distance along the Little River. (The curves suggest these were not fast trains!) Spur roads lead off to Elkmont and Tremont deeper in the park, and to Townsend and Wear Cove, towns outside the park. Little River Road becomes the Laurel Creek Road and takes you into Cades Cove, where you can take the one-way 18-kilometer (11-mile) loop drive and observe the historic mountain setting of early settlers. If you are returning to Gatlinburg or Pigeon Forge from Cades Cove, try exiting the park toward Townsend and driving the beautiful Wear Cove Road back to U.S. 441 at the north end of Pigeon Forge.
Perhaps the most bucolic scenes in the Smokies are to be seen from the Foothills Parkway between Interstate 40 and Route 32 near Cosby, around the northeast tip of the park. Here you look out across beautiful farmland with the whole mass of the Smokies rising as its backdrop.
The Foothills Parkway skirts the Great Smoky Mountains National Park's northern side and is administered by the National Park Service. Only three sections are currently open to vehicle traffic. Due to funding and legislative difficulties, the ultimate status of the parkway remains uncertain, but despite political disappointments its open sections provide beautiful views of the Park and surrounding country. Completed sections are open year-round, weather permitting, while uncompleted sections are open to pedestrians, bicyclists, and equestrians.
West - Running southwest from Walland to Chilhowee, this 20-mile section is the Foothills Parkway's longest segment. It provides beautiful vistas of the northwestern Smokies, including Thunderhead Mountain, the highest peak in the Park's western half. Many of its south-facing overlooks peer over Happy Valley, into the Smokies, and beyond. Its north-facing views look out over Maryville, Knoxville, and the Great Valley.
Halfway along the segment, a trail leads to the Look Rock Tower, a third of a mile from the road. The trail makes a moderate climb. The tower provides a 360-degree panorama and a platform for scientific research such as air quality monitoring. Sunsets from the tower are often spectacular.
East - Foothills Parkway east is a six-mile road leading from Cosby, TN to Interstate 40 . Its eastern terminus is TN exit 443. Built on Green Mountain, the road provides wonderful views of Cosby Valley to the south and the Newport area to the north.
Other interesting drives in the park are Rich Mountain Road, Parsons Branch Road (both closed in winter), and the Roaring Fork Motor Nature Trail, which forms an 11-mile loop along with Cherokee Orchard Road. The one-way Roaring Fork road runs for 8 miles and is not suitable for bicycles, RVs, trailers, or buses. Cherokee Orchard Road is a two-way road without these restrictions and leads to the Rainbow Falls parking area. Airport Road in Gatlinburg turns into the Park's Cherokee Orchard Road.
The Spur - Technically part of the Foothills Parkway , The Spur is the only direct route from Gatlinburg to Pigeon Forge. A scenic four-lane highway, it follows the West Prong of the Little Pigeon River.
Many Park roads have only a gravel surface. Two-wheel drive vehicles can drive these roads. Some provide access to less-visited park areas, while others are scenic drives in their own right. Below are descriptions of the three main gravel roads; all of them are one way.
Cataloochee Valley nestles among the most rugged peaks in the southeastern United States. Surrounded by 6,000-foot mountains, this isolated valley was the largest and most prosperous settlement in what is now the Park. Once known for its farms and orchards, today's Cataloochee is one of the Smokies' most picturesque areas. Few people visit this beautiful valley, but spectacular rewards await those who do.
Along with preserved houses, churches, and farm buildings, Cataloochee offers extraordinary views of the surrounding mountains. It is also known for its dense wildlife populations.
Cataloochee is open year-round. Access is via a long and winding gravel road from Hartford, TN, or by Cove Creek Road (mostly gravel) near Dellwood, North Carolina. A paved road runs through Cataloochee Valley. RVs up to 32 feet can stay at the campground.
Heintooga-Roundbottom Road
Heintooga-Roundbottom Road is a 15-mile road leading from Balsam Mountain Road to Big Cove Road. It takes 1 hour to drive. The only access to the area is along the Blue Ridge Parkway. Starting from a mile high, this road descends through the Raven Fork drainage basin. A few small vistas open along exposed ledges. The road travels through lush second growth forest and along cascading streams. Heintooga-Roundbottom Road is an opportunity to experience the Great Smokies solitude and wilderness. Following Raven Fork's playful waters, the road leads into Cherokee, NC along Big Cove Road.
Rich Mountain Road
Rich Mountain Road heads north from Cades Cove over Rich Mountain to Tuckalechee Cove and Townsend, TN. The 8-mile road provides beautiful views of Cades Cove. Many prize-winning photographs come from here. Situated on a dry ridge, an oak-dominated forest lines the roadside. Once outside the Park, the road becomes steep and winding.
Parsons Branch Road
Parsons Branch Road leads from Cades Cove southwest to US Route 129 near Deals Gap. Virgin oak forest lines this historic route. At present the road is open only to hikers and equestrians. Floods washed away stream crossings in spring 1994. Recent funding will allow the necessary repairs to begin, and the Park plans to open the road in early 1998.
Cades Cove is a historic district within the Great Smoky Mountains National Park. Located near Townsend, Tennessee, this beautiful area receives 2 million visitors each year. It is the most crowded Park destination. Cades Cove is a look into the past. Man became part of Cades Cove beyond reach of human memory. Indians hunted here for uncounted centuries, but hardly any sign of them remains. White settlers followed the Indians to the Cove, and their sign is everywhere, buildings, roads, apple trees, fences, daffodils and footpaths. Cades Cove is an open-air museum that preserves some of the material culture of those who last lived there. Preserved homes, churches, and a working mill highlight the 11-mile loop road. Wildlife abounds around the cove and sightings of deer, foxes, wild turkeys, coyotes, woodchucks, raccoons, bears, and red wolves occur. Beautiful mountain vistas climb from the valley floor to the sky. Situated in a limestone window, the result of earthquake activity and erosion, Cades Cove provides fertile habitat. Settlers first came to the cove in 1819, and farmed this land until the Park formed in the 1930s.
Settlers first entered the Cove legally after an Indian treaty transferred the land to the State of Tennessee in 1819. Year after year they funneled through the gaps, driven by whatever haunted them behind or drew them in front, until they spilled over the floor and up the slopes. Most of them traced their way down the migration route from Virginia into east Tennessee (now more or less Interstate 81). Tuckaleechee (modern Townsend) was the last point of supply before the leap into Cades Cove. A few years' later pioneers moved directly over the mountains from North Carolina. They all came equipped with personal belongings, and the tools and skills of an Old World culture, enriched with what they learned from the Indians.
The people of the Cove did not enter, settle and become shut off from the rest of humanity. They were not discovered by Park developers, still living a pioneer lifestyle. From the beginning they kept up through the newspapers, regular mail service, circuit-riding preachers, and buying and selling trips to Tuckaleechee, Maryville and Knoxville. They went to wars and war came to them. They attended church and school, and college if financially able. A resident physician was here most of the time from the 1830s on.
Although remote and arduous, life here was little different from rural life anywhere in eastern America in the nineteenth century.
Household and farm labor was done according to one's age and sex. Men produced shelter, food, fuel and raw materials for clothing. Women cooked, kept house and processed things the husband produced. Children and the elderly took care of miscellaneous loose ends when and where they could. In this way the home was an almost self-contained economic unit. The community was an important aspect of life to the settlers in a rural society. It was an extension of the household by marriage, custom, and economic necessity . . . a partnership of households in association with each other. The community was democratic in a general sense, there were few extremes of wealth and poverty; there was widespread participation in community affairs; and, no clearly defined social classes locked people in or out. There were common celebrations like family gatherings, "workings," and funerals. Politics was tied to state, regional and national affairs. Law enforcement was personal in many ways. Justices of the Peace applied common sense, based on common law.
In 1820 this was frontier country, newly acquired by the State of Tennessee from the Cherokee Indians. Families did not simply wander in and say to themselves, "My, how pretty, let's settle here." The land was owned by speculators who bought it from the state. Settlers bought it from the speculators, whose intent was to make money. In this way Cades Cove became a typical cumulative community . . . a miscellaneous collection of people who were not oriented toward a common purpose, as in the early religious settlements of New England. It grew without a fixed plan, and families chose lands that were available and affordable whenever they arrived. Most of the people came from established communities in upper east Tennessee, southwestern Virginia, and western North Carolina. Very few were "fresh off the boat."
By 1850 the population peaked at 685. With the soil growing tired and new states opening in the West, many families moved out in search of more fertile frontiers. By 1860 only 269 people remained. Slowly, human numbers rose again to about 500 just before the Park was established in the late 1920s.
Beginning a new life here was basically the same for everyone. The East End of the Cove was settled first, being higher and drier than the swampy lower end. Huge trees were cleared by girdling them with an axe. The first crops were planted among the soon-dead timber. After a few years the standing trees were cut down, rolled into piles and burned. Orchards and permanent fields followed quickly on the "new ground." Common sense told farmers to reserve the flat land for corn, wheat, oats and rye. Their homes circled the central basin, and pastures and wood lots hung on the slopes. Apples, peaches, beans, peas and potatoes were supplemented with wild greens and berries. Meat was varied and plentiful. Cattle grazed in summer on the balds (grassy meadows "bald" of trees) high above the Cove, while deer, bear, wild turkey and domestic hogs ranged the woods.
Cades Cove contains more pioneer structures than any other location in the park. Before the park was established, the area was extensively cultivated. Today, farming is still permitted there to help maintain the historical scene. Pastures, cattle, and hay combine with old buildings and open vistas to give the cove a pleasing rural aspect.
The homes of John Oliver, Carter Shields, Henry Whitehead, and Dan Lawson dot the valley floor and represent a variety of building techniques. The Whitehead home is made from logs sawed square at a nearby mill. Dan Lawson's home features an unusual chimney made of brick fired on the spot. Other buildings include a smithy, smokehouse, corncribs, and a cantilevered barn.
Three of five original churches remain in Cades Cove today. The oldest among them is the Primitive Baptist Church, built in 1827. These churches and the surrounding cemeteries provide fascinating insight into the lives and times of the 19th century. The Baptist Church was forced to close during the height of the Civil War because of its Union sympathies.
John P. Cable's 19th-century farm was once a self-contained world; today the farm illustrates the daily lives of early settlers. The farm's centerpiece is the 1868 mill, which still grinds corn raised in the Cove.
Exhibits explain the history of many structures, self-guiding trails interpret the natural scene, and park personnel demonstrate pioneer activities at the Cable Mill on a seasonal basis. Deer and turkey are found in the cove and woodchucks (groundhogs) are often seen near the road.
Cades Cove's main auto touring route is the 11-mile loop road tracing its fringe. The loop takes from 1 to 1.5 hours to drive. Traffic is often bumper to bumper, especially in summer months and October. Throughout the summer, the road is closed to motorized vehicles on Wednesdays and Saturdays until 10am. Bicycle rental is available. Other opportunities to explore the area include walking, hiking, hayrides, horseback riding, and fishing. Rich Mountain Road, a gravel road suitable for 2-wheel drive vehicles, offers a unique perspective of the cove - and a way to escape the traffic.
If you would like to tour Cades Cove at a more leisurely pace, bicycles may be rented (April through September) at the Cades Cove Bike Shop. On Saturdays and Wednesdays, starting in May and ending in September, the loop road is closed to autos and open to bicycles only from sunrise until 10:00 a.m. For more information call (865) 448-9034.
Horses are available for rent at the Cades Cove Riding Stables. The horseback tour is a guided ride along the cool, wooded trails of the mountains, over small streams and up to vistas of trees and wildflowers. The stables are open seasonally from the end of March to the first of November. For more information call (865) 448-6286.
A hayride is a unique and fun way to see Cades Cove from April through October. Hayrides last one and a half hours, leave from the Cades Cove Riding Stables, and are available daily. Groups of fifteen or more may reserve a wagon for day trips as early as 10:00 a.m. For more information call (865) 448-6286.
Picnicking - for those who enjoy the occasional meal outdoors, Cades Cove is equipped with a picnic area near the campground. Grills and tables are provided, or you may pack a lunch and eat along a trail in the area.
Hiking is a major attraction in the Great Smoky Mountains, and there are more than 900 miles of trails, from quiet walkways to strenuous climbs by thundering streams and waterfalls.
The hiker should be prepared for a wide range of temperatures and conditions. The temperature on some hikes can be 10 degrees cooler than when you left the lower elevation. Combine this with the fact that the Smokies are also the wettest place in the South, and you have the possibility for great discomfort in the event of a sudden storm. The higher elevations in the park can receive upwards of 90 inches of precipitation a year.
Don't judge the complete day by the morning sky. In summer the days usually start out clear, but as the day heats up, clouds can build up, resulting in a heavy shower. Winter is a great time to be in the Smokies, but also represents the most challenging time as well. Frontal systems sweep through the region, with alternately cloudy and sunny days, though cloudy days are most frequent in winter. When traveling in the Smokies, it's a good idea to carry clothes for all weather conditions.
Hikers Should Be Prepared For All Conditions
Footwear should be a major concern. Though tennis shoes may be generally appropriate for some day-hikes, boots should be worn on the uneven trails in the Park. They protect the ankles from sprains and the feet from cuts and abrasions.
Stay on the designated trail. Most hikers get lost when they leave the path. If you get temporarily lost, try to retrace your steps until you cross the trail again. Then it's just a matter of guessing which way you were headed when you left the trail. You will either continue the way you were headed or go back to your starting point--either way, no harm is done.
Always bring rain gear and a wool sweater. They don't weigh much and might make the difference between being miserable or not in the event it rains. As mentioned earlier, the Smokies get approximately 90 inches of rain a year. This is good; it's what makes the Smokies such a wonderful place to be. Don't start a hike if thunderstorms threaten--some of the most devastating damage ever to the Park has been from great storms in years past.
Cross streams carefully. Getting wet, even in summer, could lead to hypothermia, which leads ultimately to disorientation, poor decision making and, in extreme circumstances, death. Having said that, don't let a fear of hypothermia, getting lost, or bears prevent you from the enjoyment to be had by trekking the trails of the Park.
There is no record of anyone ever being killed by a bear in the Smokies. When we questioned a Park Ranger about how to react to meeting a bear on the trail, he smilingly told us the most likely sighting of a bear will be its tail disappearing over a ridge. Most "incidents" occur when an ignorant visitor feeds or otherwise harasses a bear.
To avoid crowds, hike during the week; avoid holidays; go during the "off" season. Also, go in the morning before most folks are through eating breakfast; this is a good time to see wildlife and morning light is great for photography! You can also avoid crowds by using the outlying trailheads such as those found at the Cosby and Wears Valley entrances. I'm embarrassed to say we didn't know these existed for our first 18 visits to the Smokies. But to our delight, we found new vistas, trails, and landscapes to discover for the first time.
Plan Your Hiking Trip with Care
With a little care and planning, your hiking trip to the Smokies can be much more rewarding and repay you with more great memories. You can enjoy not only the visual splendor of the Park, you can view it without counting out-of-state license plates, and you can get more fit in the bargain.
Waterfalls of the Smokies
Waterfalls adorn nearly every stream in the Smokies. Only one waterfall, Meigs Falls, is visible from the road; it is 12.9 miles west of the Sugarlands Visitor Center, near the Townsend Wye. All others require hiking, and the hikes range from easy to strenuous. Below is a listing of the Smokies' best-known falls:
Laurel Falls is one of the most popular in the park, because the falls is spectacular! Laurel Falls is the easiest waterfall hike on the Tennessee side of the park. It is 2.5 miles roundtrip, and follows a paved trail. The trail cuts through the middle of a series of cascades. Laurel Falls is 60 feet high. Laurel Falls passes through a pine-oak forest. The mountain laurel, which is abundant along this trail, blooms in mid May. The trail crosses through Laurel Branch at the base of the upper cascade of the falls. The fall is divided in the middle by the trail and a pleasant pool.
The trailhead is located at Fighting Creek Gap on Little River Road, between Sugarlands Visitors Center and Elkmont Campground.
Grotto Falls is off the Roaring Fork Motor Nature Trail. It is a 2.4-mile roundtrip through a hemlock-dominated forest. This easy trail crosses three small streams and leads behind the falls. The cool, moist environment at the falls is perfect for salamanders and summer hikers. Grotto Falls is distinctive as the only waterfall in the park that you can walk behind. The walk is easy, the way broad, and the falls peaceful and refreshing. Though often crowded, Grotto Falls is an excellent stopover en route to Mount LeConte. Large boulders and fallen trees offer plenty of seating. The falls itself is fifteen, maybe twenty feet high, usually with a good amount of water coming over. The trail takes you right behind the falls! This is an exciting initiation into the 'wilderness experience' for many. Wet rocks are extremely slippery, so do be very careful. The scenery here is hardwood bottomland, somewhat rocky and rooty, but quite negotiable for even the out-of-shape hiker. Trees are often fair to large in size, especially the hemlocks.
Beyond the falls, the trail becomes notably steeper as you wind your way up two more miles to Trillium Gap amid the folded shoulders of Mount LeConte. The scenery changes slightly here, as the trees are sparser and substantially younger. In season (around late April), wildflowers here are unusually profuse and lovely. Occasional panoramic glimpses through the trees will also entice you on. Aside from the flowers, Trillium Gap is hardly a scenic wonder, just a wooded saddle between Mount LeConte and Brushy Mountain, more of a crossroads than a destination: Porter's Flat is to the east, Brushy Mountain to the north, LeConte to the south and Grotto Falls to the west. I insist that any trip past Trillium Gap be accompanied by a one-mile jaunt (round trip) up Brushy Mountain. The signs at the gap point the way north on a rocky trail through blueberry, heath and rhododendron bramble. This view is one of the best on this side of the Smokies, especially if LeConte is clouded over. (At only 4900 feet, it is far less often afflicted by fog.)
The flora changes little for about a mile past the gap. However, swags and northern slopes are notably more coniferous. Beyond this, the trees thin out further and the upper elevations offer the familiar Fraser fir and heath scrub terrain (see Mount LeConte). It is about three and a half miles from the Gap to the lodge atop LeConte. At a mile or so past the gap, you will look up and behold the looming silhouette of LeConte, 1000 feet above you and 2000 feet dead ahead. The last mile and a half of this trail are quite long. Not too scenic except when flowering sometime in late May. Press onward! Eventually you will see the wooded apex of High Top just ahead of you.
Indian Creek Falls is a 1.5-mile roundtrip hike out of the Deep Creek Area. Sliding down 35 feet of sloping rock strata, the water livens and cools the air. This is an old road trail paralleling Deep Creek, providing an easy grade and a good walking surface. There are pines, oaks, rhododendron, and hemlock, with wildflowers in the wetter places. Along the route is Toms Branch Falls , another beautiful fall.
Henwallow Falls is near Cosby Campground, south of Cosby, Tennessee. It is a 4.4-mile roundtrip along a moderate trail. This 45-foot fall receives fewer visitors than the other area falls, which makes for a pleasant walk through a hemlock, poplar, and rhododendron forest to the top of the falls. A side trail leads to the base of the falls in a series of switchbacks. Hen Wallow Creek, only two feet wide at the top of the falls, fans out to a width of twenty feet at the base. The trail is easy except for a few hundred feet just before the falls.
Abrams Falls is a 5-mile roundtrip hike. The trail begins in the back of Cades Cove loop road and is a moderate hike. Abrams Falls has the largest water volume of any park fall, and is among the most photogenic. The trail to the falls changes from pine-oak on the ridges to hemlock-rhododendron forest along Abrams Creek. Due to the undertow, swimming in the pool at the base of the falls is very dangerous.
Ramsey Cascades is a strenuous 8-mile roundtrip hike to the highest waterfall in the Park. Eastward lie the Ramsey Cascades and Greenbrier Pinnacle trails. A leisurely walk along a well-graded roadbed leads to an old parking loop at about one mile, where the trail forks to the two destinations. Ramsey Cascades is a famous trail of eight miles leading to a spectacular waterfall. Formerly, an invigorating rock climb to the top of the falls yielded an extraordinary view westward. However, deaths and injuries on the cliffs have forced the park service to disassemble the bridge over the river leading to the trail to the top. Losing this gorgeous overlook is unfortunate; however, the safety of park visitors is a paramount issue. Several huge tulip poplars hug the path about midway. The hike is long and tiring, rarely unpopulated, but well worth the effort. In winter, the waterfall is even more beautiful and less crowded. But the trail is sometimes vague under snow, so don't make a first trip of it in the winter months. Local ground squirrels will mooch unceasingly in all months.
Rainbow Falls, at 80 feet, is the highest single plunge water takes in the park. The hike is rated between moderate and strenuous; it makes a good challenge and reveals a beautiful fall. Imagine hiking 2.7 miles and hearing the gentle sounds of a flowing mountain stream throughout your walk. LeConte Creek is always within hearing distance of the trail to Rainbow Falls. It runs down from Mt. LeConte and into the Little Pigeon River in Gatlinburg. In the wetter months, the waterfall is a beautiful rush of water cascading over the rock face. There is a log at the base of the waterfall that makes a wonderful spot to sit and take in the view.
Mingo Falls can be reached by following the Pigeon Creek trail out of Mingo Falls Campground (on the Cherokee Reservation, south of the park). A longer side trail branching off at the halfway point will take you to the top of the falls. Mingo Falls has a spectacular drop of about 120 feet.
Trails of the Smokies
Mount LeConte is the terminating pinnacle of a five-mile spur off the Great Eastern Divide, the ridge separating Tennessee from North Carolina. Formerly, the mile-high wall of the Eastern Divide literally separated the 'civilized' east from Indian Territory to the west. LeConte is distinctive in its three prominent peaks, all above 6,000 feet, running almost due east to west. It overshadows the small tourist town of Gatlinburg, nestled six miles northwest of and one mile beneath its crown.
LeConte's three peaks are Cliff Tops, High Top and Myrtle Point . Cliff Tops is the westward-facing peak, only a quarter-mile walk from the camp. This is where the sunset is viewed. As can be gathered from the name, the rocks here cap a cliff several hundred feet high. High Top is the center and tallest point on Mount LeConte at 6,593 feet above sea level (only 50 feet lower than Clingmans Dome).
Myrtle Point is an eastward-facing heath bald and rock outcropping, 0.8 miles from the cabins and a fun walk. The finest vantage points from which to view the entire LeConte range are Gatlinburg, Sevierville, and, even more so, the Chimney Tops to the south; Brushy Mountain, a quarter-mile jaunt off the Trillium Gap trail, is, in my opinion, the most impressive place from which to view the mountain.
Also available on top of Mount LeConte is a backcountry shelter. It is basically a lean-to with a chicken-wire fence over the entrance to discourage bears. Reservations are required, and it is not easy to obtain them in the crowded seasons - late spring and fall. Crude (and not too comfortable) bedding is found inside, although I personally prefer sleeping on the ground. No facilities are offered except a fire ring. Drinking water and pit toilets may be found at the lodge, less than a quarter of a mile away.
Alum Cave Trail is five and a half miles with a vertical climb of around 2600 feet. Strangely, most claim the easiest route is via Newfound Gap, an eight-mile trip, but with a vertical climb of only 1200 feet.
The Alum Cave trail is well known as the shortest route to Mount LeConte (about 5.5 miles each way). The majority of people who take this beautiful hike, however, do not venture all the way to LeConte. And the vast majority of beauty along this trail is found by the time you reach the "cave." The first or second week of June is just incredible for purple rhododendron along this trail. Mountain Laurel is also waxing near this time. The very wide trail follows a quaint creek bed for the first mile or so to Arch Rock - an interesting hole through a shaly spur. The trail then heads up the base of Mount LeConte. The trail is usually crowded, but wide and enjoyable.
Clingmans Dome - the observation tower is a concrete edifice standing atop the highest point in Tennessee, 6,643 feet above sea level. The tower actually straddles the Tennessee/North Carolina state line; however, North Carolina boasts a taller peak, Mount Mitchell, 70 miles to the northeast. Surrounding firs encroach upon the panoramic view from the top. Photographs at all compass points detail the peaks and other pertinent landmarks visible from the top. Sunsets and sunrises are spectacular from here on clear days.
The 'trail' is wide and paved, much more like a road. Benches are provided at several locations along the way. The climb is 400 feet in a half-mile. Most books list this as strenuous; for hikers it is easy to moderate. Although this trip is a quick, busy view, it should not be passed by. It is the highest point in the park, the view is excellent, and it is the classic Smoky Mountain photo stop.
The Chimney Tops - the first half mile is thick rosebay (white) rhododendron creek bottom. In its season, for a week or two in July, these so fill the hollow that it looks to be freshly covered with a late snow. A smattering of mountain laurel and catawba (purple) rhododendron are pretty in early (maybe the second week of) June, but THE time of the year is early to mid July.
Leaving the modestly hilly lowland rhododendron forest of the first mile, the trail splits left and right. A left turn ascends the Indian Gap trail. Turn right to tackle the chimneys (this way is not usually marked). Shortly ahead, the fourth wooden bridge crosses the creek for the last time. Be prepared: just ahead is the 'steep part.' Do not be deceived by any trail books speaking to the contrary; this is a strenuous half-mile gaining 600 feet over loose rocks. In chilly weather, the earthen embankments on the left often display curious ice formations extruded from frozen ground. A sharp bend to the left followed by a switchback right mark the end of the worst of it. There is still some uphill, but if you have made it this far, don't dare think of turning back now. Ups and downs wind for no more than a half-mile to the base of the Chimneys. Several nice views on the right make good resting spots, but no need to use film now, as the view from ahead will be similar, but far better.
As you approach the ridge, be very careful of the web of tree roots, which is manageable if your ankles and balance are good. The trail dead-ends into what at first seems to be a cliff, but upon further inspection, several people are probably climbing up it or have already climbed it.
Once to the top the view is unforgettable. Walk all the way to the end of the flat section on top for the best view. The gaping holes in the rocks are the bona fide Chimneys. Various myths circulate as to why they are so called. Mount LeConte stretches before you with the Boulevard trail running from there to the right. To the hard left is Sugarland Mountain. The loop appears tiny but is plainly visible in the forest far below. The narrow rocky catwalk ahead is to the lower Chimneys. This is a more challenging climb than the way up and offers no better view, but is not crowded and is great fun for the adventurous.
The Boulevard Trail is an eight-mile walk to Mount LeConte. Initially, follow the Appalachian Trail from Newfound Gap northeastward. The beautiful trail is wide and busy for the first mile or so. Rocks and tree roots are large and common, so be careful if you are not accustomed to walking on such things. Frequent views through the trees are spectacular year round.
Parking spaces are abundant at the gap. The trail climbs 1000 feet in 2.5 miles from the gap to the branch-off at the Boulevard Trail. Ups and downs are common, but there are many more ups than downs. Once upon the Boulevard, you will descend 400 feet down the west flank of Mount Kephart. The terrain does not vary greatly. Upper elevation conifers are mixed with various hardwoods, graced by a carpet of grass and wild flowers. The trail is well maintained and easy to follow all the way. From this point on the trail is a veritable roller coaster (though a slow one) of ups and downs along the Boulevard ridge. About three miles from the turnoff from the A.T., you follow the crest of a steep ridge, falling away on both sides, and the views are gorgeous. Even those with a fear of heights should not experience trouble, however, as the trail is plenty wide. Ahead looms massive Mount LeConte, appearing just minutes away. There yet remains scaling the oft misty and wooded peak, two miles over and 750 feet up.
The Boulevard is often claimed to be the easiest of ways to LeConte. This is by no means the case. The Boulevard is a very long, hard haul even without a pack. It is the most difficult route, except for perhaps Porters Flat via Trillium Gap, which is longer and gains far more elevation. The combination of many uphills and downhills is more strenuous than even continual uphill.
However, the Boulevard is a "classic must do" kind of trail, and very beautiful. Just make sure you devote the entire day to enjoy it, lest weariness spoil the last few miles.
Brushy Mountain Trail to Trillium Gap is a must. If you are ever at Trillium Gap, DO NOT MISS BRUSHY MOUNTAIN. This bald-like peak of just over 4,900 feet is only a shy mile and a moderate hike up to one of the best panoramas in the park. Clouds rarely obscure the view as they do at LeConte, and the vistas are in all directions. Flora is scrubby brush and rhododendron. The trail is well maintained.
Visit Andrews Bald in the flame azalea season of July and you are in the place to be. Not only are the views magnificent, but the colors of the well-publicized azaleas are equally grand. Your outing will start from the Forney Ridge (Clingmans Dome) parking area and head down the southwest slope of the ridge. A left turn at the first intersection will steer you toward Andrews Bald. Trails are well marked and well maintained. If you are unfamiliar with rocky ground, you may find the going either slow or dangerous - choose slow. The trail takes the shelter of the southwest shoulder for a mile before walking the ridge for another mile. Occasional views off to the right are only a taste of those to come.
The bald is on top of a high point (about 5,900 feet) along the ridge. So, on the ascent from the saddle (at 5,750 feet), you are on the North Slope. Accordingly, the vegetation changes to coniferous about a half mile before reaching the top. The climb up to the open field is steep and discouraging. Take heart, just as you resolve it must be further ahead than you thought, you break out suddenly into something like the opening scene of the Sound of Music, only Smokies style. The bald is a grand place for a picnic, or equally good to just sit and rest. Do look around, as there is more than enough room to find a solitary spot with a wonderful view. For those who wish to continue down Forney Ridge, the trail is hidden down the hill and to the right from where you entered. Go straight ahead, towards the view. When you get near the brush, simply follow it to the right until you find it.
Charlie's Bunion - This 1,000 foot sheer drop-off can be found four miles east along the Appalachian Trail. The cliff is named after a bunion that prevented Charlie Conner, an Oconaluftee settler, from traveling through the Gap in 1928.
An 8-mile roundtrip, rated moderate. Following the Appalachian Trail along the state-line ridge, this hike is rocky but has excellent views.
Fishing is another popular Smokies recreation; more than 70 species of fish, including rainbow trout and native brook trout, populate the park's rivers and streams. A fishing license is required for ages 13-65.
Information contained herein is a summary of the fishing regulations for Great Smoky Mountains National Park. The official publication for all Park regulations is Title 36 of the Code of Federal Regulations. A copy of the Code of Federal Regulations may be found at most ranger stations and visitor centers.
Persons possessing a valid Tennessee or North Carolina state fishing license may fish all open Park waters. Licenses must be displayed on demand by authorized personnel. State trout stamps are not required.
Tennessee License Requirements
Residents and nonresidents age 13 and older need a license. The exception is residents who were 65 prior to March 1, 1990. These persons require only proof of age and Tennessee residence.
North Carolina License Requirements
Residents and nonresidents age 16 and older need a license. Residents age 70 and older may obtain a special license from the state.
Persons under 16 in North Carolina and under 13 in Tennessee are entitled to the adult daily bag and possession limits and are subject to all other regulations. The Park does not sell state fishing licenses. They may be purchased in nearby towns.
Fishing is permitted year-round in open waters.
Fishing is allowed from a half hour before official sunrise to a half hour after official sunset.
Daily Possession Limits
The possession of brook trout is prohibited .
Five (5) rainbow or brown trout, smallmouth bass, or a combination of these, each day or in possession, regardless of whether they are fresh, stored in an ice chest, or otherwise preserved. The combined total must not exceed five fish.
Twenty (20) rockbass may be kept in addition to the above limit.
A person must stop fishing immediately after obtaining the limit.
Rainbow and Brown Trout: 7" minimum
All trout or smallmouth bass caught below the legal length must be returned immediately to the water from which they were taken. Any brook trout caught must be returned immediately and unharmed to the water.
Lures, Bait and Equipment
Tennessee: 1-800-332-0900 or 1-800-255-8972
Poaching robs fishermen of fish and all citizens of a valuable natural heritage. You can help by reporting incidents when you see them. Remember, you will remain anonymous. Record vehicle description and license plate number if possible.
There are as many ways to enjoy the Smokies as there are personal preferences for doing so. No other place has such a diversity of fun... whether you're hiking, camping, backpacking, horseback riding, fishing, picnicking, or enjoying the local attractions in Gatlinburg and Pigeon Forge.
Toll Free 800-747-0713 | http://www.knoxville-tn.com/smoky.html | 13 |
89 | ROOTS OF POLYNOMIALS
OF DEGREE GREATER THAN 2
We saw in that topic what is called the factor theorem.
This means that if a polynomial can be factored, for example, as follows:
P(x) = (x − 1)(x + 2)(x + 3)
then the theorem tells us that the roots are 1, −2, and −3.
Conversely, if we know that roots of a polynomial are −2, 1, and 5, then the polynomial has the following factors:
(x + 2)(x − 1)(x − 5).
We could then multiply out and know the polynomial that has those three roots.
We will see below how to prove the factor theorem .
a) Use the Factor Theorem to prove: (x + 1) is a factor of x^5 + 1.
−1 is a root of x^5 + 1. For, (−1)^5 + 1 = −1 + 1 = 0.
b) Use synthetic division to find the other factor.
Therefore, x^5 + 1 = (x + 1)(x^4 − x^3 + x^2 − x + 1)
Following this same procedure, we could prove:
(x + a) is a factor of x^5 + a^5,
and completely generally:
(x + a) is a factor of x^n + a^n, where n is odd.
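As an aside (not part of the original lesson), synthetic division is easy to automate. The following minimal Python sketch divides a polynomial, given as a list of coefficients in descending powers, by (x − r); dividing by (x + 1) therefore means taking r = −1.

```python
# Minimal synthetic-division sketch: divide by (x - r).
def synthetic_division(coeffs, r):
    """Return (quotient_coeffs, remainder) for division by (x - r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])   # bring down, multiply by r, add
    return out[:-1], out[-1]          # last value is the remainder

# x^5 + 1 divided by (x + 1), i.e. r = -1:
quotient, remainder = synthetic_division([1, 0, 0, 0, 0, 1], -1)
print(quotient, remainder)   # [1, -1, 1, -1, 1] 0  ->  x^4 - x^3 + x^2 - x + 1, remainder 0
```

A remainder of 0 confirms that (x + 1) is a factor, in agreement with the Factor Theorem.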
The Fundamental Theorem of Algebra
The following is called the Fundamental Theorem of Algebra:
A polynomial of degree n has at least one root, real or complex.
This apparently simple statement allows us to conclude:
A polynomial P(x) of degree n has exactly n roots, real or complex.
If the leading coefficient of P(x) is 1, then the Factor Theorem allows us to conclude:
P(x) = (x − r_n)(x − r_(n−1)) . . . (x − r_2)(x − r_1)
Hence a polynomial of the third degree, for example, will have three roots. And if they are all real, then its graph will look something like this:
For, the three roots are the three x-intercepts.
Note: If we imagine that the graph begins to the left of the y-axis, then this graph begins below the x-axis. Why? Because in any polynomial, the leading term eventually will dominate. If the leading term is positive and the polynomial is of odd degree, then when x is a large negative number -- that is, far to the left of the origin -- then an odd power of a negative number is itself negative. The graph will be below the x-axis.
As for a polynomial of the fourth degree, it will have four roots. And if they are all real, then its graph will look something like this:
Here, the graph on the far left is above the x-axis. For when the polynomial is of even degree (and the leading coefficient is positive), then an even power of a negative number will be positive. The graph will be above the x-axis.
Example 1. Write the polynomial with integer coefficients that has the following roots: −1, ¾.
Solution. Since −1 is a root, then (x + 1) is a factor. As for the root ¾, it satisfies 4x − 3 = 0, giving the factor (4x − 3).
The factors are (4x − 3)(x + 1).
The polynomial is 4x^2 + x − 3.
Problem 2. Determine the polynomial whose roots are −1, 1, 2, and sketch its graph.
The factors are (x + 1)(x − 1)(x − 2). On multiplying out, the polynomial is (x^2 − 1)(x − 2) = x^3 − 2x^2 − x + 2.
Here is the graph:
The y-intercept is the constant term 2. In every polynomial the y-intercept is the constant term, because the constant term is the value of y when x = 0.
Problem 3. Determine the polynomial with integer coefficients whose roots are −½, −2, −2, and sketch the graph.
The factors are (2x + 1)(x + 2)^2. On multiplying out, the polynomial is (2x + 1)(x^2 + 4x + 4) = 2x^3 + 9x^2 + 12x + 4.
Here is the graph:
−2 is a double root. The graph does not cross the x-axis.
Question. If r is a root of a polynomial p(x), then upon dividing p(x) by x − r, what remainder should you expect?
0. Since r is a root, then x − r is a factor of p(x).
Problem 4. Is x = 2 a root of this polynomial:
x^6 − 3x^5 + 3x^4 − 3x^3 + 3x^2 − 3x + 2 ?
Use synthetic division to divide the polynomial by x − 2, and look at the remainder.
The remainder is 0. 2 is a root of the polynomial.
Example 2. Find the three roots of
P(x) = x^3 − 2x^2 − 9x + 18,
given that one root is 3.
Solution. Since 3 is a root of P(x), then according to the factor theorem, x − 3 is a factor. Therefore, on dividing P(x) by x − 3, we can find the other, quadratic factor.
The three roots are: 2, −3, 3.
Again, since x − 3 is a factor of P(x), the remainder is 0.
Problem 5. Sketch the graph of this polynomial,
y = x^3 − 2x^2 − 5x + 6,
given that one root is −2.
Since −2 is a root, then (x + 2) is a factor. To find the other, quadratic factor, divide the polynomial by x + 2. Note that the root −2 goes in the box:
The three roots are: 1, 3, −2. Here is the graph:
A strategy for finding roots
What, then, is a strategy for finding the roots of a polynomial of degree n > 2?
We must be given, or we must guess, a root r. We can then divide the polynomial by x − r, and hence produce a factor of the polynomial that will be one degree less. If we can discover a root of that factor, we can continue the process, reducing the degree each time, until we reach a quadratic, which we can always solve.
Here is a theorem that will help us guess a root.
The integer root theorem. If an integer is a root of a polynomial whose coefficients are integers and whose leading coefficient is ±1, then that integer is a factor of the constant term.
We will prove this below.
This Integer Root Theorem is an instance of the more general Rational Root Theorem:
If the rational number r/s is a root of a polynomial whose coefficients are integers, then the integer r is a factor of the constant term, and the integer s is a factor of the leading coefficient.
Example 3. What are the possible integer roots of x^3 − 4x^2 + 2x + 4?
Answer. If there are integer roots, they will be factors of the constant term 4; namely: ±1, ±2, ±4.
Now, is 1 a root? To answer, we will divide the polynomial by x − 1, and hope for remainder 0.
The remainder is not 0. 1 is not a root. Let's try −1:
The remainder again is not 0. Let's try 2:
Yes! 2 is a root. We have
x^3 − 4x^2 + 2x + 4 = (x^2 − 2x − 2)(x − 2)
We can now find the roots of the quadratic by completing the square. As we found in Topic 11:
x = 1 ± √3
Therefore, the three roots are:
1 + √3, 1 − √3, 2.
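The candidate-testing in Example 3 can be automated. The sketch below is illustrative only: it repeats the synthetic-division helper so that it is self-contained, tries the divisors of the constant term (the possible integer roots), and finishes the remaining quadratic with the quadratic formula, assuming its roots are real.

```python
import math

def synthetic_division(coeffs, r):
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

def integer_root_candidates(coeffs):
    a0 = abs(int(coeffs[-1]))
    return [s * d for d in range(1, a0 + 1) if a0 % d == 0 for s in (1, -1)]

def find_roots(coeffs):
    roots = []
    while len(coeffs) > 3:                       # degree greater than 2
        for r in integer_root_candidates(coeffs):
            quotient, remainder = synthetic_division(coeffs, r)
            if remainder == 0:
                roots.append(r)
                coeffs = quotient
                break
        else:
            raise ValueError("no integer root found; another method is needed")
    a, b, c = coeffs                             # remaining quadratic: a x^2 + b x + c
    disc = math.sqrt(b * b - 4 * a * c)          # assumes the quadratic has real roots
    return roots + [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# Example 3 above: x^3 - 4x^2 + 2x + 4
print(find_roots([1, -4, 2, 4]))   # [2, 2.732..., -0.732...]  i.e.  2, 1 ± sqrt(3)
```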
a) What are the possible integer roots of this polynomial?
x^3 − 2x^2 − 3x + 1
±1. They are the only factors of the constant term.
b) Does that polynomial have integer roots?
No, because neither 1 nor −1 will make that polynomial equal to 0. Synthetic division by both ±1 does not give remainder 0.
Problem 7. Factor this polynomial into a product of linear factors.
x^3 + 2x^2 − 5x − 6
We must find the roots. The possible integer roots are ±1, ±2, ±3, ±6. Synthetic division reveals that −1 is a root, so that x^3 + 2x^2 − 5x − 6 = (x + 1)(x^2 + x − 6) = (x + 1)(x + 3)(x − 2).
Example 4. A polynomial P(x) has the following roots:
−2, 1 + , 5i.
What is the smallest degree that P(x) could have?
Answer. 5. For, since 1 + is a root, then so is its conjugate, 1 − . And since 5i is a root, so is its conjugate, −5i.
P(x) has at least these 5 roots:
−2, 1 ± , ±5i.
Problem 8. Construct a polynomial that has the following root:
a) 2 + √3
Since 2 + √3 is a root, then so is 2 − √3. Therefore, according to the theorem of the sum and product of the roots (Topic 10), they are the roots of x^2 − 4x + 1.
b) 2 − 3i
Since 2 − 3i is a root, then so is 2 + 3i. Again, according to the theorem of the sum and product of the roots, they are the roots of x^2 − 4x + 13. See Topic 10, Example 7.
Problem 9. Construct a polynomial whose roots are 1 and 5i.
Since 5i is a root, then so is its conjugate, −5i. They will be the roots of a quadratic factor of the polynomial. The sum of those roots is 0. The product is 25. Therefore the quadratic factor is (x^2 + 25).
Next, since 1 is a root, then (x − 1) is a factor. Therefore the polynomial is
(x − 1)(x^2 + 25) = x^3 − x^2 + 25x − 25.
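As an illustrative aside (not from the original lesson), multiplying out factors is just a convolution of their coefficient lists; the short sketch below reproduces the expansion in Problem 9.

```python
# Multiply two polynomials given as coefficient lists (highest degree first).
def poly_mult(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x - 1)(x^2 + 25)  ->  x^3 - x^2 + 25x - 25
print(poly_mult([1, -1], [1, 0, 25]))   # [1, -1, 25, -25]
```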
Problem 10. Let f(x) = x^5 + x^4 + x^3 + x^2 − 12x − 12. One root is √3 and another is −2i.
If f(x) has integer roots, how many could it have?
One. This is a polynomial of the 5th degree, and has 5 roots. Two are √3 and −√3, and two are 2i and −2i. That leaves exactly one root that could be an integer (it is, in fact, −1).
Problem 11. Is it possible for a polynomial of the 5th degree to have 2 real roots and 3 imaginary roots?
No, it is not. Since imaginary roots always come in pairs, then if there are any imaginary roots, there will always be an even number of them.
Consider the graph of a 5th degree polynomial with positive leading term. When x is a large negative number, the graph is below the x-axis. When x is a large positive number, it is above the x-axis. Therefore, the graph must cross the x-axis at least once. Now, can you draw the graph so that it crosses the x-axis exactly twice? No, you cannot. A polynomial of odd degree must have an odd number of real roots.
Proof of the factor theorem
x − r is a factor of a polynomial P(x) if and only if r is a root of P(x).
First, if (x − r) is a factor of P(x), then P(r) will have the factor (r − r), which is 0. This will make P(r) = 0. This means that r is a root.
Conversely, if r is a root of P(x), then P(r) = 0. But according to the remainder theorem, P(r) = 0 means that upon dividing P(x) by x − r, the remainder is 0. x − r, therefore, is a factor of P(x).
This is what we wanted to prove.
Proof of the integer root theorem
If an integer is a root of a polynomial whose coefficients are integers and whose leading coefficient is ±1, then that integer is a factor of the constant term.
Let the integer r be a root of this polynomial:
P(x) = ±x^n + a_(n−1)x^(n−1) + a_(n−2)x^(n−2) + . . . + a_2x^2 + a_1x + a_0,
where the a's are integers. Then, since r is a root,
P(r) = ±r^n + a_(n−1)r^(n−1) + a_(n−2)r^(n−2) + . . . + a_2r^2 + a_1r + a_0 = 0.
Transpose the constant term a_0, and factor r from the remaining terms:
r(±r^(n−1) + a_(n−1)r^(n−2) + . . . + a_2r + a_1) = −a_0
Now the a's are all integers; therefore the expression in parentheses is an integer, which, for convenience, we will call −q:
r(−q) = −a_0,
rq = a_0.
Thus, the constant term a_0 can be factored as rq, with r and q both integers. Under those conditions, then, r is a factor of the constant term.
This is what we wanted to prove.
Questions or comments? | http://www.themathpage.com/aPreCalc/factor-theorem.htm | 13 |
58 | Like all objects that do or can move, NASA Sounding Rockets obey Newton's Laws of Motion. The following text provides a very basic summary of the three laws of motion.
External Forces - Let's start with the basics. A given object can be acted on by more than one force. An apple hanging on a tree is acted upon by two basic forces: 1) the downward force exerted by gravity and 2) the upward force exerted by the tree limb. The downward gravity force is also known as "weight".
Balanced Forces - Forces can be generated in other ways as well. The two moving men depicted in the figure below are applying two external forces on the box. If both are pushing with the same force, but in opposite directions, the resultant force acting on the box is zero.
Illustration of Balanced Forces at work
From this example, we can see that forces have a magnitude and a direction. Quantities that have both MAGNITUDE and DIRECTION are known as VECTORS. In the case of the two moving men, one can be defined to be pushing in the positive direction while the other is pushing in the negative direction. If both are pushing with a force of 10 Newtons, one is pushing with a -10 Newton force and the other is pushing with a +10 Newton force. Adding these two vector quantities together results in a resultant force of zero.
Force exerted by the mover on the left = +F
Force exerted by the mover on the right = -F
If we add the two "equal", but "opposite" forces generated by the two movers, the total force is zero.
Total Force = (+F) + (-F) = 0
The end result may be a lot of sweat on the part of both moving men, but the box isn't going to budge! This may become mathematically obvious once we discuss Newton's Second Law of Motion.
Imbalanced Forces - If the mover on the right takes a break, the force becomes unbalanced. Since an external force now exists, the box will begin to move.
This makes intuitive sense, but Newton's 2nd Law will show it from a mathematical perspective. Now that we have been reminded of the concept of "external forces", we can move on to the underlying concept of the First Law - inertia.
The First Law is also referred to as the principle of inertia, which is an object's apparent resistance to a change in motion. The word "inertia" is Latin for "laziness" or "idleness". This is a fitting term since the first law relates to an object's "lack of desire" to change its own motion. As stated earlier, objects in motion "want" to stay in motion and objects at rest "want" to stay at rest unless some external force prompts them to change their speed or direction. External forces include friction, gravity or some other pushing or pulling force. These forces can be generated by a baseball bat, rocket motor, pressure imbalance, wind, or a million other things.
Newton's 1st law of motion demonstrates the difference between the Aristotelian worldview and that of Newton. Aristotle assumed that everything had its "natural" position. A rock's natural position is at the center of the Earth, which is why it falls downward when it is dropped. Water's natural position is on the surface of the Earth and that's why it forms pools, puddles and lakes on top of the rock. Air's natural position is above the ground and hence it floats above the surface of the Earth. Newton's 1st law implies that objects have no "natural" position. In the absence of the force of gravity, a rock thrown into the air will move away from the center of the Earth and never fall back. In the absence of the force of gravity, air, rocks and water would float around together and they would have no "natural" position.
The inertia of an object depends on the object's mass. Heavy objects have a high inertia while lighter objects have a low inertia. Since weight is related to the amount of matter contained within an object, more matter equates to more inertia. For example, a mouse consists of a very small amount of matter, and as a result, has a very small inertia. An elephant on the other hand, is composed of a large amount of matter and has a corresponding high inertia. Obviously it is much harder to stop a moving elephant than it is to stop a moving mouse. If you have doubts about this, try stopping a mouse rolling down a hill on a skateboard and then try to stop an elephant on his skateboard. This experiment will make the concept of inertia "painfully" obvious.
Newton's first law of motion is essentially the basis for all motion. It applies to the trajectory (flight path) of a baseball after it is hit as well as the flight of a rocket.
For example, when the rocket is sitting on the launch pad (Figure A), its weight is exactly opposed by the upward "supporting" force generated by the launch pad. These diametrically opposed forces result in a net external force of zero. Since the rocket is initially at rest and no net external force exists, it will stay at rest. Once the rocket motor is ignited (Figure B), the external force is no longer balanced and the rocket begins to move. In this case, the external thrusting force overcomes the rocket's "desire" to stay motionless.
Most people know this law as "force is equal to the mass times acceleration". As simple as this law sounds, it has some very profound implications. It states that if we know the net external force acting on the body, we can calculate the acceleration of that body.
In its basic form, the equation F = ma probably isn't going to quite register with people. However, if we apply a little algebra, we can transform the equation into a more usable form:
Step 1 - Divide each side by Mass (M)
F / M = (M x a) / M
Step 2 - Cancel the mass terms on the right-hand side
F / M = a
Step 3 - Swap the sides of the equality
a = F / M
This new form of the second law tells us the acceleration of an object is equal to the force acting on the object divided by the object's mass.
Note that this law also deals with "net external forces" and if the net external force is zero, the object will have zero acceleration. This means the object will not speed up or slow down.
Newton's 2nd law is the basis for rocket trajectory simulations. The velocity of the rocket can be obtained by "integrating" the area under the acceleration curve. The acceleration curve (a = F / M) is defined by the weight, thrust, and drag of the rocket at any given time. Integrating the area under the velocity curve yields the distance the rocket has moved. A detailed dissertation of this process will be deferred to a later time.
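Although a detailed treatment is deferred, the step-by-step integration described above is easy to sketch in a few lines of code. The mass, thrust, burn time, drag coefficient and time step below are made-up illustrative values, not NASA sounding rocket data; the point is only to show the acceleration (a = F / M) being accumulated into velocity, and the velocity into altitude.

```python
# Simplified one-dimensional rocket flight using Newton's second law.
g = 9.81           # m/s^2, gravitational acceleration
mass = 50.0        # kg (held constant here; real rockets lose propellant mass)
thrust = 1500.0    # N, applied only during the first 5 seconds
drag_coeff = 0.02  # N per (m/s)^2, crude drag model

dt, t, v, h = 0.01, 0.0, 0.0, 0.0
while h >= 0.0:
    F_thrust = thrust if t < 5.0 else 0.0
    F_drag = drag_coeff * v * abs(v)         # always opposes the motion
    F_net = F_thrust - mass * g - F_drag     # net external force
    a = F_net / mass                         # a = F / M  (second law)
    v += a * dt                              # integrate acceleration -> velocity
    h += v * dt                              # integrate velocity -> altitude
    t += dt

print(f"total flight time is roughly {t:.1f} s")
```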
The second law can be used to predict how fast a falling ball will be traveling just before it hits the ground and how long it will take to fall. By expanding the "degrees of freedom" of the example and adding appropriate force vectors, we can use Newton's 2nd law to predict the flight path of a rocketship.
Most people know Newton's Third Law as the law of "action and reaction". When one object (i.e. a girl) pushes on another object (i.e. a boy), the first object moves in one direction and the second object moves in the opposite direction.
Some scientists do not like this generalized definition of the third law because it tends to focus on the resulting motions rather than the actual forces involved in the phenomena. This general statement also implies that one event (the "action") is more important than, and occurs before, the other (the "reaction"). This may appear to be the case when we consider a child pulling a wagon, but when we consider the case of a thrusting rocket motor, it is not clear if the rocket motor is pushing on the escaping gas or the escaping gas is pushing on the rocket motor. In reality, both actions occur at the same time and it is theoretically impossible to distinguish which object is actually applying the force. As such, it may be better to state the third law in the following manner:
"When one object exerts a force on a second object, the second object exerts an equal and opposite force on the first".
Purists refer to this as the "law of interaction", because the forces involved interact with each of the bodies.
This law applies to all objects that are undergoing an acceleration. The action of the rocket motor causes the rocket to move in the opposite direction. The child's foot pushing off the ground causes her skateboard to move in the opposite direction.
Newton's second law ties into the third law because we usually equate the action/reaction to the resulting motion that is created as the result of the applied forces. If we know the applied force between the objects and the mass of those objects, we can predict the resulting motion. | http://sites.wff.nasa.gov/code810/edu_newton.html | 13 |
59 | Discrete vs Continuous Probability Distributions
Statistical experiments are random experiments that can be repeated indefinitely with a known set of outcomes. A variable is said to be a random variable if it is an outcome of a statistical experiment. For example, consider a random experiment of flipping a coin twice; the possible outcomes are HH, HT, TH, and TT. Let the variable X be the number of heads in the experiment. Then, X can take the values 0, 1 or 2, and it is a random variable. Observe that there is a definite probability for each of the outcomes X = 0, X = 1, and X = 2.
Thus, a function can be defined from the set of possible outcomes to the set of real numbers in such a way that ƒ(x) = P(X=x) (the probability of X being equal to x) for each possible outcome x. This particular function f is called the probability mass/density function of the random variable X. Now the probability mass function of X, in this particular example, can be written as ƒ(0) = 0.25, ƒ(1) = 0.5, ƒ(2) = 0.25.
Also, a function called cumulative distribution function (F) can be defined from the set of real numbers to the set of real numbers as F(x) = P(X ≤x) (the probability of X being less than or equal to x) for each possible outcome x. Now the cumulative distribution function of X, in this particular example, can be written as F(a) = 0, if a<0; F(a) = 0.25, if 0≤a<1; F(a) = 0.75, if 1≤a<2; F(a) = 1, if a≥2.
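The two-coin-flip example above can be reproduced in a few lines of code; the following is only an illustrative sketch, not part of the original text.

```python
from itertools import product
from collections import Counter

outcomes = list(product("HT", repeat=2))           # HH, HT, TH, TT
counts = Counter(o.count("H") for o in outcomes)   # number of heads per outcome

pmf = {x: counts[x] / len(outcomes) for x in sorted(counts)}
print(pmf)                        # {0: 0.25, 1: 0.5, 2: 0.25}

def cdf(a):
    """F(a) = P(X <= a) for the discrete random variable X."""
    return sum(p for x, p in pmf.items() if x <= a)

print(cdf(0.5), cdf(1), cdf(3))   # 0.25 0.75 1.0
```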
What is a discrete probability distribution?
If the random variable associated with the probability distribution is discrete, then such a probability distribution is called discrete. Such a distribution is specified by a probability mass function (ƒ). The example given above is an example of such a distribution, since the random variable X can have only a finite number of values. Common examples of discrete probability distributions are the binomial distribution, Poisson distribution, hypergeometric distribution and multinomial distribution. As seen from the example, the cumulative distribution function (F) is a step function and ∑ ƒ(x) = 1.
What is a continuous probability distribution?
If the random variable associated with the probability distribution is continuous, then such a probability distribution is said to be continuous. Such a distribution is defined using a cumulative distribution function (F). Then it is observed that the probability density function ƒ(x) = dF(x)/dx and that ∫ƒ(x) dx = 1. Normal distribution, student t distribution, chi squared distribution, and F distribution are common examples for continuous probability distributions.
What is the difference between a discrete probability distribution and a continuous probability distribution?
• In discrete probability distributions, the random variable associated with it is discrete, whereas in continuous probability distributions, the random variable is continuous.
• Continuous probability distributions are usually introduced using probability density functions, but discrete probability distributions are introduced using probability mass functions.
• The frequency plot of a discrete probability distribution is not continuous, but it is continuous when the distribution is continuous.
• The probability that a continuous random variable will assume a particular value is zero, but it is not the case in discrete random variables. | http://www.differencebetween.com/difference-between-discrete-and-vs-continuous-probability-distributions/ | 13 |
72 | The shortest path between two points on a plane is a straight line. On the surface of a sphere, however, there are no straight lines. The shortest path between two points on the surface of a sphere is given by the arc of the great circle passing through the two points. A great circle is defined to be the intersection with a sphere of a plane containing the center of the sphere.
Two great circles
If the plane does not contain the center of the sphere, its intersection with the sphere is known as a small circle. In more everyday language, if we take an apple, assume it is a sphere, and cut it in half, we slice through a great circle. If we make a mistake, miss the center and hence cut the apple into two unequal parts, we will have sliced through a small circle.
Two small circles
If we wish to connect three points on a plane using the shortest possible route, we would draw straight lines and hence create a triangle. For a sphere, the shortest distance between two points is a great circle. By analogy, if we wish to connect three points on the surface of a sphere using the shortest possible route, we would draw arcs of great circles and hence create a spherical triangle. To avoid ambiguities, a triangle drawn on the surface of a sphere is only a spherical triangle if it has all of the following properties:
The figure below shows a spherical triangle, formed by three intersecting great circles, with arcs of length (a,b,c) and vertex angles of (A,B,C).
Note that the angle between two sides of a spherical triangle is defined as the angle between the tangents to the two great circle arcs, as shown in the figure below for vertex angle B.
The rotation of the Earth on its axis presents us with an obvious means of defining a coordinate system for the surface of the Earth. The two points where the rotation axis meets the surface of the Earth are known as the north pole and the south pole and the great circle perpendicular to the rotation axis and lying half-way between the poles is known as the equator. Great circles which pass through the two poles are known as meridians and small circles which lie parallel to the equator are known as parallels or latitude lines.
The latitude of a point is the angular distance north or south of the equator, measured along the meridian passing through the point. A related term is the co-latitude, which is defined as the angular distance between a point and the closest pole as measured along the meridian passing through the point. In other words, co-latitude = 90° - latitude.
Distance on the Earth's surface is usually measured in nautical miles, where one nautical mile is defined as the distance subtending an angle of one minute of arc at the Earth's center. A speed of one nautical mile per hour is known as one knot and is the unit in which the speed of a boat or an aircraft is usually measured.
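Because one nautical mile corresponds to one arcminute of central angle, great-circle distances are easy to compute. The following is a minimal sketch (not part of the original lecture) using the spherical law of cosines; the London and New York coordinates are rough illustrative values.

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Central angle between two points on a sphere, converted to nautical miles."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_c = (math.sin(p1) * math.sin(p2) +
             math.cos(p1) * math.cos(p2) * math.cos(dlon))
    central = math.acos(max(-1.0, min(1.0, cos_c)))   # clamp against rounding
    return math.degrees(central) * 60.0               # degrees -> arcminutes -> nm

# London (51.5 N, 0.1 W) to New York (40.7 N, 74.0 W), roughly:
print(round(great_circle_nm(51.5, -0.1, 40.7, -74.0)))   # about 3000 nautical miles
```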
Humans perceive in Euclidean space -> straight lines and planes. But, when distances are not visible (i.e. very large), then the apparent shape that the mind draws is a sphere -> thus, we use a spherical coordinate system for mapping the sky, with the additional advantage that we can project Earth reference points (i.e. North Pole, South Pole, equator) onto the sky. Note: the sky is not really a sphere!
From the Earth's surface we envision a hemisphere and mark the compass points on the horizon. The circle that passes through the south point, north point and the point directly over head (zenith) is called the meridian.
This system allows one to indicate any position in the sky by two reference points, the time from the meridian and the angle from the horizon. Of course, since the Earth rotates, your coordinates will change after a few minutes.
The horizontal coordinate system (commonly referred to as the alt-az system) is the simplest coordinate system as it is based on the observer's horizon. The celestial hemisphere viewed by an observer on the Earth is shown in the figure below. The great circle through the zenith Z and the north celestial pole P cuts the horizon NESYW at the north point (N) and the south point (S). The great circle WZE at right angles to the great circle NPZS cuts the horizon at the west point (W) and the east point (E). The arcs ZN, ZW, ZY, etc, are known as verticals.
The two numbers which specify the position of a star, X, in this system are the azimuth, A, and the altitude, a. The altitude of X is the angle measured along the vertical circle through X from the horizon at Y to X. It is measured in degrees. An often-used alternative to altitude is the zenith distance, z, of X, indicated by ZX. Clearly, z = 90 - a. Azimuth may be defined in a number of ways. For the purposes of this course, azimuth will be defined as the angle between the vertical through the north point and the vertical through the star at X, measured eastwards from the north point along the horizon from 0 to 360°. This definition applies to observers in both the northern and the southern hemispheres.
It is often useful to know how high a star is above the horizon and in what direction it can be found - this is the main advantage of the alt-az system. The main disadvantage of the alt-az system is that it is a local coordinate system - i.e. two observers at different points on the Earth's surface will measure different altitudes and azimuths for the same star at the same time. In addition, an observer will find that the star's alt-az coordinates changes with time as the celestial sphere appears to rotate. Despite these problems, most modern research telescopes use alt-az mounts, as shown in the figure above, owing to their lower cost and greater stability. This means that computer control systems which can transform alt-az coordinates to equatorial coordinates are required.
The celestial sphere has a north and south celestial pole as well as a celestial equator which are projected reference points to the same positions on the Earth surface. Right Ascension and Declination serve as an absolute coordinate system fixed on the sky, rather than a relative system like the zenith/horizon system. Right Ascension is the equivalent of longitude, only measured in hours, minutes and seconds (since the Earth rotates in the same units). Declination is the equivalent of latitude measured in degrees from the celestial equator (0 to 90). Any point of the celestial (i.e. the position of a star or planet) can be referenced with a unique Right Ascension and Declination.
The celestial sphere has a north and south celestial pole as well as a celestial equator which are projected from reference points from the Earth surface. Since the Earth turns on its axis once every 24 hours, the stars trace arcs through the sky parallel to the celestial equator. The appearance of this motion will vary depending on where you are located on the Earth's surface.
Note that the daily rotation of the Earth causes each star and planet to make a daily circular path around the north celestial pole referred to as the diurnal motion.
Equatorial Coordinate System :
Because the altitude and azimuth of a star are constantly changing, it is not possible to use the horizontal coordinate system in a catalog of positions. A more convenient coordinate system for cataloging purposes is one based on the celestial equator and the celestial poles and defined in a similar manner to latitude and longitude on the surface of the Earth. In this system, known as the equatorial coordinate system, the analog of latitude is the declination, δ. The declination of a star is its angular distance in degrees measured from the celestial equator along the meridian through the star. It is measured north and south of the celestial equator and ranges from 0° at the celestial equator to 90° at the celestial poles, being taken to be positive when north of the celestial equator and negative when south. In the figure below, the declination of the star X is given by the angle between Y and X.
The analog of longitude in the equatorial system is the hour angle, H (you may also see the symbol HA used). Defining the observer's meridian as the arc of the great circle which passes from the north celestial pole through the zenith to the south celestial pole, the hour angle of a star is measured from the observer's meridian westwards (for both northern and southern hemisphere observers) to the meridian through the star (from 0° to 360°). Because of the rotation of the Earth, hour angle increases uniformly with time, going from 0° to 360° in 24 hours. The hour angle of a particular object is therefore a measure of the time since it crossed the observer's meridian - hence the name. For this reason it is often measured in hours, minutes and seconds of time rather than in angular measure (just like longitude). In figure above, the hour angle of the star X is given by the angle Z-NCP-X. Note that all stars attain their maximum altitude above the horizon when they transit (or attain upper culmination on, in the case of circumpolar stars) the observers meridian.
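A hedged sketch of the transformation mentioned earlier (the one a telescope control system must perform) is given below: it converts an hour angle H and declination to altitude and azimuth for an observer at a given latitude, using the standard spherical-trigonometry relations. The worked numbers are illustrative only.

```python
import math

def equatorial_to_horizontal(H, dec, lat):
    """Hour angle H, declination dec and latitude lat (all in degrees)
    to altitude and azimuth (azimuth measured eastwards from north)."""
    H, dec, lat = map(math.radians, (H, dec, lat))
    sin_alt = (math.sin(dec) * math.sin(lat) +
               math.cos(dec) * math.cos(lat) * math.cos(H))
    alt = math.asin(sin_alt)
    cos_az = (math.sin(dec) - sin_alt * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))   # clamp against rounding errors
    if math.sin(H) > 0:                           # object west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# A star on the celestial equator, one hour (15 degrees) past the meridian,
# seen from latitude 40 N (illustrative numbers):
print(equatorial_to_horizontal(15.0, 0.0, 40.0))   # roughly (47.7, 202.6)
```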
The declination of a star does not change with time. The hour angle does, and hence it is not a suitable coordinate for a catalogue. This problem is overcome in a manner analogous to the way in which the Greenwich meridian has been (arbitrarily) selected as the zero point for the measurement of longitude. The zero point chosen on the celestial sphere is the first point of Aries, γ, and the angle between it and the intersection of the meridian through a celestial object and the celestial equator is called the right ascension (RA) of the object. Right ascension is sometimes denoted by the Greek letter α and is measured from 0h to 24h along the celestial equator eastwards (in the direction of a right-handed screw motion about the direction to the north celestial pole) from the first point of Aries, that is, in the opposite direction to that in which hour angle is measured. Like the definition of hour angle, this convention holds for observers in both northern and southern hemispheres. In the figure above, the right ascension of the star X is given by the angle γ-NCP-Y.
Most modern research telescopes do not use equatorial mounts due to their higher cost and lower stability. This is at the expense of the simplicity of telescope tracking - an equatorially-mounted telescope need only move its right ascension axis in order to track the motion of the celestial sphere. The figure above shows an example of an equatorially-mounted telescope.
Drawn onto the celestial sphere are imaginary shapes called constellations, Latin for `group of stars'. Due to the nature of the Earth's surface, the sky is divided into the northern and southern sky as seen from each hemisphere.
The origin of the names of particular constellations is lost with time, dating back before written records. The ancient Greeks were the first to record the oral legends. But the boundaries of the constellations were fixed by the International Astronomical Union in 1928. For many of the constellations it is easy to see where they got their names. For example,
In all, there are 88 officially recognized constellations, many of them dating back to the catalog of Hipparchus around 100 B.C. To find out more about your favorite constellation, go to Constellation of the Month. Since the boundaries are fixed, a star will always remain in a constellation unless its proper motion moves it into another.
Hipparchus also developed a simply method of identifying the stars in the sky by using a letter from the Greek alphabet combined with the constellation name.
So, for example, the brightest star in the constellation Orion is Alpha Orion, the second brightest star is Beta Orion, and so on. When the letters run out, we use a number 33 Orion, 101 Orion, etc. Some of the very brightest stars have their own names due to their importance to early navigators. For example, Alpha Canis Major is Sirius, the Dog Star.
About 6000 stars are visible with the naked eye on a dark, moonless night. However, there are over 10^11 stars in the whole Milky Way galaxy, where the solar system resides. Thus, we only see a very small fraction of the closest and brightest stars with our eyes. The first star catalog was published by Ptolemy in the 2nd century. It contained the positions of 1025 of the brightest stars in the sky. The first modern star catalog was the Bonner Durchmusterung by Argelander in 1860, containing 320,000 stars.
Since the Earth's axis is tilted 23 1/2 degrees from the plane of our orbit around the Sun, The apparent motion of the Sun through the sky during the year is a circle that is inclined 23 1/2 degrees from the celestial equator. This circle is called the ecliptic and passes through 12 of the 88 constellations that we call the zodiac.
Equinox and Solstice:
The projection of the Sun's path across the sky during the year is called the ecliptic. The points where the ecliptic crosses the celestial equator are the vernal and autumnal equinox's. The point were the Sun is highest in the northern hemisphere is called the summer solstice. The lowest point is the winter solstice.
Days are longest in the summer for the northern hemisphere due to the tilt of the Earth's axis allowing more sunlight to be projected onto the surface. Note also the reason for the "midnight sun" at the North Pole in summer. The longest day of the year is at the summer solstice.
For opposite reasons, days are short and nights long in the winter.
The seasons are caused by the angle the sun's rays make with the ground. Higher Sun angle means more luminosity per square meter. Low Sun angle produces fewer rays per square meter. More intensity means more heat and, therefore, higher temperatures.
Note that, due to the fact that our oceans store heat, the actual changes in mean Earth temperature are delayed by several weeks, i.e. the hottest days of summer are usually in late July, over a month from the summer solstice.
Sidereal and Synodic time:
A `day' is defined by the rotation of the object in question. For example, the Moon's `day' is 27 Earth days.
A `year' is defined by the revolution of the object in question. For example, the Earth's year is 365 days, divided into months, whereas Pluto's `year' is 248.6 Earth years.
Typically we use synodic time, which means time with respect to the Sun, in our everyday life. For example, noon, midnight and twilight are all examples of synodic time based on where the Sun is in the sky (e.g. directly overhead at the equator for noon). Astronomers often use sidereal time, which means time with respect to the stars, for their measurements.
Since the Earth moves around the Sun once every 365 days, the Sun's apparent position in the sky changes from day to day.
Phases of the Moon:
The Moon is tidally locked to the Earth, meaning that one side always faces us (the nearside), whereas the farside is forever hidden from us. In addition, the Moon is illuminated on one side by the Sun, the other side is dark (night).
Which parts are illuminated (daytime) and which parts we see from the Earth are determined by the Moon's orbit around the Earth, what is called the phase of the Moon (click here for the current phase of the Moon).
As the Moon moves counterclockwise around the Earth, the daylight side becomes more and more visible (i.e. we say the Moon is `waxing'). After full Moon is reached we begin to see more and more of the nighttime side (i.e. we say the Moon is `waning'). This whole monthly sequence is called the phases of the Moon.
On rare occasions the Moon comes between the Earth and the Sun (a solar eclipse) or the Moon enters the Earth's shadow (a lunar eclipse). | http://abyss.uoregon.edu/~js/ast121/lectures/lec03.html | 13 |
79 | Return to IB Physics
Topic 2: Mechanics
2.1.1 Define displacement, velocity, speed and acceleration.
|Symbol||Definition||SI Unit||Vector or Scalar?|
|Displacement||s||The distance moved in a particular direction||m||Vector|
|Velocity||v or u||The rate of change of displacement. Velocity = change of displacement over time taken||m s^-1||Vector|
|Speed||v or u||The rate of change of distance. Speed = distance gone over time taken||m s^-1||Scalar|
|Acceleration||a||The rate of change of velocity. Acceleration = change of velocity over time taken||m s^-2||Vector|
- Vector quantities always have a direction associated with them.
2.1.2 Define and explain the difference between instantaneous and average values of speed, velocity and acceleration.
- Average value - over a period of time.
- Instantaneous value - at one particular time.
2.1.3 Describe an object's motion from more than one frame of reference.
Graphical representation of motion
2.1.4 Draw and analyse distance–time graphs, displacement–time graphs, velocity–time graphs and acceleration–time graphs.
2.1.5 Analyse and calculate the slopes of displacement–time graphs and velocity – time graphs, and the areas under velocity–time graphs and acceleration–time graphs. Relate these to the relevant kinematic quantity.
Uniformly accelerated motion
Determine the velocity and acceleration from simple timing situations
Derive the equations for uniformly accelerated motion.
Describe the vertical motion of an object in a uniform gravitational field.
Describe the effects of air resistance on falling objects.
Solve problems involving uniformly accelerated motion.
Forces and Dynamics (2.2)
Forces and free-body diagrams
Newton’s first law
Newton's First Law of Motion states that in the absence of a resultant force, a body will remain in its state of motion (at rest, or moving with constant velocity).
Equilibrium is the condition of a system in which competing influences (such as forces) are balanced.
Newton’s second law
ΣF = ma
Alternately: ΣF = Δp/Δt. In words, the resultant force is all that matters in the second law. The direction of the acceleration (and hence of the change in motion) is the direction of the resultant force.
Newton’s third law
If body A exerts a force on body B, then body B exerts an equal and opposite force on body A.
Inertial Mass, Gravitational Mass and Weight (2.3)
An object's inertial mass is defined as the ratio of the applied force F, to its acceleration, a.
State Newton's first law of motion (2.2.4)
In ancient times, Aristotle had maintained that a force is what is required to keep a body in motion. The higher the speed, the larger the force needed. Aristotle's idea of force is not unreasonable and is in fact in accordance with experience from everyday life: It does require a force to push a piece of furniture from one corner of a room to another. What Aristotle failed to appreciate is that everyday life is plagued by friction. An object in motion comes to rest because of friction and thus a force is required if it is to keep moving. This force is needed in order to cancel the force of friction that opposes the motion. In an idealized world with no friction, a body that is set into motion does not require a force to keep it moving. Galileo, 2000 years after Aristotle, was the first to realize that the state of no motion and the state of motion with constant speed in a straight line are indistinguishable from each other. Since no force is present in the case of no motion, no forces are required in the case of motion in a straight line with constant speed either. Force is related to changes in velocity (i.e. acceleration)
Newton's first law (generalizing Galileo's statements) states the following:
When no forces act on a body, that body will either remain at rest or continue to move along a straight line at constant speed.
A body that moves with acceleration (i.e. changing speed or changing direction of motion) must have a force acting on it. An ice hockey puck slides on ice with practically no friction and will thus move with constant speed in a straight line. A spacecraft leaving the solar system with its engines off has no force acting on it and will continue to move in a straight line at constant speed (until it encounters another body that will attract or hit it). Using the first law, it is easy to see if a force is acting on a body. For example, the earth rotates around the sun and thus we know at once that a force must be acting on the Earth.
Newton's first law is also called the law of Inertia
Inertia is the reluctance of a body to change its state of motion. Inertia keeps the body in the same state of motion when no forces act on the body. When a car accelerates forward, the passengers are thrown back into their seats. If a car brakes abruptly, the passengers are thrown forward. This implies that a mass tends to stay in the state of motion it was in before the force acted on it. The reaction of a body to a change in its state of motion is inertia.
A well-known example of inertia is that of a magician who very suddenly pulls the tablecloth off a table leaving all the plates, glasses, etc., behind on the table. The inertia of these objects make them 'want' to stay on the table where they are. Similarly, if you pull very suddenly on a roll of kitchen paper you will tear off a sheet. But if you pull gently you will only succeed in making the paper roll rotate.
Work, Energy and Power (2.5)
Work refers to an activity involving a force and movement along the direction of the force. It is a scalar quantity that is measured in Joules (Newton meters in SI units) which can be defined as:
Work done= F×s×cosθ
Where F is the force applied to the object, s is the displacement of the object, and cosθ is the cosine of the angle between the force and the displacement. In a linear example (with the force being exerted in the same direction as the displacement), cosθ is equal to 1 and the equation simplifies to W = F×s.
Example calculation: If a force of 20 newtons pushes an object 5 meters in the same direction as the force what is the work done?
F = 20 N, s = 5 m, W = F×s = 20×5 = 100 J. 100 joules of work is done.
Examples (when is work done?):
- A force making an object move faster (accelerating it)
- Lifting an object up (moving it to a higher position in the gravitational field)
- Compressing a spring
When is work not done
- When there is no force
- When an object is moving at a constant speed
- When an object is not moving
Some useful equations;
If an object is being lifted vertically, the work done on it can be calculated using the equation
Work done = mgh
where m is the mass in kilograms, g is the Earth's gravitational field strength (10 N kg^-1), and h is the gain in height in meters.
Work done in compressing or extending a spring
Work done = ½kx^2
Where k is Hooke's constant and x is the displacement
Energy and Power
Energy is the capacity for doing work. The amount of energy you transfer is equal to the work done. Energy is a measure of the amount of work done, this means that the units for energy and work must be the same- joules. Energy is like the "currency" for performing work. To do 100 joules of work, you must expend 100 joules of energy.
Conservation of energy
In any situation the change in energy must be accounted for. If it is 'lost' by one object it must be gained by another. This is the principle of conservation of energy which can be stated in several ways:
- The total overall energy of a closed system must be constant
- Energy is neither created or destroyed, it just changes form.
- there is no change in the total energy of the universe
Energy can be in many different types these include:
- Kinetic energy, Gravitational potential energy, Elastic potential energy, Electrostatic potential energy, Thermal energy, Electrical energy, Chemical energy, Nuclear energy, Internal energy, Radiant energy, Solar energy, and light energy.
You will need equations for the first three
- Kinetic energy = ½mv^2, where m is the mass in kg and v is the velocity (in m s^-1)
- Gravitational potential energy = mgh, where m is the mass in kg, g is the gravitational field strength, and h is the change in height
- Elastic potential energy = ½kx^2, where k is the spring constant and x is the extension (a short numerical sketch of all three follows below)
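A short numerical sketch of the three formulas above; all values are illustrative assumptions, with g taken as 10 N kg^-1.

```python
m, v, g, h = 2.0, 3.0, 10.0, 5.0     # kg, m/s, N/kg, m
k, x = 200.0, 0.10                   # spring constant in N/m, extension in m

kinetic   = 0.5 * m * v**2           # ½ m v^2   ->  9 J
potential = m * g * h                # m g h     ->  100 J
elastic   = 0.5 * k * x**2           # ½ k x^2   ->  1 J

print(kinetic, potential, elastic)
```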
Power, measured in watts (W) or joules per second (J s-1), is the rate of doing work or the rate at which energy is transferred.
Power = energy transferred ÷ time taken = work done ÷ time taken
If something is moving at a constant velocity v against a constant frictional force f, the power P needed is P= fv
If you do 100 joules of work in one second (using 100 joules of energy), the power is 100 watts.
Efficiency is the ratio of useful energy output to the total energy transferred.
The change in the kinetic energy of an object is equal to the net work done on the object.
This fact is referred to as the Work-Energy Principle and is often a very useful tool in mechanics problem solving. It is derivable from conservation of energy and the application of the relationships for work and energy, so it is not independent of the conservation laws. It is in fact a specific application of conservation of energy. However, there are so many mechanical problems which are solved efficiently by applying this principle that it merits separate attention as a working principle.
For a straight-line collision, the net work done is equal to the average force of impact times the distance traveled during the impact.
Average impact force x distance traveled = change in kinetic energy
If a moving object is stopped by a collision, extending the stopping distance will reduce the average impact force.
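A minimal Python sketch of that statement, with hypothetical numbers for a 1000 kg car stopped from 20 m s-1:

def average_impact_force(mass, speed, stopping_distance):
    # Work-energy principle: F_avg * d = change in kinetic energy
    delta_ke = 0.5 * mass * speed**2
    return delta_ke / stopping_distance

print(average_impact_force(1000, 20, 1))    # 200000.0 N over 1 m
print(average_impact_force(1000, 20, 10))   # 20000.0 N over 10 m - ten times smaller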
Uniform Circular Motion (2.6)
For a body moving with constant speed v in a circle of radius r, the centripetal force is defined as F = mv²/r. | http://en.m.wikibooks.org/wiki/IB_Physics/Mechanics | 13 |
69 | What does inquiry teaching mean to you?
Which of the following scenarios would best describe your approach in presenting a unit on the gas laws?
At the start of the gas unit, students are given balloons, hot plates, beakers, graduated cylinders, thermometers, pressure probes, syringes, etc. They are instructed to be creative while they record observations and manipulate the items to see what they can discover about gases.
The teacher explains the relationships among temperature, volume, and pressure of a gas. Students then design their own experiments to investigate each relationship, being careful to select and control variables.
Prior to being taught the gas laws, students are provided instructions in order to collect data relating volume and temperature, volume and pressure, and pressure and temperature. Then, by studying and analyzing the data, they begin to discover and construct their own understanding of relationships among the variables.
After presenting the relationships among temperature, volume, and pressure of a gas, the teacher provides students with instructions for collecting gas data. Students perform the experiments and verify that the mathematical relationships among variables are indeed true.
Continuum of Teaching Styles (Traditional to Inquiry)
|Principle Learning Theory||Behaviorism||← →||Constructivism|
|Student Participation and Role||Passive; direction follower||← →||Active; problem solver|
|Student Accountability in Outcomes||Decreased||← →||Increased|
|Curriculum Goals||Product oriented||← →||Process oriented|
|Teacher's Role||Lecturer||← →||Guide or facilitator|
Continuum of Lesson Designs: Expository to Guided Inquiry to Open Inquiry
|Traditional Expository Lesson||Guided Inquiry||Open Inquiry|
|Inform: Teacher provides definition(s) and examples of new concept(s)||←
|Engage: Question posed by teachers or a teachers demonstration stimulates students’ affective domain; Often elicits prior knowledge from students||←
Students are curious about a topic after making personal observations and are motivated to continue investigation.
|Verify: Teacher provides lab question and materials for students to confirm the previously-defined concept. Data Analysis should verify concept.||←
|Explore: The teacher has a clear direction about what students should learn and provides students with a question to enable them to collect evidence. Students collect data with materials provided by teacher.||←
|Students formulate a question and then design an experiment to collect evidence.|
|Practice: Teacher provides similar problems and questions to re-enforce content knowledge. Re-teach if necessary.||←
|Explain: Students are guided through meaningful and thought-provoking questions to formulate an explanation from their evidence.||←
|Students independently formulate an explanation after summarizing the evidence.|
|Repeat the Inform-Verify-Practice cycle with a related concept.||←
|Elaborate: Students are guided to possible connections or expansions on the concept. Often a second 5E learning cycle is used to examine connections. May be used to introduce real-world applications.||←
Student independently examines other resources and forms links to other explanations or phenomena. Often designs and carries out further data collection and analysis.
|Evaluate: Students can provide or recognize concept definitions, examples, and solve algorithmic problems||←
|Evaluate: Students can communicate explanations, compare and contrast with other possible explanations, provide arguments in support||←
|Student conducts a critical analysis of investigations and modifies or expands as necessary.|
Frameworks for Inquiry: Overview of Project
Inquiry is an approach to learning where the learner constructs his own knowledge about how the world works by gathering data through any of the 5 senses and by making meaning of the interrelationships that exist. The learner gathers data to answer a question that is either posed by the learner or by the instructor. Inquiry learning is a process that is cyclical in nature and requires the student to be engaged as an active participant in his own learning process. The inquiry process emphasizes the learner’s process skill development as he interacts with science and the world around him. It is through these process skills that ANY learner at ANY age interacts with the natural and material world. These interactions lead the learner to new discoveries and understandings by forming mental models and frameworks that new knowledge and concepts can attach to, thereby strengthening and enlarging the individual's overall intelligence. The learner will not only gain science content knowledge with this program but will also use his process skills with increasing sophistication and improve “higher-order-thinking-skills” as he interacts with data through hands-on experiences.
High School Chemistry: An Inquiry Approach is specifically a Guided Inquiry approach that has been developed, improved upon and carefully sequenced with a specific goal in mind. This goal is for high school Chemistry students to derive the concepts contained within a two-year course sequence. The curriculum ensures the student success by starting off with a unit on simple measurement. This unit allows students to focus on how data is collected and analyzed so that meaning comes from the collected data. The course work then addresses chemistry concepts through a macroscopic lens and through the gas laws, progressing in a sequenced journey that parallels chemistry’s historical sequence of discovery and building on a student’s mental model of the particulate nature of matter. As the students work through the units, they continually revisit skills and concepts that are integrated into content that is presented later in the course.
Teachers who transform their teaching method and pedagogy with this guided inquiry style will find that they also will grow dramatically from the experience. Instructors will discover a deeper and broader understanding of the basic chemistry concepts and will learn new and better ways of relating course concepts to one another. Instructors will also discover that their guided inquiry approach to teaching has as much an impact on their students’ success in the course as their attitude and content knowledge. It is just as important for an instructor to aid in the development of formal reasoning patterns as it is to pass on course content. The classroom environment that encourages the use of data for conceptual development will cause the student to be more engaged and construct his own learning while increasing his affective domain, improving his energy and attitude toward learning. The student will internalize and integrate the information on his own to a much greater degree. This will transfer to other learning situations and will help improve success in other academic endeavors. The instructor and the learner will grow in content knowledge, higher order thinking skills, process skills and will become lifelong holistic learners.
How can the abstract science of chemistry be taught by Inquiry?
|“High School Chemistry: An Inquiry Approach” Table of Contents|
|Inquiry Title||Traditional Title|
|Unit 1||How are Units of Measurement Related to One Another?||Measurement & Density|
|Unit 2||How are the Pressure, Volume, and Temperature of a Gas Related to One Another?||The Combined Gas Laws|
|Unit 3||Is There a Smallest Piece of Matter or Can We Keep Cutting a Piece in Half Infinitely?||The Atom|
|Unit 4||How Can Matter be Classified According to its Composition?||Classification of Matter|
|Unit 5||How Can Particles be Counted by Weighing?||The Mole|
|Unit 6||What are the Patterns in the Chemical and Physical Properties of Elements?||Periodic Trends|
|Unit 7||What is the System Used to Name Chemical Compounds?||Nomenclature|
|Unit 8||What is the System Used Symbolize Chemical Change?||Equations and Reactions|
|Unit 9||What are the Relationships Among Reactant and Product Quantities in a Chemical Change?||Stoichiometry|
|Unit 10||What are the Relationships Among Reactant and Product Quantities in a Chemical Change involving a Gas?||Gas Stoichiometry|
|Unit 11||What is Specific Heat?||Heat Energy|
|Unit 12||What Model Describes the Structure of the Atom?||Atomic Structure|
|Unit 13||What Joins Atoms Together?||Bonding|
|Unit 14||What are the Properties of Homogeneous Mixtures?||Solutions|
|Unit 15||How do Protons Behave in Chemical Change?||Acids and Bases|
|Unit 16||How do Electrons Behave in Chemical Change?||Oxidation-Reduction|
Sample Lesson Measurement & Density
Comparing Expository (traditional lecture delivery) and our Inquiry Lesson on Density:
|Unit 1||Process Skills|
Traditional Expository Lesson on Density, using the Inform, Verify, and Practice Model
|Inform: Teacher provides definition of density: Density = mass/volume. Discussion follows with examples||√|
|Verify: Students conduct confirmation lab(s) to find density.||√||√||√|
|Practice: Use algorithmic formula to solve for the mass, volume, or density of different substances.||√|
|Unit 1||Process Skills|
Guided Inquiry Lesson on Density, using the 5-E Learning Cycle (Engage, Explore, Explain, Elaborate, Evaluate)
|Engage: Students reveal prior knowledge about measurement units. “Construct a list of ten units of measurement and explain the relationship among any three units in your list.”||√|
|Explore: Students develop a personal measuring unit (head circumference) and find the relationships between different units of measurements. “Construct a graph of your head measurements vs. accepted lengths in centimeters.”||√||√||√|
|Explain: “Draw a line of best fit and determine the line’s slope. What does the line’s equation tell you? Explain how the relationship between your “head” unit and the centimeter involves proportionality.” Students use the conversion ratio from their graph’s slope and apply dimensional analysis in problem solving||√||√||√||√|
|Elaborate: Measure the mass and volume of different substances and graph. Students find that one substance, like copper, will have a constant slope (density) that is different from another substance’s slope. Students then construct their own definition of density. (A short numerical sketch follows this table.)||√||√||√||√||√||√|
|Evaluate: Using proportional ratios, students solve for the mass, volume, or density in problems.|
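A small Python sketch of the Elaborate step above: with made-up mass and volume readings for copper samples, the best-fit slope through the origin recovers a constant density.

volumes_cm3 = [1.0, 2.0, 3.0, 4.0]          # hypothetical copper samples
masses_g    = [8.9, 17.8, 26.7, 35.6]

# Slope of the best-fit line through the origin: density = mass / volume
density = sum(m * v for m, v in zip(masses_g, volumes_cm3)) / sum(v * v for v in volumes_cm3)
print(round(density, 2))   # 8.9 g/cm^3, the constant slope students would identify for copper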
Sample Lesson: Unit 7 What is the System Used to Name Chemical Compounds? (Nomenclature)
Comparing Expository (traditional lecture delivery) and our Inquiry Lesson on Acid Nomenclature
|Unit 7||Process Skills|
Traditional Expository Lesson on Acid Nomenclature, using Inform, Verify, and Practice
|Inform: Teacher provides the rules for naming and writing formulas of binary acids and oxyacids.||√|
|Practice: Students do practice drills with names and formulas of acids.|
|Evaluation: Students name and write the formulas for some acids.|
|Unit 7: Lesson Progression||Process Skills|
Guided Inquiry Lesson on Acid Nomenclature, using the 5-E Learning Cycle
|Engage: “Are Scientists memory experts? Are you”? Students try their hand at memorizing the digits of pi, realizing that simple memorization is not going to work for a complex system.||√|
|Explore: Students compare and contrast the names and formulas of a list of binary and oxyacids, and organize by patterns.||√||√|
|Explain: Students construct their own set of rules for naming and writing acid formulas, and then create a flow chart.||√||√||√|
|Elaborate: Students apply their rules to a novel set of acids and adjust as needed||√|
|Evaluate: Students can modify nomenclature flow chart as needed, assign names and formulas to many acids||√||√||√|
Sample Lesson: Unit 13 What Joins Atoms Together? (Bonding)
Comparing Expository (traditional lecture delivery) and our Inquiry Lesson on Bonding
|Unit 13||Process Skills|
Traditional Expository Lesson on Bonding, using Inform, Verify, and Practice
|Inform: Teacher gives definitions, examples, and properties of ionic, covalent, and metallic bonding.|
|Verify: Lab activity observing the different properties of ionic, covalent, and metallic substances||√||√|
|Inform: Teacher defines Lewis Dot Diagrams for ions, covalent, and metallic substances. Teachers also define and give examples of orbital hybridization in various substances. Examine the VSEPR theory.||√|
|Practice: Students make Lewis Dot Diagrams and determine the molecular geometry with sample formulas|
|Evaluation: Students classify substances as having ionic, covalent, or metallic bonds, draw Lewis Dot Diagrams, and determine molecular geometry|
|Unit 13||Process Skills|
Guided Inquiry Lesson on Bonding, using 5-E Learning Cycle with repeating loops
|Engage: Students reveal prior knowledge when asked “What evidence can you provide that supports the idea that atoms bond with other atoms?” Students then examine and group common substances based on their physical appearances and have to justify their classification system to other groups.||√||√|
|Explore 1: Students calculate the difference of electronegativity values and the average of electronegativity in several bonds, graph, and look for patterns and common characteristics.||√||√||√|
|Explain: Students reflect on the nomenclature rules from a previous unit and the substances’ placement on the graph. Students derive a mental model of how electrons are behaving in the different bond groupings as related to electronegativity values.||√||√||√|
|Explore 2: Students investigate the conductivity of materials, and advance their models based on whether electrons are fixed (localized) or mobile (delocalized),||√||√||√||√|
|Explain 2: Students add valence electrons to their models and use the models to define and classify bonds as ionic, covalent, or metallic.||√||√|
|Explore 3: Students construct Lewis Dot diagrams for elements using the electron configurations (Unit 12) and valence electrons. Students look for the relationship to the elements’ placement on the Periodic Table. They then explore the possible bonding in F2, H2O, and NH3, NaCl, CaCl2, and Al2S3. Students watch a 2-minute video on a website to consider metallic bonds. Students must reflect back to the Explore activity to explain the conductivity observations based on bond types.||√||√||√|
|Elaborate 1: Student propose a formula for methane based on the Lewis dot formulas of C and H (usually predicted to be CH2) but then are confronted with combustion values that do not support the prediction. Students have to modify their model to get the correct stoichiometry. Students examine orbital hybridization with SiCl4 and BCl3.||√||√||√|
|Elaborate 2: Students use balloons (representing electron pairs capable of forming bonds around a central atom) to examine the shapes of molecules, like CH4, BeH2, and H2O.||√||√||√|
|Evaluate: Students determine the bonding type by examining electronegativity values and determine bond shapes||√||√|
Who were the...
Pioneers in Science Inquiry Education
Robert Karplus (1927-1990)
At age 32, Robert Karplus left his work as an outstanding theoretical physicist and pioneered the inquiry movement in science education. The work of Jean Piaget showed that children have to progress from concrete thinking to more formal thinking skills. Karplus was revolutionary in applying Jean Piaget’s work to developing new science curriculum. He worked to create curriculum where children construct, or build, their own mental models of science. He felt that effective teachers need to be aware of the reasoning patterns used by children. He developed a three-phase learning cycle, known as Exploration, Invention, and Discovery, emphasizing science study as hands-on experiences. In 1961 he started the Science Curriculum Improvement Study (SCIS) at the Lawrence Hall of Science on the University of California-Berkeley campus. His first education paper with J. M. Atkin in 1962 was entitled “Discovery or Invention?” He also created a film for SCIC in 1969 titled “Don’t tell me, I’ll find out.”
The following is a quote from his article “Science Teaching and the Development of Reasoning” in Journal of Research in Science Teaching, Vol. 14, No. 2, page 367:
“the formation of formal reasoning patterns should be made an important course objective (at least as important as the covering of a certain body of subject matter)”
Another pioneer in Inquiry Science Education is Rodger W. Bybee, director emeritus of Biological Sciences Curriculum Study (BSCS). He was executive director of the National Research Council’s Center for Science, Mathematics, and Engineering Education (CSMEE) in Washington, D.C. Between 1986 and 1995, he was associate director of BSCS. He participated in the development of the National Science Education Standards, and from 1993 to 1995 he chaired the content working group of that National Research Council project.
BSCS’s instructional model expanded on Karplus’ three phases of learning. The model, developed in the late 1980’s, has five phases: engage, explore, explain, elaborate, and evaluate.
The BSCS model used a backward design process described by Grant Wiggins and Jay McTighe in Understanding by Design (2005). The educator starts with a clear statement about what students should learn, based on the content standards. Next, the educator needs to determine what will serve as acceptable evidence of student achievement (evaluation stage). Then, a decision is made about what learning experiences (engage and explore) would most effectively develop students’ knowledge and understanding of the targeted content. Further refinement and activities may result (elaborate).
Montana Science Educators
Left to right: Dave Jones, Brett Taylor, Maureen Driscoll, Mark Cracolice,
Tony Favero, Karen Spencer, Paul Phillips
Dave Jones teaches Chemistry 1 and Chemistry 2 at Big Sky High School in Missoula, Montana. Dave has been teaching for 20 years after earning a Bachelor of Science degree in Zoology from Idaho State University in Pocatello, ID, his Montana State Teaching Certificate from the University of Montana in Missoula, MT, and his Master of Science in Chemistry from the University of Montana in Missoula, MT.
Dave’s education awards include the following:
- 2009 Gustav Ohaus Award for Excellence in Science Teaching
- 2009 Big Sky High School Outstanding Faculty Award (selected by colleagues)
- 2007 Toshiba Foundation of America Science Education Grant ($18K) for Air Quality Project
- 2006 National Science Teacher Association Vernier Technology Award
- 2005 Best Buy TEACH Award
- 2005 American Chemical Society Division of Chem. Ed. Northwest Region Teaching Excellence Award
- 2004 NSTA/Toyota TAPESTRY Grant ($10K)
- 2004 Toshiba Foundation of America Science Education Grant ($20K) for Asthma and Air Quality project
Brett Taylor teaches Chemistry and Advanced Science Research at Sentinel High School in Missoula, Montana. He has 30 years of teaching experience. Brett has a BS in Biology and a chemistry minor, and a Master's in Education, both from the University of California-Davis.
Maureen Driscoll teaches Chemistry and Advanced Placement Chemistry at Butte High School in Butte, Montana. She has been teaching for 26 years after getting her Bachelor's degree in Botany from the University of Montana. She earned her Master's of Science in Science Education from Montana State University in 1999. She was awarded the Butte Education Foundation’s Distinguished Educator Award in 2011.
Mark Cracolice, Ph. D., The University of Montana in Missoula, Montana.
Mark is the Chemistry Department chair and instructs General chemistry and graduate courses in chemical education. He has 17 years of experience.
Tony Favero teaches Chemistry, Physics, and Advanced Placement Chemistry at Hamilton High School in Hamilton, MT. Tony has 39 years of teaching experience. He got a Bachelor's degree in Chemistry from Lewis University in Illinois and his Master's degree in Chemistry from the University of Notre Dame. Tony was the 2006 Montana Recipient of the Siemens Award for Excellence in Advanced Placement Teaching in Science and Math. He was also awarded the 2008 Northwest Region American Chemical Society Award for Excellence in Teaching High School Chemistry
Karen Spencer earned a Bachelor of Science in Chemistry and a Bachelor of Arts in Spanish from Montana State University. She earned a Master of Arts in Chemistry from Washington State University. She has been teaching General Chemistry, Honors Chemistry, Advanced Chemistry, and Organic Chemistry at C. M. Russell High School in Great Falls, Montana for the last 33 years. Karen has received the following awards:
- American Chemical Society Northwest/Rocky Mountain Regional Award in High School Chemistry Teaching- 2000
- Montana Science Teacher Association Chemistry Teacher of the Year - 1997
- DuFresne Outstanding Educator Award- 2000
- National Honor Society Doctor of Service Award- 2003
- CMR High School Teacher of the Year- 1998
Paul Phillips teaches Chemistry I & II, and Physics at Capital High School in Helena, Montana. Paul has 22 years of teaching experience after receiving his Bachelor’s degree from Montana State University in Science. Paul has received the following teaching awards:
- American Chemical Society Northwest Region Chemistry Teacher of the Year- 2009
- Helena Education Foundation’s Distinguished Educator Award (twice)
- Helena Education Foundation’s Great Conversations about Great Teachers Award
- Capital High School National Honor Society’s Most Inspirational Teacher Award (three times)
- Who's Who of America's Teachers (three times)
What do the project data and results tell us about the effectiveness of inquiry teaching?
Do students get the chemistry content when using the High School Chemistry: An Inquiry Approach curriculum?
Our data suggests that they do. The American Chemical Society (ACS) California Chemistry Diagnostic Exam is a 44 question multiple-choice test designed to assess students’ chemistry content knowledge. Nine years of data have been collected. The first two years show data before the project began. The next three years indicate years in which the curriculum was used in pieces. The last four years indicate data for which the course was taught in its entirety using the High School Chemistry: An Inquiry Approach curriculum. For all nine years the average score was above the national average of 22, which suggests that the students are learning as much chemistry content in the High School Chemistry: An Inquiry Approach curriculum course as they were previously learning. The chart below displays these data.
Do students' thinking skills improve when their teachers use the High School Chemistry: An Inquiry Approach curriculum?
The data suggests they do. We pre- and post-tested our students using the Lawson Classroom Test of Science Reasoning (CTSR) and found significant gains in reasoning ability in groups that were exposed to the High School Chemistry: An Inquiry Approach curriculum. During the 2009-2010 school year we did a comparison study. Two teachers are long-term members of the Frameworks for Inquiry project. Both exclusively use the High School Chemistry: An Inquiry Approach curriculum materials developed by the project for their first-year chemistry class and both teach in Missoula high schools. The third teacher (the control) taught first-year chemistry for 25 years and does not use the High School Chemistry: An Inquiry Approach curriculum materials. All three groups of students in these classrooms took an online version of the CTSR in the fall of 2009 (pretest), and again in the spring 2010 (posttest).
The results of the Pre/Post testing are remarkable. Students in the High School Chemistry: An Inquiry Approach curriculum groups (N=57), (N=47), made an average 1.92 and 1.20 point gains respectively. This represents 12.8% and 8.1% increases in their average CTSR scores. In contrast the control (N=35) group made an average gain of 0.37 points representing a 2.5% increase. The average normalized gain (ANG)–a ratio of the percent gain to the maximum possible percent gain–was remarkably different also. The High School Chemistry: An Inquiry Approach curriculum groups ANG results were .519 and .298, respectively, indicating a medium effect on the students’ science reasoning skills. The control group ANG of .0657 indicates the course had no effect on students’ science reasoning skills. The results are summarized in the chart below.
During the 2010-11 school year we did another study involving 3 teachers. Again two of the teachers are long-term members of the Frameworks for Inquiry project, and the third is not. All three teachers used the High School Chemistry: An Inquiry Approach curriculum exclusively for their first year chemistry classes. Again, all three groups of students took an online version of the CTSR in the fall of 2010 (pretest), and again in the spring of 2011 (posttest). Students in the three groups (N=79), (N=100), and (N=19) made average gains of 2.03, 1.11, and 1.80 points respectively. This represents increases of 22.9%, 12.4%, and 19.3% in their average CTSR scores. The ANG results were .330, .182, and .319, indicating that the curriculum had a medium effect on the students’ science reasoning skills. These results are significant because the N=19 group was taught by a non-project teacher who had very similar results in reasoning skills gains. The results are also summarized in the chart below.
The main point regarding this data is that curriculum materials developed as inquiry-based can have a profound effect on students’ science reasoning skills.
|Thinking Skills Gains Measured using the Classroom Test of Scientific Reasoning (CTSR)|
|Teacher||Project Teacher A||Project Teacher B||Non-project Teacher C||Project Teacher A||Project Teacher B||Non-project Teacher D|
|Control vs Treatment||Treatment||Treatment||Control||Treatment||Treatment||Treatment|
|CTSR Pretest mean (15 items)||11.3||11.0||9.3||8.85||8.92||9.35|
|CTSR Posttest mean (15 items)||13.3||12.2||9.7||10.87||10.03||11.15|
|Average normalized gain||0.519||0.298||0.0656||0.330||0.182||0.319|
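As a rough check of the table above, the average normalized gain can be recomputed from the 2010-11 class means using the definition given earlier (gain divided by the maximum possible gain); the tiny differences from the published figures come from rounding of the reported gains. A minimal Python sketch:

def average_normalized_gain(pre_mean, post_mean, max_score=15):
    # ANG = (post - pre) / (maximum possible score - pre), computed here from class means
    return (post_mean - pre_mean) / (max_score - pre_mean)

print(round(average_normalized_gain(8.85, 10.87), 3))   # 0.328  (reported 0.330)
print(round(average_normalized_gain(8.92, 10.03), 3))   # 0.183  (reported 0.182)
print(round(average_normalized_gain(9.35, 11.15), 3))   # 0.319  (reported 0.319)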
Data from the first year of implementation of the project supports the idea that even a small infusion of inquiry-based learning can impact student thinking skills. A preliminary study used Honors Chemistry students as the control group. Students in Honors Chemistry have stronger math skills than General Chemistry students. The treatment group, General Chemistry students, was given early versions of the first six units of High School Chemistry: An Inquiry Approach curriculum as part of their studies. The treatment group showed higher ANG than the control, despite their lower math abilities.
|Classroom Test of Scientific Reasoning (CTSR)
Same School / Same Teacher
|Control (Honors Chem)||Treatment (Gen Chem)|
|CTSR Pretest mean (13 items)||7.56||5.86|
|CTSR Posttest mean (13 items)||8.02||7.14|
|Average Normalized Gain (ANG)||0.085||0.179| | http://opi.mt.gov/Curriculum/MSP/BIGSKY/index.html?tpm=1_5 | 13 |
55 | Parametric Equation of a Circle
A circle can be defined as the locus of all points that satisfy the equations
x = r cos(t) y = r sin(t)
where x, y are the coordinates of any point on the circle, r is the radius of the circle, and t is the parameter - the angle subtended by the point at the circle's center.
Coordinates of a point on a circle
Looking at the figure above, point P is on the circle at a fixed distance r (the radius) from the center.
The point P subtends an angle t to the positive x-axis. Click 'reset' and note this angle initially has a measure of 40°.
Using trigonometry, we can find the coordinates of P from the right triangle shown. In this triangle the radius r is the hypotenuse.
The x coordinate is therefore r cos(t) and the y coordinate is r sin(t)
To see why this is, recall that in a right triangle, the sine of an angle is the opposite side divided by the hypotenuse.
In the applet above, the side opposite t has a length of y, the y coordinate of P. The hypotenuse is the radius r. Therefore
sin(t) = y/r
Multiply both sides by r:
y = r sin(t)
By similar means we find that
x = r cos(t)
The parametric equation of a circle
From the above we can find the coordinates of any point on the circle if we know the radius and the subtended angle.
So in general we can say that a circle centered at the origin, with radius r, is the locus of all points that satisfy the equations
x = r cos(t)
y = r sin(t)
for all values of t.
It also follows that any point not on the circle does not satisfy this pair of equations.
If we have a circle of radius 20 with its center at the origin, the circle can be described by the pair of equations
x = 20 cos(t)
y = 20 sin(t)
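A small Python sketch that evaluates this pair of equations at a few sample angles (the angles are chosen arbitrarily for illustration):

import math

r = 20
for degrees in (0, 40, 90, 180):
    t = math.radians(degrees)
    x = r * math.cos(t)
    y = r * math.sin(t)
    print(degrees, round(x, 2), round(y, 2))
# 0 20.0 0.0
# 40 15.32 12.86
# 90 0.0 20.0
# 180 -20.0 0.0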
What if the circle center is not at the origin?
Then we just add or subtract fixed amounts to the x and y coordinates. If we let h and k be the coordinates of the center of the circle,
we simply add them to the x and y coordinates in the equations, which then become:
x = h + r cos(t)
y = k + r sin(t)
This is really just translating ("moving") the circle from the origin to its proper location. In the figure above, drag the center point C to see this.
What does 'parametric' mean?
In the above equations, the angle t (theta) is called a 'parameter'. This is a variable that appears in a system of equations that can take on any value (unless limited explicitly) but has the same value everywhere it appears. A parameter's values are not plotted on an axis.
Algorithm for drawing circles
This form of defining a circle is very useful in computer algorithms that draw circles and ellipses.
In fact, all the circles and ellipses in the applets on this site are drawn using this equation form.
For more on this see An Algorithm for Drawing Circles.
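The approach can be sketched in a few lines of Python; this is only an illustration of the idea, not the site's actual drawing code, and the center (3, -2), radius and step count are arbitrary:

import math

def circle_points(h, k, r, steps=100):
    # Sample the parameter t from 0 to 2*pi and return points on the circle
    pts = []
    for i in range(steps + 1):
        t = 2 * math.pi * i / steps
        pts.append((h + r * math.cos(t), k + r * math.sin(t)))
    return pts

# Joining successive points with short line segments draws the circle (or an ellipse,
# if different radii are used for the x and y terms).
outline = circle_points(h=3, k=-2, r=20)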
Other forms of the equation
Using the Pythagorean theorem to solve the triangle in the figure above, we get the more common form of the equation of a circle: x² + y² = r².
For more see Basic equation of a circle
and General equation of a circle.
To demonstrate that these forms are equivalent, consider the figure on the right. In the right triangle, we can see that
cos(t) = x/r and sin(t) = y/r
Recall the trig identity
sin²(t) + cos²(t) = 1
Substitute x/r and y/r into the identity:
(y/r)² + (x/r)² = 1
Remove the parentheses:
y²/r² + x²/r² = 1
Multiply through by r²:
x² + y² = r²
Things to try
- In the above applet click 'reset', and 'hide details'. Uncheck 'freeze radius'.
- Drag P and C to make a new circle at a new center location.
- Write the equations of the circle in parametric form
- Click "show details" to check your answers.
| http://www.mathopenref.com/coordparamcircle.html | 13 |
67 | Excel IF function is a logical function of MS Excel which is used to return one of two values based on a condition you specify. If the specified condition is met, it will return one value; otherwise it will return another value. IF is one of the most widely used and versatile formulas of MS Excel and other spreadsheet programs.
Although the Excel IF function looks simple and straightforward, it is quite powerful and can save a lot of your time if used properly at the right place. In this article, I will start from the basics and go to the advanced level with nested IF functions, IF formulas in arrays, IF with 'AND' and 'OR', and IF combined with VLOOKUP. We will discuss it with real-life examples which will help you to understand it better.
Let’s start with a simple example. Say you have a list of your customers with gender and you want to give away the holiday gifts to all of them. You have separate gifts for men and women – pen sets for men and purse for ladies. If you use If function, in no time you can get the number of pen sets and ladies purse required. Here, If function will look at a simple condition “Is the person male”? If the answer is “TRUE”, it will return pen set otherwise purse.
Excel If function Syntax
IF(logical_test, [value_if_true], [value_if_false])
To make your life simple in understanding this syntax, the easier way to interpret it is as follows:
IF(Check something, If true do this, Otherwise do that)
For our example, it would be:
IF(Check if the person is male, Return pen set, Return purse)
Explanation of If function formula arguments
- logical_test: An expression or value which is tested to see whether it is true or false. This field is required. Suppose we want to determine if the given number in cell A2 is negative; the expression will be A2<0. The syntax for determining whether the value is positive or negative should look like this: =IF(A2<0, "Negative", "Positive")
You can use any of the following comparison operators in the expression for logical_test argument:
|= (equal to)||A1=B1|
|> (greater than)||A1>B1|
|< (less than)||A1<B1|
|>= (greater than or equal to)||A1>=B1|
|<= (less than or equal to)||A1<=B1|
|<> (not equal to)||A1<>B1|
- value_if_true: The value which you want to display if the logical_test is true. This field is optional. In this example, if the cell value is -5 and we want to display the word 'negative', we need to put "Negative" in this argument value. Note here that the text is enclosed in double quotes. This is required for strings used in Excel formulae. Since this argument is optional, you can omit it by putting just a comma after the logical_test argument, like this: =IF(A2<0, , "Positive")
In this case, if the logical_test is true, if function will return 0 (zero).
- Value_if_false: The value which you want to display, if the logical_test is False. Similar to value_if_true argument, this field is optional. Here also, you can omit the argument by putting just a comma after value_if_true argument. As you may guess, if function will return a 0 (zero) in that case.
How to use Excel If function
This paragraph is for starters who are not comfortable with using excel functions. If you know how to use basic functions like SUM, AVERAGE, etc.; you can skip it.
Suppose, as a shop owner, we are interested to give holiday discounts based on the total order value. If the value is less than $100, the discount is 8%. For $100 and above, the discount is 12%. Based on the value entered in a cell, we will use IF function to determine the discount percentage.
- Open a new excel workbook.
- In cell B4, we will enter the total order value. Let’s enter 80 here at this time.
- We want to display the percentage discount in cell C4. Click on cell C4 to select it.
- Click on “insert item” icon just before the formula bar above worksheet to open the function dialog box.
- You can also find this “insert item” icon in the Formulas tab.
- Select the category “logical”.
- Select “IF” from list of available functions and click OK.
- The function argument dialog box will appear which will ask you to enter the values for the three arguments i.e. logical_test, value_if_true and value_if_false.
- Select the logical_test text box and click on cell B4, then type < (less than sign) followed by 100.
- In the value_if_true argument, enter 8.
- In the value_if_false argument, enter 12.
- Click on OK.
- Since in this case 80 is less than 100, 8 should appear in cell C4.
- Change the value in cell B4 to 110, the value in cell C4 will change to 12.
- If you wish to see the IF formula used, click on cell C4 and see the formula bar. You will find the following formula here: =IF(B4<100,8,12)
Example of omitted argument
Let’s take the same example as above. The only change we are considering here is that if the total order value is less than $100, there will be no discount. So, for the order value less than $100, 0 should appear in the cell C4.
To achieve this functionality, we just need to omit the value_if_true argument. Now, the formula will change as follows: =IF(B4<100, , 12)
Try putting 70 in cell B4; the value displayed in cell C4 will be 0 (zero). If you don't want 0 to appear in the cell, put "" (empty text) in the value_if_true argument, like this: =IF(B4<100, "", 12)
Nested If Functions
Now, as you have fully understood the basics, let’s move on to some advanced topics. Simple IF function is good if you have only one logical test to perform. However, where multiple logical tests are required, we can use nested IF function.
To understand nested IF functions, let’s create an income tax calculator. Suppose, income tax is calculated based on the total income as per following table:
|Total Income||Income Tax (Percentage of Income)|
|Less than Rs. 300000||5|
|Rs. 300000 – Rs. 600000||10|
|More than Rs. 600000||20|
So, how do you determine the income tax percentage? If you think like me, you will first determine if the total income is less than Rs. 300000. If yes, the income tax percentage is 5; otherwise again you see if the total income is less than Rs. 600000. If yes, the income tax percentage is 10; otherwise it is 20, Right?
Great..! Nested IF function works in the similar fashion.
We will use the same worksheet, enter our total income in cell B4 and get the income tax percentage value in cell C4.
First let’s test, if total income is less than Rs. 300000 or not? If True, return 5; otherwise return “Yet to determine”. In cell C4, type the following formula (or you can use function dialog box).
=IF(B4<300000,5,”Yet to determine “)
Now, in place of “Yet to determine”, we will place another IF function to test if the income is less than Rs. 600000? If True, return 10; otherwise return 20.
So, we replace “Yet to determine” with the following formula: IF(B4<600000,10,20)
Our final formula for calculating the income tax percentage is a nested IF function:
=IF(B4<300000, 5, IF(B4<600000, 10, 20))
Perfect. Now, test it with a few values to see what percentage it returns.
Syntax for nested IF function
IF(logical_test1, [value_if_true1], IF( logical_test2, [value_if_true2], [value_if_false2]))
You can nest up to 64 IF functions as value_if_true and value_if_false arguments as per your requirement.
Let’s increase the tax slab in our example and use multiple Nested IF functions.
|Total Income||Income Tax (Percentage of Income)|
|Less than Rs. 300000||5|
|Rs. 300000 – Rs. 600000||10|
|Rs. 600000 – Rs. 1000000||20|
|More than Rs. 1000000||30|
This time, we will straightaway determine the income tax value (not the percentage). Try it yourself first and then match your answer with the following formula.
=IF(B4<300000, B4*0.05, IF(B4<600000, B4*0.1, IF(B4<1000000,B4*0.2,B4*0.3)))
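If you want to sanity-check the nested logic outside Excel, here is a small Python sketch of the same tax slabs (the test incomes are made up):

def income_tax(total_income):
    # Mirrors the nested IF: 5% below 300000, 10% below 600000,
    # 20% below 1000000, and 30% otherwise
    if total_income < 300000:
        return total_income * 0.05
    elif total_income < 600000:
        return total_income * 0.10
    elif total_income < 1000000:
        return total_income * 0.20
    return total_income * 0.30

print(income_tax(250000))   # 12500.0
print(income_tax(750000))   # 150000.0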
IF function with Boolean (AND/OR) function
Combining excel Boolean functions like ‘AND’ and ‘OR’ makes IF function more powerful. Suppose you need to check more than 2 conditions to see if both are true, you can very well use nested IF function. But a better way to do this is to use IF function combined with ‘AND’ function.
How does AND function work?
AND function will return TRUE only if all the parameters are true.
=AND(10<20, 20<30, 30<50) will return TRUE
=AND(10>20, 20<30, 30<50) will return FALSE
IF Function combined with AND Function
In the previous example, let’s assume that for senior citizens (above 60 years old) with total income less than Rs. 500000, there will be no income tax; otherwise it will be 15% flat.
We will enter the total income in cell B4 and age of the person in cell B5 and get the percentage in cell C4. So, the formula in cell C4 is:
=IF(AND(B4<500000, B5>60), 0, 15)
Simple and straightforward, isn’t it?
IF Function combined with OR Function
OR function is equally powerful when used with IF function. As you can understand; OR function will return TRUE if any one or more of the parameters are true.
So, the following formula
=OR(10>20, 20<30, 30<50) will return TRUE
Suppose, in a bus reservation, there is a discount of 20% for kids (less than 10 years) and senior citizens (above 60 years); the formula can be:
=IF(OR(B4<10, B4>60), 20, 0)
The OR function in the formula will look into cell B4 to determine if it is less than 10 OR greater than 60. If either of the two is true, it will return TRUE. IF, in turn, will return 20. Pretty simple!
IF function in array
If you pass an array in any of the arguments in IF function, each element of the array is evaluated while executing the formula.
There is a group of functions COUNTIF, AVERAGEIF and SUMIF where the function first checks the array or range that meets the specific criteria and then do calculations like count, average or sum.
Suppose we want to get the average number of sales for 10 days excluding holidays where the sales number is 0. Here, we will use AVERAGEIF function.
The syntax for this group of functions is as follows: COUNTIF(range, criteria), SUMIF(range, criteria, [sum_range]) and AVERAGEIF(range, criteria, [average_range]).
range: Required field. One or more cells to count, average or sum, including numbers or names, arrays, or references that contain numbers. Blank and text values are not taken into account.
criteria: Required field. A number, expression, cell reference, or text string that defines which cells are to be taken into account. For example, criteria can be expressed as 20, ">20", B4, "books", etc.
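For readers who like to see the logic spelled out, here is a small Python sketch of the AVERAGEIF idea from the sales example above - average the daily sales while ignoring the zero (holiday) days; the figures are invented for illustration:

daily_sales = [12, 0, 15, 9, 0, 14, 11, 0, 13, 10]   # hypothetical 10 days; 0 = holiday

working_days = [s for s in daily_sales if s > 0]      # the criteria: ">0"
average_excluding_holidays = sum(working_days) / len(working_days)
print(average_excluding_holidays)                     # 12.0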
VLOOKUP and IF Function
As you know now, for testing multiple conditions, we can use nested IF functions. However, if the number of conditions is huge, we should better use VLOOKUP function. VLOOKUP is much more powerful and can be very handy in complex conditions. Primary and basic function of VLOOKUP is to search any value from a table with data and return a value from a different column from the same row. The VLOOKUP function is a little complex and I have a written a separate article on excel VLOOKUP.
You may like to go through my Excel VLOOKUP Tutorial.
I hope you find this tutorial on Excel IF function useful. If you have any question, suggestion or feedback, please feel free to write in the comments section below. | http://www.webtutorialplus.com/excel-if-function/ | 13 |
88 | This is part seven of the Stata for Researchers series. For a list of topics covered by this series, see the Introduction. If you're new to Stata we highly recommend reading the articles in order.
Combining two data sets is a common data management task, and one that's very easy to carry out. However, it's also very easy to get wrong. Before combining data sets be sure you understand the structure of both data sets and the logic of the way you're combining them. Otherwise you can end up with a data set that you think is ready for analysis, but is really utter nonsense. Stata tries to make sure you've thought through what you're doing, but can't tell you what makes sense and what doesn't.
Stata always works with one data set at a time, so you will always be combining the data set in memory (the master data set) with another data set on disk (called the using data set, for reasons that will be clear when you see the syntax).
Stata calls it appending when you add the observations from the using data set to the master data set. Appending makes sense when the observations in both data sets represent the same kind of thing, but not the same things. For example, you might append a data set of people from Wisconsin to a data set of people from Illinois. Both data sets should have the same (or close to the same) variables, with the same names. If a variable only appears in one data set, observations from the other data set will be given missing values for that variable.
The syntax is to carry out an append is simple: load the the master data set and then type:
append using dataset
where dataset is the name of the data set you want to append.
Stata calls it merging when observations from the two data sets are combined. There are, in theory, four kinds of merges:
In a one-to-one merge, one observation from the master data set is combined with one observation from the using data set. A one-to-one merge makes sense when the observations in both data sets describe the same things, but have different information about them. For example, you might merge the answers people gave in wave one of a survey with the answers the same people gave in wave two of the survey.
In a one-to-many or many-to-one merge, one observation from one data set is combined with many observations from the other (the difference between one-to-many and many-to-one being whether the master data set has the "many" or the using data set). These merges make sense when you have hierarchical data, and one data set contains information about the level one units while the other contains information about the level two units. For example, you might merge information about households with information about the individuals who live in those households.
In principle there are also many-to-many merges. In practice they are rarely if ever useful. Fortunately Stata will no longer let you do one by mistake.
In all the merges we'll discuss, Stata combines observations that have the same value of a key variable or variables, typically an ID. In a one-to-many or many-to-one merge, it is the identifier for the level two units that is the key variable (e.g. household ID, not individual ID). It's very important that the key variable have the same format in both data sets.
If an observation in one data set does not match with an observation in the other, it will be given missing values for the variables from the other data set. Since the viability of a research project often depends on how many observations actually merge (e.g. how many people from wave one of the survey could be found in wave two?) Stata gives you tools for figuring out how many observations actually merged and for examining those that didn't.
If a variable exists in both data sets, the values from the master data set will be kept and the values from the using data set will be discarded. Occasionally this is what you want, but it's more likely to be an error. In general you should set up your data such that the only variables the files to be merged have in common are the key variables.
The syntax for a merge is:
merge type keyvars using dataset
The type must be 1:1 (one-to-one), 1:m (one-to many), m:1 (many-to-one) or m:m (many to many); keyvars is the key variable or variables; and dataset is the name of the data set you want to merge. Stata can figure out what type of merge you're doing by looking at the data sets and key variables, but as of Stata 11 you must specify what kind you think you're doing so Stata can stop you if you're wrong.
The examples include several files containing fictional student information from 2007. scores.dta contains the students' scores on a standardized test, demographics.dta contains demographic information about them, and teachers.dta contains information on their teachers. Take a moment to look at each file, then load the test scores:
use scores, clear
In this data set, each observation represents a student. browse and you'll see that you have a student ID (id), a teacher ID (teacher) and a score for each.
Your first task is to add in the demographic information. In demographics.dta each observation also represents a student, with the variables being id and race. Thus this is a job for a one-to-one merge and the key variable is id.
merge 1:1 id using demographics
Stata will report that all 60 observations matched. It will also create a variable called _merge. A one in _merge means an observation only came from the master data set; a two means it only came from the using data set; and a three means an observation successfully matched and thus came from both. In this case we see that all observations matched and thus have _merge equal to three, so there's no need to keep the variable. In fact we need to drop it (or rename it) before doing any further merges:
drop _merge
Next add information about teachers. In teachers.dta each observation represents a teacher, and each teacher has many students. That makes this a many-to-one merge (since the many students are currently in memory and the one teacher is in the using data set). The key variable is not id, since that refers to the students, but teacher:
merge m:1 teacher using teachers
Again, all 60 observations merged properly, so you can drop _merge.
Now suppose you were tracking these students for multiple years. The data set panel2007.dta contains a simplified version of this data set: just id and score. The data set panel2008.dta has the same variables for a different year. How would you combine them?
The proper way to combine them depends on what data structure you want. This is hierarchical data where a level two unit is a student and a level one unit is a student's data for a particular year. Thus it can be represented in wide form (one observation per student), or in long form (one observation per student per year).
To put the data in long form simply stack the two data sets using append. However, you'll need to know which year an observation represents. To do that, add a year variable to each data set, with the value 2007 for the 2007 data and the value 2008 for the 2008 data. Assuming the 2008 data is in memory and the example file panel2007_append already contains its year variable, you can do so with the following code:
gen year=2008
append using panel2007_append
To put the data in wide form, do a one-to-one merge with id as the key variable. But first you need to change the variable names. Recall that in wide form, it is the variable names that tell you which level one unit you're talking about. So instead of score, you'll need score2007 and score2008. (The first rename below is applied to the 2007 data, which is saved as panel2007_merge; the second is applied to the 2008 data in memory before the merge.)
ren score score2007
ren score score2008
merge 1:1 id using panel2007_merge
This time you'll see that one observation does not match. You can see which one by typing:
l if _merge==2
Student number 55 was not in panel2008 and thus couldn't be matched. As a result we have no idea what his or her test score was in 2008. Unfortunately this is very common.
If your entire research agenda depends on having both test scores, you may need to drop observations that don't exist in both data sets. You can do so at this point by typing:
drop if _merge!=3
You can also specify which observations should be kept directly in the merge command:
merge 1:1 id using panel2007_merge, keep(match)
keep(match) means only keep observations which match. The alternatives are master and using, and you can list more than one. For example, to keep observations which match and observations that only come from the master data set, while throwing away observations that only come from the using data set, you'd say keep(master match).
Merges will uncover all sorts of problems with your data set (and if they're not fixed merging will introduce new ones). Here are a few common ones:
While Stata will happily match different kinds of numbers (ints and floats, for example) it can't match numbers and strings. IDs can be stored as either (as long as you choose a numeric type that has enough precision--see Working with Data) and it's not uncommon to find that your data sets store the ID in different ways. In that case it's usually best to convert the numbers to strings:
gen idString=string(id)
drop id
rename idString id
The string() function takes a number and converts it to a string. You can give it a second argument containing the format in which the number should be "written" if needed.
Duplicate IDs will turn what should be a one-to-one merge into some other kind--quite likely one that doesn't make sense.
One possibility is that you simply misunderstood the data sets. If you think you're merging household data and it turns out that one file actually contains individuals, then duplicate household IDs in that file do not indicate a problem. Just be glad the error message brought the true structure of the file to your attention.
Another source of duplicates is round-off error due to saving the IDs in an inappropriate variable type.
But the most common reason for duplicate IDs is errors in the data. These will have to be resolved in some way before merging. You can see how many problems you have with the duplicates command:
duplicates report id
This will tell you how many observations have the same value of id. For further examination, you can create a variable that tells you how many copies you have:
bysort id: gen copies=_N
Then you can look at just the problem observations with:
browse if copies>1
If it turns out that in the copies the entire observation is duplicated (e.g. people are in the data set more than once for some reason) you can delete the extra observations with:
duplicates drop id, force
(the force option reminds you you're about to change your data). However, if the observations are different (e.g. you've got different people with the same ID) and if merging is vital to your research agenda, you may need to drop all observations with duplicate IDs. (If file one has two person ones, which should be merged with person one in file two?) You can do so with:
drop if copies>1
For purposes of merging, missing values are treated just like any other value. If you've got observations with missing IDs you'll probably have to drop them.
Last Revised: 8/22/2011 | http://www.ssc.wisc.edu/sscc/pubs/sfr-combine.htm | 13 |
98 | Heat & Thermodynamics (Thermometry)
Scales of Temperature:
There are three common scales of temperature, which are given here.
The Celsius Scale:
This scale was devised by Anders Celsius in the year 1710. The interval between the lower fixed point and the upper fixed point is divided into 100 equal parts. Each division of the scale is called one degree centigrade or one degree Celsius (1°C). At normal pressure, the melting point of ice is 0°C. This is the lower fixed point of the Celsius scale. At normal pressure, the boiling point of water is 100°C. This is the upper fixed point of the Celsius scale.
The Fahrenheit Scale:
This scale was devised by Gabriel Fahrenheit in the year 1717. The interval between the lower fixed point and the upper fixed point is divided into 180 equal parts. Each division of this scale is called one degree Fahrenheit (1°F). On this scale, the melting point of ice at normal pressure is 32°F. This is the lower fixed point. The boiling point of water at normal pressure is taken as 212°F. This is the upper fixed point.
The Reaumur Scale:
This scale was devised by R. A. Reaumur in the year 1730.
The interval between the lower and the upper fixed points is divided into 80 equal parts. Each division is called one degree Reaumur (1°R). On this scale, the melting point of ice at normal pressure is 0°R. This is the lower fixed point. The boiling point of water at normal pressure is 80°R. This is the upper fixed point.
Conversion of Temperature:
In order to convert temperature from one scale to another, the following relation is used: C/100 = (F - 32)/180 = R/80, where C, F and R are the readings on the Celsius, Fahrenheit and Reaumur scales respectively.
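A small Python sketch of these conversions (the function names are just illustrative):

def celsius_to_fahrenheit(c):
    return c * 180 / 100 + 32      # from C/100 = (F - 32)/180

def celsius_to_reaumur(c):
    return c * 80 / 100            # from C/100 = R/80

print(celsius_to_fahrenheit(100))  # 212.0
print(celsius_to_fahrenheit(0))    # 32.0
print(celsius_to_reaumur(50))      # 40.0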
Constant Volume Gas Thermometer:
Suppose the pressure of the gas is p0 when the bulb is placed in melting ice (ice point) and it is p100 when the bulb is placed in a steam bath (steam point). We assign 0°C to the temperature of the ice point and 100°C to the steam point. The temperature t corresponding to a pressure p of the gas is defined by
t = (p - p0)/(p100 - p0) × 100 °C
Constant Pressure Thermometer
Volume of the bulb = V
Volume of the mercury taken out = v'
Temperature of the ice bath = T0
Temperature of the heat bath = T
Platinum Resistance Thermometer:
Resistance at the temperature t = Rt
Resistance at the ice point = R0
Resistance at the steam point = R100
On the platinum resistance scale, the temperature is taken as t = (Rt - R0)/(R100 - R0) × 100 °C.
The fact that the electrical resistance of a metal wire increases gradually and uniformly over a fairly wide range of temperature has been made use of in electrical resistance thermometers. The variation of the resistance of a metal wire with temperature may be represented by the following approximate relation:
Rt = R0(1 + at). Here, Rt is the resistance at t°C, R0
is the resistance at 0°C and a is the temperature coefficient of resistance. The value of a depends upon the nature of material of the wire.
Comparison of Different Thermometers:
i.) Platinum Resistance:
can be used between -180°C and 1150°C, accurate and has a wide range, not suitable for varying temperatures, best thermometer for small steady temperature differences, used as standard between -183°C and 630°C.
ii.) Thermocouple:
Can be used between −250°C and 1150°C; temperature is measured in terms of the e.m.f. (electromotive force) between junctions of different metals held at different temperatures; fast response because of low heat capacity; has a wide range; can be used for remote reading using long leads; accuracy is lost if the e.m.f. is measured with a moving-coil voltmeter; the best thermometer for varying temperatures; can be made direct reading by calibrating the galvanometer; used as a standard between 630°C and 1063°C.
iii.) Radiation Pyrometer:
Used for temperatures above 1000°C; the colour of the radiation emitted by the hot body is used as the thermometric property; does not come into contact with the body whose temperature is measured; it is cumbersome and does not give a direct reading; it can be used only for high temperatures; used as a standard above 1063°C.
Heat is a form of energy which flows between two bodies due to difference in their temperatures.
Heat is the cause and temperature is one of the several effects.
Heat energy is a transient form of energy. An isolated body at a given temperature does not possess heat energy, but it does possess internal energy.
Heat is a scalar quantity. Units of heat:
a) The SI unit is the joule (J). b) The practical (CGS) unit of heat is the calorie; 1 cal = 4.186 J.
Dimensional formula: [ML²T⁻²].
Calorie: the amount of heat required to raise the temperature of 1 g of water through 1°C.
Mean calorie (or standard calorie): the amount of heat required to raise the temperature of 1 g of water from 14.5°C to 15.5°C.
Specific heat: the amount of heat required to raise the temperature of unit mass of a substance through 1°C is called its specific heat (s).
Specific heat s = Q/(m Δθ), where Q is the heat supplied to a mass m and Δθ is the rise in temperature.
Dimensional formula: [M⁰L²T⁻²K⁻¹]. SI unit: J kg⁻¹ K⁻¹.
CGS unit: cal g⁻¹ °C⁻¹.
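A small Python sketch of the calorimetry relation Q = m s Δθ (the sample numbers are illustrative):

```python
# Heat required to warm a body: Q = m * s * dT.
def heat_required(mass_kg, specific_heat, delta_T):
    return mass_kg * specific_heat * delta_T

# Warm 0.5 kg of water (s ≈ 4186 J/(kg K)) from 20 °C to 80 °C:
print(heat_required(0.5, 4186, 60))  # 125580 J ≈ 1.26 × 10⁵ J
```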
a.) Specific heat for solids and liquids depends upon nature of substance and does not depend upon mass, volume and heat given to the substance. (It is constant for given substance)
b.) Specific heat for gases depends upon degree of freedom, type of process.
c.) Specific heat slightly increases with increase of temperature.
d.) Among solids, liquids and gases, specific heat is highest for gases; of all gases, H2 has the highest specific heat.
e.) In liquids specific heat is minimum for mercury. Hence it is used as thermometric liquid.
f.) Specific heat is known as thermal inertia of the body.
The body having higher specific heat cools very slowly and gets heated up very slowly. The body having lower specific heat cools very fast and gets heated up very fast.
i.) Among liquids specific heat is maximum for water. Due to it, water absorbs (or) gives out more heat than any other substance for the same change in temperature. Hence it is used in radiators and hot water bags.
. Latent heat :
The amount of heat energy absorbed or liberated by unit mass of a substance when it undergoes a change of state at constant temperature is called latent heat, or heat of transformation (L).
Units: cal/g, kcal/kg, J/kg.
During the change of state, heat energy supplied is used up in increasing the distance between the molecules, i.e. to increase the P.E. of molecule without increasing the KE and hence the temperature does not change.
The quantity of heat, taken in (or) given out by unit mass of a substance when it changes from solid to liquid state (or) liquid to solid state at constant temperature is known as latent heat of fusion.
Latent heat of fusion of ice = 80 cal/g = 0.336 × 10⁶ J/kg
The quantity of heat taken in (or) given out by unit mass of substance when it changes from liquid to vapour state (or) vapour state to liquid state at constant temperature is known as latent heat of vaporization.
Latent heat of vaporization of water (latent heat of steam) = 540 cal/g = 2.26 × 10⁶ J/kg
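Combining Q = m s Δθ with Q = m L gives the total heat needed to turn ice at 0°C into steam at 100°C; a sketch in Python using the constants quoted above:

```python
# Total heat to convert ice at 0 °C into steam at 100 °C.
L_FUSION = 0.336e6       # J/kg
L_VAPORIZATION = 2.26e6  # J/kg
S_WATER = 4186           # J/(kg K)

def ice_to_steam(mass_kg):
    melt = mass_kg * L_FUSION
    warm = mass_kg * S_WATER * 100        # heat the melt water 0 °C -> 100 °C
    boil = mass_kg * L_VAPORIZATION
    return melt + warm + boil

print(ice_to_steam(1.0))  # ≈ 3.0 × 10⁶ J per kilogram
```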
Variation of Boiling Point with pressure:
Boiling point increases with increase of pressure.
In a pressure cooker, rice cooks quickly because water boils at a temperature above 100°C due to the increased pressure.
Addition of impurities raises the boiling point of a liquid.
The heating of liquid above its boiling point is called super heating and cooling of liquid below its freezing point is called super cooling. It is unstable.
Variation of melting point with Pressure:
Melting point varies with pressure according to the Clausius–Clapeyron equation.
The melting point of substances which expand on melting increases with increase of pressure, e.g. wax, glass, gold, copper, silver.
The melting point of substances which contract on melting decreases with increase of pressure, e.g. ice, cast iron, bismuth, type metal.
Addition of impurities lowers the melting point.
Triple point of Water:
The temperature and pressure at which the three states of matter coexist is known as the triple point.
Triple point and phase diagram for water
OA - Ice line, OB - steam line, OC- sublimation curve
a.) The triple point of water is 273.16 K (0.01°C) at a pressure of 610.42 Pa (about 4.6 mm of Hg).
b.) Change of state from solid to directly vapour is called sublimation. eg. iodine, camphor etc.
c.) Change of state from vapour directly to solid is called Hoar Frost.
d.) If pressure at triple point is increased then water exists in liquid form. If pressure is decreased, it converts to steam.
Ice at 0°C produces more cooling than water at 0°C, because it additionally absorbs its latent heat of fusion.
(1) Steam at 100°C produces more severe burns than water at 100°C, because it additionally gives up its latent heat of vaporization. (2) It is not possible to freeze water using ice alone at 0°C. (3) It is not possible to boil water using steam alone at 100°C.
Kinetic Theory of Gases and Thermodynamics:
Internal energy : (U)
a.) U = PE + KE of molecules.
b.) KE of a molecule = (f/2)kT, where f = number of degrees of freedom, k = Boltzmann constant and T = absolute temperature.
c.) The PE of the molecules depends upon the intermolecular distance (r0).
d.) When the separation increases or decreases from the equilibrium distance r0, the PE increases.
e.) In the case of ideal gases, the PE of the molecules = 0; thereby U depends only on the KE of the gas molecules, which in turn depends on temperature.
f.) In the case of real gases, U depends upon temperature and volume; hence the absolute value of U cannot be determined.
g.) For a gaseous sample, for any process, dU = change in internal energy = n Cv dT; dU depends purely on temperature.
h.) Two ways of increasing internal energy. By transferring heat and By performing work.
i.) In the process of change of state, internal energy of system increases.
j.) Internal energy is independent of pressure and volume.
k.) The internal energy of water molecules at 0°C is greater than that of ice molecules at 0°C.
l.) When unit mass of a substance changes state at constant temperature, expanding from a volume V1 to a volume V2, the change in internal energy is dU = L − P(V2 − V1).
m. For an Ideal gas, Change in internal energy dU = 0 for
1. Cyclic process
2. Isothermal process
n. If two systems are at the same temperature, they are said to be in thermal equilibrium.
Zeroth law of thermodynamic :
(1) If two bodies are separately in thermal equilibrium with a third body, then they are in thermal equilibrium with each other.
(2) The zeroth law of thermodynamics corresponds to the existence of temperature as a property of a body.
(3) If volume of a system increases in a process then work is done by the system which is +ve. If the volume of a system decreases in a process, then work is done on the system which is - ve.
(4) The amount of work done by a system as it expands (or contracts) is given by W = ∫P dV.
(5) The external work done during an isochoric process is zero.
(6) Internal work done : Work done by a part of gas on other part of gas
(7) The P–V graph is called an indicator diagram; the area under it gives the work done.
(8) In a cyclic process work done is +ve if the cycle is clockwise and - ve if the cycle is anti clockwise.
I law of Thermodynamics :
The amount of heat supplied to a system (dQ) capable of doing external work is equal to sum of increase in the internal energy (dU) and external work done by system (dW)
dQ = dU + dW.
Sign convention :
1. dQ = +ve when heat is supplied, −ve when heat is rejected; dU = +ve when temperature increases, −ve when temperature decreases; dW = +ve when work is done by the system, −ve when work is done on the system.
2. It is a special case of law of conservation of energy.
3. dQ = dU + Pdv
For an isobaric process this gives n Cp dT = n Cv dT + n R dT (per mole), or m cp dT = m cv dT + m r dT (per unit mass).
It is another form of the law of conservation of energy.
This law does not indicate the direction of heat flow.
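A tiny numerical illustration of dQ = dU + dW and its sign convention (the values are made up):

```python
# First law: dU = dQ - dW, with dQ = heat added to the gas
# and dW = work done BY the gas.
def internal_energy_change(dQ, dW):
    return dQ - dW

# A gas absorbs 500 J of heat and does 200 J of work on its surroundings:
print(internal_energy_change(dQ=500, dW=200))  # dU = +300 J
```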
. Specific heats of gases :
1. The amount of heat required to raise the temperature of the whole system by 1°C is the heat capacity of the gas (C).
2. Cv = molar specific heat of the gas at constant volume; cv = specific heat per unit mass at constant volume. Cp = molar specific heat of the gas at constant pressure; cp = specific heat per unit mass at constant pressure. Units of Cp and Cv: J mol⁻¹ K⁻¹; units of cp and cv: J kg⁻¹ K⁻¹.
3. The relation between molar specific heat and ordinary (per unit mass) specific heat is C = M c, where M is the molecular weight of the gas.
4. Relations between the specific heats of a gas:
a) cp − cv = r, where cp, cv are the specific heats per unit mass and r is the specific gas constant;
b) Cp − Cv = R (Mayer's relation), where R is the universal gas constant.
5. The ratio of specific heats γ = Cp/Cv > 1.
6. The value of γ depends on the atomicity of the gas.
7. As the atomicity of the gas increases, the value of γ decreases:
for a monatomic gas (MAG), γ = 5/3 ≈ 1.67;
for a diatomic gas (DAG), γ = 7/5 = 1.4;
for a triatomic/polyatomic gas (TAG), γ = 4/3 ≈ 1.33.
8. Cp and Cv in terms of γ: Cv = R/(γ − 1), Cp = γR/(γ − 1).
9. If a gas possesses f degrees of freedom, then γ = 1 + 2/f.
10. Based on the degrees of freedom, Cv = (f/2)R and Cp = (f/2 + 1)R.
11. In an isobaric process, the fraction of the heat supplied that is utilised to
increase the internal energy is dU/dQ = 1/γ.
12. The fraction of the heat supplied that is utilised to
do the external work is dW/dQ = 1 − 1/γ.
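A short sketch relating the degrees of freedom f to Cv, Cp and γ, as in points 8–10 above (standard textbook values of f are used):

```python
# Cv = (f/2) R, Cp = Cv + R, gamma = Cp/Cv = 1 + 2/f.
R = 8.314  # J mol^-1 K^-1

def molar_heats(f):
    Cv = f / 2 * R
    Cp = Cv + R
    gamma = Cp / Cv
    return Cv, Cp, gamma

for name, f in [("monatomic", 3), ("diatomic", 5), ("polyatomic", 6)]:
    Cv, Cp, g = molar_heats(f)
    print(f"{name}: Cv={Cv:.2f}, Cp={Cp:.2f}, gamma={g:.2f}")
# gamma comes out ≈ 1.67, 1.40 and 1.33 respectively
```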
. Isothermal Process:
1) Isothermal change is that change of pressure and volume when the temperature of the system remains constant.
2) It is a slow process.
3) It should be conducted in a vessel that is a good thermal conductor.
4) It follows Boyle's law: P1V1 = P2V2.
5) For an ideal gas, the internal energy remains constant during an isothermal change (U is constant).
Ex. Melting of ice, Boiling of a liquid are isothermal changes.
6) The work done in an isothermal change is W = 2.303 nRT log10(V2/V1)
= 2.303 nRT log10(P1/P2)
7) The work done depends on:
a) the number of moles
b) the temperature
c) the expansion ratio
8) Isothermal elastic modulus = P
9) Slope of an isothermal curve: dP/dV = −P/V
10) Fractional change in pressure: dP/P = −dV/V
11) No two Isothermals intersect each other
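A quick numerical check of the isothermal work formula (the sample state is illustrative):

```python
# Work done in an isothermal expansion: W = nRT ln(V2/V1).
import math

R = 8.314  # J mol^-1 K^-1

def isothermal_work(n, T, V1, V2):
    return n * R * T * math.log(V2 / V1)

# 1 mol of an ideal gas doubling its volume at 300 K:
print(isothermal_work(1, 300, 1.0, 2.0))  # ≈ 1729 J
```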
. Adiabatic process :
1) Adiabatic change is that change of pressure and volume during which heat is neither given to the system, nor taken from it.
2) Temperature falls during an adiabatic expansion and rises during an adiabatic compression.
3) It is a quick process.
4) Exchange of heat does not take place between the system and the surroundings.
5) It should be conducted in a perfectly non-conducting (insulated) vessel.
6) It follows PV^γ = constant; equivalently TV^(γ−1) = constant and P^(1−γ)T^γ = constant.
7) During adiabatic process, entropy remains constant. Hence it is also known as isoentropic process.
8) In adiabatic process, dQ = 0
9) In an adiabatic process, dU = −dW.
10) The work done in an adiabatic process is W = (P1V1 − P2V2)/(γ − 1) = nR(T1 − T2)/(γ − 1).
11) Adiabatic elastic modulus = γP
12) Adiabatic Expansion of Gas is associated with decrease in pressure and temperature
13) Adiabatic compression of a gas associated with increase in pressure and temperature
14) Slope of the adiabatic curve = −γP/V; slope of the isothermal curve = −P/V.
In magnitude, the slope of the adiabatic curve is γ times the slope of the isothermal curve, so
slope of adiabatic curve > slope of isothermal curve.
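A sketch of an adiabatic expansion using PV^γ = constant and the work formula above (the initial state is made up):

```python
# Reversible adiabatic change: P2 = P1*(V1/V2)**gamma,
# W = (P1*V1 - P2*V2)/(gamma - 1) is the work done by the gas.
def adiabatic(P1, V1, V2, gamma):
    P2 = P1 * (V1 / V2) ** gamma
    W = (P1 * V1 - P2 * V2) / (gamma - 1)
    return P2, W

# Diatomic gas (gamma = 1.4) expanding from 1.0 m^3 at 100 kPa to 2.0 m^3:
P2, W = adiabatic(P1=100e3, V1=1.0, V2=2.0, gamma=1.4)
print(P2, W)  # ≈ 37.9 kPa and ≈ 60.5 kJ, less than the isothermal work
```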
.Graphs of expansion process
1. isobaric process
2. isothermal process
3. Adiabatic process
1. work done in isobaric process > isothermal > Adiabatic for same change in volume.
2. For the same increase in volume, starting from the same point, the final pressure in an adiabatic process is less than that in an isothermal process.
Graphs of compression process.
* isobaric process * isothermal process * Adiabatic process
1. work done in isobaric process > isothermal > Adiabatic process.
2. For the same decrease in volume, starting from the same point, the final pressure in an adiabatic compression is greater than that in an isothermal compression.
3. Either in compression or expansion process, to produce equal change in volume of a gas, more pressure difference is required in adiabatic change.
. II law of Thermodynamics :
Planck statement :
It is impossible to construct a heat engine which can completely convert heat energy into mechanical energy without rejecting heat to surroundings.
Kelvin statement :
It is impossible to extract work from a system by cooling it below the temperature of its surroundings. (Clausius statement: it is impossible to transfer heat from a body at a lower temperature to a body at a higher temperature unaided by an external agency.)
.Heat engine is a device which converts heat energy into mechanical work.
. Reversible and irreversible process :
(a)Reversible process :
Any process which can be made to proceed in reverse direction by variations in its conditions so that all changes occurring in the direct process are exactly reversed in the reverse process, is called a reversible process.
(b) Irreversible process :
Any process which cannot be made to proceed in reverse direction is called an irreversible process. A part of the energy of the system performs work against dissipative forces and it cannot be recovered back. A few examples of irreversible processes are, diffusion of gases, rusting of iron, sudden expansion of a gas, work done against friction etc.
. Carnot's reversible cycle:
It is a reversible engine which absorbs heat Q1 from a source maintained at a constant high temperature T1 K and rejects heat Q2 to a sink maintained at a constant low temperature T2 K. The efficiency η of this engine is given by η = 1 − Q2/Q1.
In terms of temperature, the efficiency is η = 1 − T2/T1.
% efficiency = (1 − T2/T1) × 100
. Efficiency of ideal reversible engine :
(1) depends upon the temperature of the source (T1) and that of the sink (T2) ;
(2) is independent of the nature of the working substance ;
(3) is the same for all reversible engines working between the same two temperatures ;
(4) increases with the temperature difference (T1 − T2) between the source and the sink;
(5) is always less than 1 (100%), because T2 can never in practice be zero and hence Q2 can never be zero;
(6) is zero if the temperature of sink happens to be the same as the temperature of source.
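A one-line numerical illustration of the Carnot efficiency (the temperatures are sample values):

```python
# Carnot efficiency: eta = 1 - T2/T1, with temperatures in kelvin.
def carnot_efficiency(T_source, T_sink):
    return 1 - T_sink / T_source

print(carnot_efficiency(500, 300))  # 0.4, i.e. 40 %
```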
. Types of heat engines :
The following two types of heat engines are in common use.
1. External combustion engine:
In this category of heat engines, the source of heat lies outside the engine; that is, heat is produced by burning the fuel outside the engine. The steam engine is an example of this class of heat engines.
2. Internal combustion engine:
In this category of heat engines, the source of heat lies inside the engine; that is, heat is produced by burning the fuel inside the engine. Petrol engines and diesel engines are examples of this class of heat engines.
(i) In the working of a steam engine, heat Q1 is absorbed from the source and Q2 is rejected to the sink. The efficiency of the steam engine is given by η = 1 − Q2/Q1.
(ii) Otto's PETROL ENGINE :
This engine works in FOUR strokes, which together make up a reversible Otto cycle.
. EFFICIENCY OF PETROL ENGINE:
Let Q1 be the heat generated due to the burning of the petrol, and Q2 the heat rejected into the atmosphere during release of the fuel mixture. Then the efficiency of the engine is η = 1 − Q2/Q1 ... (i)
If the volume of the fuel mixture at the end of the working stroke is V2 and that at the end of the compression stroke is V1,
then it is found that η = 1 − (V1/V2)^(γ−1),
where γ is the ratio of the specific heat of air at constant pressure to that at constant volume. The ratio V2/V1 is called the adiabatic compression ratio and may be denoted by ρ.
Hence, equation (i) may be written as η = 1 − (1/ρ)^(γ−1).
It is found that in practice ρ cannot be more than about 10, because a larger compression ratio produces such a high temperature during compression that the petrol may be ignited before the completion of the compression stroke, which is not proper for the functioning of the engine. In practice ρ is around 5. For air, γ ≈ 1.4.
(i) A refrigerator is a reversible engine operating in the reverse direction.
In a refrigerator, an amount of heat Q2
is removed from the sink at the lower temperature T2 and, with the help of external work W, heat Q1 is rejected at the higher temperature T1
to the source. Thus Q2 + W = Q1,
or W = Q1 − Q2.
Coefficient of performance:
It is defined as the ratio of the amount of heat removed from the sink to the amount of work done in removing it, β = Q2/W = Q2/(Q1 − Q2) = T2/(T1 − T2).
b. A heat engine absorbs heat Q1 from a hot body (source), converts a part of it into work W and rejects the rest, Q2, to a cold body known as the sink. The efficiency of the heat engine is
η = W/Q1 = 1 − Q2/Q1 = 1 − T2/T1.
The efficiency of a heat engine is always less than 1 (100%).
T1 = temperature of the source,
T2 = temperature of the sink.
If the temperature of the sink were absolute zero, then η = 1 (100%); this is unattainable in practice.
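A short sketch of the refrigerator's coefficient of performance quoted above (the temperatures are sample values):

```python
# Ideal refrigerator: beta = Q2/W = T2/(T1 - T2), temperatures in kelvin.
def refrigerator_cop(T1, T2):
    return T2 / (T1 - T2)

# Freezer interior at 260 K, room at 300 K:
print(refrigerator_cop(300, 260))  # 6.5: each joule of work removes 6.5 J of heat
```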
Heat is Transmitted by three methods namely, Conduction, Convection and Radiation.
1. Conduction is the phenomenon of heat transfer without the actual displacement of the particles of the medium; the particles of the medium execute vibratory motions about their mean positions.
Heat Transfer in a metal rod (solid)
Steady State :
In the process of heat conduction through a conductor from the hot end to the cold end, if no heat is absorbed along the conductor, the conductor is said to be in a steady state. The temperature at each point of the conductor then remains constant in time
(the temperature of each section is constant, but different sections are at different temperatures). Under the steady state of the conductor,
i) rate of flow of heat Q/t = constant
ii) temperature gradient along the conductor dθ/dx = constant
3.Coefficient of Thermal Conductivity : K
The quantity of heat conducted through a metal rod in the steady state is
i) directly proportional to the area of cross-section (A)
ii) directly proportional to the temperature difference (θ1 − θ2) between the hot and cold ends
iii) directly proportional to the time of flow of heat (t)
iv) inversely proportional to the length (l) of the rod.
Hence Q = KA(θ1 − θ2)t/l, where K is the coefficient of thermal conductivity of the material of the conductor. It is a property of the material of the conductor and is independent of the dimensions of the conductor.
To define K :
It is defined as the Rate of flow of Heat per unit area of cross section per unit Temperature gradient in steady state.
Units of K: CGS — cal s⁻¹ cm⁻¹ °C⁻¹; SI — W m⁻¹ K⁻¹. Dimensional formula of K: [MLT⁻³K⁻¹]
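A minimal sketch of the steady-state conduction formula Q = KA(θ1 − θ2)t/l (the rod dimensions are made up; K for copper is taken from the list below):

```python
# Heat conducted through a rod in steady state: Q = K*A*dT*t / l.
def heat_conducted(K, area, dT, time, length):
    return K * area * dT * time / length

# Copper rod (K ≈ 383 W m^-1 K^-1), 1 cm^2 cross-section, 0.5 m long,
# ends held 100 K apart, for 60 s:
print(heat_conducted(383, 1e-4, 100, 60, 0.5))  # ≈ 460 J
```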
Values of K :
For a perfect conductor, K = ∞
For a perfect Insulator K = 0
If K value is more, it is a good conductor of Heat
If K value is less, it is a bad conductor of Heat.
(vii) Among metals Silver, Copper, Aluminium, Iron are in decreasing order of good conducting nature.
1) Silver K = 408 Wm-1K-1
2) Copper K = 383Wm-1K-1
3) Aluminium K = 205 Wm-1K-1
4) Iron K = 63 Wm-1K-1
(viii) Mercury is the best conductor among liquids.
4. (i) K of a good conductor is determined by Searle's method.
(ii) K of an insulator (bad conductor) is determined by
Lee's disc method.
5.Junction Temperature :
If two metal slabs of equal cross-sectional area, having lengths l1, l2,
coefficients of thermal conductivity K1, K2
and free-end temperatures θ1, θ2,
are kept in contact with each other, then under steady state,
junction temperature θ = (K1θ1/l1 + K2θ2/l2) / (K1/l1 + K2/l2)
Special case: if l1 = l2, then θ = (K1θ1 + K2θ2)/(K1 + K2).
6. i) Generally Solids are better conductors than Liquids, Liquids are better conductors than Gases.
ii) Metals are much better conductors than Non-Metals, because Metals contain Free electrons.
7.Thermal Diffusivity (or) Thermometric conductivity D :
It is the ratio of the coefficient of thermal conductivity (K) to the thermal capacity per unit volume (ρs) of a material: D = K/(ρs).
Thermal Conductance (C) :
For a conductor,
thermal conductance = KA/l
Here K = coefficient of thermal conductivity
A = area of cross-section
l = length of the conductor
SI unit: W K⁻¹
9. i) The thermal resistance (R) of a conductor of length l,
cross-section A and conductivity K is given by the formula
thermal resistance R = l/(KA)
SI unit: K W⁻¹
Dimensional formula: [M⁻¹L⁻²T³K]
ii) Thermal resistance = temperature difference / heat current = (θ1 − θ2)/(Q/t)
iii) The rate of flow of heat in terms of the thermal resistance R is Q/t = (θ1 − θ2)/R
Effective conductivity :
(i) Series Combination :
a) If two rods of the same cross-sectional area, having lengths l1, l2 and conductivities K1, K2,
are connected in series, then in the steady state the conductivity K of the combination is given by (l1 + l2)/K = l1/K1 + l2/K2.
Special case: if l1 = l2, then K = 2K1K2/(K1 + K2).
b) If three rods of the same cross-sectional area, having lengths l1, l2, l3
and conductivities K1, K2, K3,
are connected in series, then in the steady state the conductivity K of the combination is given by (l1 + l2 + l3)/K = l1/K1 + l2/K2 + l3/K3.
ii) Parallel combination:
a) If two rods of the same length, having cross-sectional areas A1, A2 and conductivities K1, K2,
are arranged in parallel, then in the steady state the conductivity K of the combination is given by K(A1 + A2) = K1A1 + K2A2.
Special case:
if A1 = A2 = A, then K = (K1 + K2)/2.
b) If three rods of the same length, having cross-sectional areas A1, A2, A3 and conductivities K1, K2, K3,
are connected in parallel, then in the steady state the conductivity K of the combination is given by K(A1 + A2 + A3) = K1A1 + K2A2 + K3A3.
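The series and parallel rules above in a small Python sketch (the conductivity values are arbitrary, chosen only to show that the equal-length and equal-area special cases drop out):

```python
# Effective conductivity of two rods in series and in parallel.
def series_K(l1, K1, l2, K2):
    return (l1 + l2) / (l1 / K1 + l2 / K2)

def parallel_K(A1, K1, A2, K2):
    return (K1 * A1 + K2 * A2) / (A1 + A2)

print(series_K(1, 400, 1, 100))    # 160.0 = 2*K1*K2/(K1 + K2)
print(parallel_K(1, 400, 1, 100))  # 250.0 = (K1 + K2)/2
```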
11. i) Cooking utensils are made of metals which are good conductors of heat.
ii) In winter, a metal chair is colder to touch than a wooden chair at the same temperature. The reason is metal is a good conductor and wood is a bad conductor of heat.
iii) In summer, a metal chair is hotter to touch than a wooden chair at the same temperature.
iv) A block of metal and a block of wood can be felt equally cold or hot when touched, if they are at the temperature of the human body.
v) Hot rice cooked in a vessel can be touched while the vessel cannot be touched. Rice is a bad conductor of heat.
vi) Two thin blankets are warmer than a single thick blanket. The reason is air which is trapped in between the blankets is a poor conductor of heat.
vii) Davy's safety lamp used in mines works on the principle of heat conduction.
Growth of thickness of Ice layer on Ponds :
When atmospheric temperature falls below 0°C, water in a lake starts freezing.
i) The time taken to form an ice layer of thickness x on the pond is t = ρLx²/(2Kθ),
where ρ is the density of ice,
L is the latent heat of fusion of ice,
K is the thermal conductivity of ice,
θ is the magnitude of the atmospheric temperature below 0°C.
ii) To increase the thickness of the ice layer from x1 to x2, the time required is t = ρL(x2² − x1²)/(2Kθ).
1. It is the phenomenon of Heat transfer by the actual displacement of the particles of the medium in a fluid.
Ex. : Heat Transfer in Liquids & Gases
2. Convection which results from difference in densities is called natural convection.
Ex: A fluid heated in a container.
3. If a heated fluid is forced to move by a blower (or) pump then the phenomenon is called forced convection [induced convection]
Ex: Temperature of human body is kept constant by pumping blood with heart pump. Here the transfer of heat is by forced convection.
4. The rate of heat loss by convection from an object is Q/t = hAΔθ.
Here A = contact area,
Δθ = temperature difference between
the object and the convecting fluid,
h = a constant called the convection coefficient; it depends on the properties of the fluid such as density, viscosity, specific heat and thermal conductivity.
5. In case of natural convection, convection currents move warm fluid upwards and cool fluid downwards. Hence, heating is done from base to top while cooling is from top to base.
6. Natural convection takes heat from the bottom to the top while forced convection may take heat in any direction.
7. Natural convection cannot take place in a gravity free region.
Examples: an orbiting satellite, a freely falling lift.
8. Natural convection is the principle behind the working of a ventilator, the working of a chimney, changes in climatic conditions, the formation of land and sea breezes, trade winds, ocean currents, etc.
1. Radiation is the phenomenon of transfer of heat without necessity of a material medium. It is by virtue of electromagnetic waves.
Energy radiated from a body is called Radiant energy.
Rate of emission of radiant energy depends on
i) Nature of surface of the body
ii) Surface area of the body
iii) Temperature of the body and surroundings
Properties of Thermal Radiation:
(i) It is the invisible electromagnetic radiation emitted from a hot body.
(ii) It lies in the I.R. region, in the wavelength range from 7.5 × 10⁻⁷ m to 4 × 10⁻⁴ m.
(iii) It travels in vacuum with the velocity of light (3 × 10⁸ m s⁻¹). It can also travel through a medium without affecting it.
(iv) It exhibits the Phenomena of Reflection, Refraction, Interference, Diffraction and Polarisation like light.
(v) It obeys the inverse square law, I ∝ 1/d²,
where I = intensity of radiation
and d = distance from the source.
(vi) It can be detected by Thermocouple, Thermopile, Bolometer, Pyrometer, Radio-micrometer, Differential air thermoscope etc.,
(vii) Its spectrum can be formed by prisms of Rock-salt, KCl etc.,
(viii) Rough and black surfaces are good absorbers while shining and smooth surfaces are good reflectors of heat radiation.
Ex : Transmission of heat from Sun to Earth.
3. i) The substances which absorb heat radiations and get themselves heated up are called athermanous substances
Eg: wood, water vapour, water, metals, glass.
ii) The substances which allow heat radiations to pass through them are called diathermanous substances
Eg: dry air, rocksalt, quartz, sylvine
4. Prevost's theory of Heat Exchange :
i) Every body emits and absorbs heat radiations at all temperatures except at absolute zero (-273ºC)
ii) If a body emits more heat energy than it absorbs from the surroundings, then its temperature falls.
iii) If a body absorbs more heat energy than what it emits then its temperature rises.
iv) If a body emits & absorbs heat in equal amounts, then it is said to be in Thermal equilibrium.
v) When the temperatures of body and surroundings are equalized, conduction and convection stop but the radiation exchange takes place.
5. Perfect blackbody:
i) It is a body which absorbs all the heat radiations incident on it.
ii) On heating, it emits radiations of all possible wavelengths at a given temperature.
iii) The wavelengths of the emitted heat radiations depend only on the temperature but are independent of the material of the blackbody.
Ex: lamp black (absorbs about 96%) and platinum black (about 98%) are close approximations; Fery's and Wien's black bodies are artificial black bodies.
'Sun' is natural blackbody.
6. Spectral emissive power (eλ):
i) It is the amount of energy radiated by unit surface area per second per unit wavelength range at a given temperature.
ii) The emissive power depends upon the nature of the surface and the temperature of the body.
iii) It is maximum for a perfect blackbody and minimum for a smooth, shining white surface.
7. Spectral absorptive power :
i) For a given wavelength and temperature, it is the ratio of radiant energy absorbed by unit surface area per second to that incident on it in the same time.
aλ = Qa/Qi
ii) For a perfect blackbody,
aλ = 1 (Qi = Qa)
iii) Absorptive power depends upon
nature of the surface and temperature of the body.
Emissivity or relative emittance (e) :
For a perfect blackbody e = 1
For anybody 0 < e < 1
9. For a surface if a = Absorptive power,
r = Reflecting power, and t = Transmitting power then
a + r + t =1
for a black body r =0 , t = 0, a=1
Kirchhoff's law:
For a given temperature and wavelength, the ratio of emissive power to absorptive power of all bodies is always a constant. The constant is equal to emissive power of a perfect blackbody at the same temperature and wavelength.
i) Good emitters are good absorbers and vice versa.
ii) With increase of temperature, the emissive power increases.
i) A white china cup with a black spot is heated to high temperature and kept in a dark room. The spot appears brighter than the remaining part, because black is good absorber and hence good emitter.
ii) A Red glass when heated to high temperature kept in a dark room, appears Green and vice versa
iii) A Yellow glass when heated to high temperature and kept in a dark room appears Blue and vice versa
The above pairs of colour are called complementary colours.
iv) Dark lines in the solar spectrum are called Fraunhofer lines; some wavelengths of the white light from the photosphere are absorbed by elements in the chromosphere.
During a total solar eclipse, the absorption spectrum is not seen; instead an emission spectrum, complementary to the earlier absorption spectrum, is seen.
i) The amount of heat radiated per second from unit surface area of a blackbody (E) is proportional to the fourth power of its absolute temperature (T):
E = σT⁴, so the power radiated by a surface of area A is P = σAT⁴ (watt), where σ is Stefan's constant,
σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴
ii) If the body is not a blackbody, then
P = eσAT⁴
(e lies between 0 and 1)
e = emissivity of the body.
Heat radiated in time t: Q = eσAT⁴t
Stefan - Boltzmann Law :
i) If a blackbody at absolute temperature TB
is in an enclosure at absolute temperature TS,
then the net loss of thermal energy by the body per unit time is
P = σA(TB⁴ − TS⁴). ii) If it is not a blackbody, then
P = eσA(TB⁴ − TS⁴), where e = emissivity
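A short numerical sketch of this net-loss formula (the area, emissivity and temperatures are made-up sample values):

```python
# Net radiative loss: P = e * sigma * A * (T_body^4 - T_surroundings^4).
SIGMA = 5.67e-8  # W m^-2 K^-4

def net_radiated_power(e, area, T_body, T_surr):
    return e * SIGMA * area * (T_body**4 - T_surr**4)

# A person (A ≈ 1.5 m^2, e ≈ 0.9) with skin at 306 K in a 293 K room:
print(net_radiated_power(0.9, 1.5, 306, 293))  # ≈ 107 W
```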
Newton's Law of cooling:
(i) The rate of cooling (the rate of fall of temperature) of a hot body is directly proportional to the difference between the mean excess temperature of the body and the temperature of its surroundings:
−dθ/dt = k(θ − θ0), where θ0 = temperature of the surroundings.
If the body cools from θ1 to θ2 in time t
under forced convection, then (θ1 − θ2)/t = k[(θ1 + θ2)/2 − θ0],
where k is a constant, independent of the time of cooling.
(ii) As the body cools, its rate of cooling goes on decreasing
(iii) Cooling curve of a hot body with time is exponential
To compare the specific heats of two liquids from their cooling curves: the liquid whose cooling curve has the smaller slope has the greater specific heat.
(iv)The body never cools below the temperature of surroundings.
(v) Newton's law of cooling is a special case of the Stefan–Boltzmann law, valid for small temperature differences.
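A sketch that steps Newton's law of cooling forward in time to show the shape of the cooling curve (the starting temperature, surroundings and k are illustrative):

```python
# Newton's law of cooling: -d(theta)/dt = k*(theta - theta0),
# stepped numerically with a simple Euler update.
def cool(theta_start, theta_surroundings, k, dt, steps):
    theta = theta_start
    history = [theta]
    for _ in range(steps):
        theta -= k * (theta - theta_surroundings) * dt
        history.append(theta)
    return history

# Body at 80 °C in 20 °C surroundings, k = 0.05 per minute, 1-minute steps:
temps = cool(80, 20, 0.05, 1, 30)
print(round(temps[10], 1), round(temps[30], 1))  # falls toward 20 °C but never below it
```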
When the heat loss is by radiation, the rate of cooling, using Stefan–Boltzmann's law, is
−dθ/dt = eσA(T⁴ − TS⁴)/(ms),
here m = mass of the body,
s = specific heat of the material of the body.
(iii) Rate of cooling by radiation depends upon :
a) Nature of the radiating surface i.e., greater the emissivity, faster will be the cooling.
b) Area of the radiating surface , i.e., greater the area of radiating surface, faster will be the cooling
c) Temperature of the radiating body, i.e., greater the temperature faster will be the cooling.
d) Temperature of the surroundings i.e., greater the temperature of surroundings slower will be the cooling
e) Mass of the body i.e., greater the mass of the radiating body slower will be the cooling.
f) Sp.heat of the body i.e., greater the specific heat of the radiating body slower will be the cooling.
(iv) For a spherical body, the rate of cooling is inversely proportional to its radius: larger spheres cool more slowly.
(v) A solid sphere and a hollow sphere of same material are of equal radii. They are heated to the same temperature and allowed to cool in the same environment. Now
a) The hollow sphere cools faster
b) The rate of loss of heat is same for both the spheres.
Energy Distribution in the Spectrum of black body radiation:
i.) At different temperatures, graphs are drawn taking the wavelength (λ) along the x-axis and the energy density per unit wavelength range (i.e. the energy radiated per unit area per unit time per unit wavelength range, Eλ) along the y-axis.
ii.) The energy is not distributed uniformly at a given temperature
iii.) At any temperature, radiations of all wavelengths from 0 to ∞ are emitted.
iv.) With the increase of temperature, the wavelength corresponding to maximum intensity decreases.
v.) The area under the curve gives the total radiant energy at that temperature. Increase of area with temperature is according to Stefan's law.
Laws of Distribution of Energy radiated:
Three laws were proposed to explain the distribution of energy radiated by a black body
1.) Wien's Law:
i.) Spectral energy density: Eλ ∝ λ⁻⁵ e^(−a/λT)
ii.) The energy distribution can be explained by the formula Eλ dλ = Aλ⁻⁵ e^(−a/λT) dλ, where A and a are constants.
iii.) This law is applicable to shorter wavelengths only (it is based upon classical mechanics)
2.) Rayleigh - Jean's law :
i.) Spectral energy density: Eλ ∝ λ⁻⁴
ii.) The energy distribution can be explained by the formula Eλ = 8πkT/λ⁴,
where k = 1.38 × 10⁻²³ J K⁻¹ (Boltzmann constant).
iii.) This law is applicable to longer wavelengths only (it is based upon statistical mechanics)
3.) Plank's law :
i.) A blackbody emits discrete energy packets called quanta, each having energy E = hν.
ii.) The energy distribution can be explained by the formula Eλ dλ = (8πhc/λ⁵) · dλ/(e^(hc/λkT) − 1).
iii.) This law is applicable to all wavelength ranges (it is based upon quantum mechanics)
Wien's fifth power law :
The monochromatic energy density of a blackbody corresponding to the wavelength of maximum energy is directly proportional to the fifth power of its absolute temperature: (Eλ)max = KT⁵,
where K = 1.29 × 10⁻⁵ (in SI units).
Wien's Displacement law:
In the blackbody radiation spectrum, the wavelength corresponding to maximum energy (maximum intensity) is inversely proportional to the absolute temperature:
λm T = constant = b,
where b = 2.9 × 10⁻³
m K is Wien's constant.
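A tiny application of the displacement law (the temperatures are sample values):

```python
# Wien's displacement law: lambda_max = b / T.
B_WIEN = 2.9e-3  # m K

def peak_wavelength(T):
    return B_WIEN / T

print(peak_wavelength(5800))  # ≈ 5.0e-7 m: the Sun peaks in the visible range
print(peak_wavelength(300))   # ≈ 9.7e-6 m: room-temperature bodies peak in the infrared
```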
The λm–T graph is a rectangular hyperbola, while the (1/λm)–T graph is a
straight line passing through the origin. | http://www.goiit.com/posts/show/0/content-class-11-heat-thermodynamics-903651.htm?selectedParentTagId=194&selectedChildTagId=209&selectedTopicId=903651 | 13
80 | Theorems in Geometry
In our study of geometry, we will deal with many geometric figures such as triangles and circles, and we will be concerned mostly with their properties. A property of a geometric figure is some interesting or important thing that is true about the figure. For example, a property of a triangle is that it has 3 sides; this property comes from the definition of a triangle. But once we define "triangle", we might notice that it has other properties.
Draw 2 different triangles on a piece of paper, measure each side length with a ruler, and measure each angle with a protractor as shown in the example below:
Do you think there is a relationship between the lengths of the sides and the sizes of the angles? If so, this would be an important property of triangles! In the diagram above, notice the measures of the sides and angles. What appears to be true about the relationship between the side and angle measures? (If you have geometry software, you can try this on a computer. When you drag a vertex of the triangle, the lengths of the sides change. You will notice that the measures of the angles change also.)
Did you notice that the sides of the first triangle are all equal and the angles are all equal also, but in the second triangle the sides are unequal and the angles are unequal? This is an important property of triangles! Also, do you notice anything else about the angles and sides? In the triangle on the right, which side is the longest, and which angle is the largest? In that same triangle, which side is the shortest, and which angle is the smallest? If you draw more triangles and measure them, you will find that the largest angle in a triangle is always opposite the longest side, the "medium sized" angle in the triangle is opposite the "medium sized" side, and the smallest angle is opposite the shortest side. This is an important, and interesting property of triangles!
Mathematicians have, for centuries, explored geometric figures in order to discover their properties. Sometimes we have a feeling that we have discovered a property, something that seems to always be true, and we call this feeling a conjecture or hypothesis or theory. A conjecture is a belief that something is true. But even though we may feel very strongly that it is true, we are not absolutely positive that it's true: maybe there is one exception, one type of triangle in which it isn't true!
Where do they come from? Perhaps you have taken a science class, and done lab experiments. If you are trying to find the properties of a piece of rock, you might experiment with the rock, trying to determine its properties. You might see if it floats by putting it in water, or check to see if it is flammable by lighting a match to it. Whatever you found in your experiments, you might then consider a property of the rock. If no one had ever discovered these properties for this particular rock, then these would be your discoveries, and you could call them by your own name. If it turned out to be a very significant scientific discovery, you might even become famous! Much the same is true in mathematics. Mathematicians study geometric figures, make conjectures, experiment and test the relationships, and then try to prove that their conjectures are true.
As a mathematician works on a math problem, he or she may notice something interesting about the problem, something that seems to relate to another problem or seem to be true in more than just this one situation. The mathematician might then explore this idea in a number of ways. One way would be to do some very accurate diagrams, measure them, and try to check the validity of their conjecture. Software such as the Geometer's Sketchpad can make this process much easier, and more visual. If the conjecture seems to be true, then the mathematician could either accept that it is true without proof and hope that he or she is correct, or prove that it is true. A mathematical proof is a written verification that a conjecture is true. Once proved, the conjecture is called a theorem. (Occasionally, a proof is discovered to be incorrect, and the theorem is then in question, and the mathematician may be a bit embarrassed!)
"Euclid was a Greek mathematician who lived around 300 B.C. He established a mathematical school in Alexandria And created much of the geometry we study today.The name of Euclid is often considered synonymous with geometry. His book The Elements is one of the most important and influential works in the history of mathematics, having served as the basis, if not the actual text, for most geometrical teaching in the West for the past 2000 years. It contributed greatly to the 'geometrization' of mathematics and set the standard for rigor and logical structure for mathematical works.
In the thirteen books of The Elements, Euclid presents, in a very logical way, all of the elementary Greek geometrical knowledge of his day. This includes the theorems and constructions of plane geometry and solid geometry, along with the theory of proportions, number theory, and a type of geometrical algebra. The Elements is a textbook which gathers into one place the concepts and theorems which constitute the foundation of Greek mathematics. Euclid was not the first to write such a work. It is known that Hippocrates of Chios (440 B.C.) and others had composed books of elements before him. However, Euclid's treatise was quickly recognized as being superior to all previous Elements and none of the earlier works have survived.
Euclid's book The Elements contained Definitions, Postulates (sometimes called Axioms), and Propositions.
A Postulate or Axiom is a mathematical property that is assumed to be true without proof. (The Greeks made a distinction between postulates and axioms, but modern mathematicians usually do not.)
Some of the original axioms of Euclid, which he calls "common notions", are the following:
1. Things which are equal to the same thing are also equal to one another.
2. If equals are added to equals, the wholes are equal.
3. If equals are subtracted from equals, the remainders are equal.
4. The whole is greater than the part."
A proposition is usually a statement about the properties of a geometric object. We call them theorems, in modern geometry. They are accepted as true after having been proved. Most of what we will be studying in modern geometry are theorems, and the great majority of these theorems are based on Euclid's original Propositions. The Elements contained 353 Propositions. It is really amazing that these Propositions are still used today, after so many centuries!
According to another author, "Euclid's Elements form one of the most beautiful and influential works of science in the history of humankind. Its beauty lies in its logical development of geometry and other branches of mathematics. It has influenced all branches of science but none so much as mathematics and the exact sciences. The Elements have been studied 24 centuries in many languages starting, of course, in the original Greek, then in Arabic, Latin, and many modern languages."
This quote was taken from a very interesting website. If you would like to visit this site, click on the link below:
How do Mathematicians Decide if a Conjecture is True?
In this chapter of Connecting Geometry, you will use The Geometer's Sketchpad to experiment with some geometric figures. You will make discoveries about these figures, and you will be asked to write a brief paragraph proving that your conjecture is true. In future chapters, you will often be asked to experiment with other geometric figures. Your conjectures will be verified using various methods. A description of one of these methods of verification is written below.
Deductive proof is one way to prove things in mathematics. It depends on a kind of reasoning called deductive reasoning. When we use deductive reasoning, we write a series of statements, each of which is either some information that we were given (called the Given), a definition, a postulate, or a previously proved theorem. In the proof of the right angle theorem below, you can probably see which is the Given. A definition used in this proof is the definition of right angle, a postulate used in this proof is Euclid's first postulate. The final result of this proof is the Right Angles Theorem.
The sequential steps in this proof form what is called a chain of reasoning. In a chain of reasoning, each thought is dependent on, and follows from, the previous thought, and all lead to the final conclusion. At the end, we have proved that which we set out to prove. This is what we mean by deductive proof. Notice that the theorem is written using the words if . . . and then . . . This is the standard way in which theorems are written. Although the theorem may seem to be obviously true, to be called a Theorem it nevertheless requires proof. The proof may seem short and lacking in real substance, but we do want to start off with a nice, short, easy one. Don't worry, they will get more interesting (and more complex!).
Theorem: if two angles are right angles then they are equal.
GIVEN: angle A is a right angle, angle B is a right angle
PROVE: angle A = angle B
PROOF: Since angles A and B are each right angles, angle A = 90° and angle B = 90° by the definition of a right angle. Since angles A and B are equal to the same thing (90°), they are equal to each other.
Mathematicians prove theorems in a variety of ways. Sometimes the proofs of theorems can be very brief and informal, such as the Paragraph Proof. Some are more formal, such as the Two Column Proof. Proofs can also be written using algebra and using coordinate geometry. In this course, we will explore each of these different types of proof as they apply to our studies in geometry.
Your project is to do some research of your own on the internet, on the History of Geometry. I would suggest you begin with the following links, to get warmed up. Then do a Net Search using Geometry as the Key Word. Write a 2 to 3 page report on what you discover, typing in a GSP file. Please do not just copy and print pages off the internet. You should search, read what you find, think about it, and write a summary/discussion of what you find. You may include direct quotations (a few sentences or even a paragraph), but be sure to put these portions in quotation marks, and include the web addresses to give credit to your sources as I have done above.
Go to Chapter 4 Congruent Triangles
Back to Chapter List
Back to Connecting Geometry Home Page | http://mathforum.org/sanders/connectinggeometry/ch_03Theorems.html | 13 |
105 | Space debris, also known as orbital debris, space junk, and space waste, is the collection of defunct objects in orbit around Earth. This includes everything from spent rocket stages, old satellites, fragments from disintegration, erosion, and collisions. Since orbits overlap with new spacecraft, debris may collide with operational spacecraft.
Currently about 19,000 pieces of debris larger than 5 cm are tracked, with another 300,000 pieces smaller than 1 cm below 2,000 km altitude. For comparison, the ISS orbits in the 300–400 km range, and both the 2009 satellite collision and the 2007 anti-satellite test occurred at between 800 and 900 km.
Most space debris is less than 1 cm (0.39 in), including dust from solid rocket motors, surface degradation products such as paint flakes, and coolant released by RORSAT nuclear-powered satellites. Impacts of these particles cause erosive damage, similar to sandblasting. Damage can be reduced with a "Whipple shield", which, for example, protects some parts of the International Space Station. However, not all parts of a spacecraft can be protected in this manner, e.g. solar panels and optical devices (such as telescopes or star trackers), and these components are subject to constant wear by debris and micrometeoroids. As of about 2012, the flux of space debris exceeds that of meteoroids below 2,000 km altitude for most particle sizes.
Safety from debris over 10 cm (3.9 in), comes from maneuvering a spacecraft to avoid a collision. If a collision occurs, resulting fragments over 1 kg (2.2 lb) can become an additional collision risk. As the chance of collision is influenced by the number of objects in space, there is a critical density where the creation of new debris occurs faster than the various natural forces remove them. Beyond this point a runaway chain reaction may occur that pulverizes everything in orbit, including functioning satellites. Called the "Kessler syndrome", there is debate if the critical density has already been reached in certain orbital bands.
A runaway Kessler syndrome would render the useful polar-orbiting bands difficult to use, and greatly increase the cost of space launches and missions. Measurement, growth mitigation and active removal of space debris are activities within the space industry today.
In 1946, during the Giacobinid meteor shower, Helmut Landsberg collected several small magnetic particles that were apparently associated with the shower. Fred Whipple was intrigued by this and wrote a paper that demonstrated that particles of this size were too small to maintain their velocity when they encountered the upper atmosphere. Instead, they quickly decelerated and then fell to Earth unmelted. In order to classify these sorts of objects, he coined the term "micro-meteorites".
Whipple, in collaboration with Fletcher Watson of the Harvard Observatory, led an effort to build an observatory to directly measure the velocity of the meteors that could be seen. At the time the source of the micro-meteorites was not known. Direct measurements at the new observatory were used to locate the source of the meteors, demonstrating that the bulk of material was left over from comet tails, and that none of it could be shown to have an extra-solar origin. Today it is understood that meteors of all sorts are leftover material from the formation of the solar system, consisting of particles from the interplanetary dust cloud or other objects made up from this material, like comets.
The early studies were based on optical measures only. In 1957, Hans Pettersson conducted one of the first direct measurements of the fall of space dust on the Earth, estimating it to be 14,300,000 tons per year. This suggested that the meteor flux in space was much higher than the number based on telescope observations. Such a high flux presented a very serious risk to missions deeper in space, specifically the high-orbiting Apollo capsules. To determine whether the direct measure was accurate, a number of additional studies followed, including the Pegasus satellite program. These showed that the rate of meteorites passing into the atmosphere, or flux, was in line with the optical measures, at around 10,000 to 20,000 tons per year.
Micrometeoroid shielding
Whipple's work pre-dated the space race and it proved useful when space exploration started only a few years later. His studies had demonstrated that the chance of being hit by a meteor large enough to destroy a spacecraft was extremely remote. However, a spacecraft would be almost constantly struck by micrometeorites, about the size of dust grains.
Whipple had already developed a solution to this problem in 1946. Originally known as a "meteor bumper" and now termed the Whipple shield, this consists of a thin foil film held a short distance away from the spacecraft's body. When a micrometeorite strikes the foil, it vaporizes into a plasma that quickly spreads. By the time this plasma crosses the gap between the shield and the spacecraft, it is so diffused that it is unable to penetrate the structural material below. The shield allows a spacecraft body to be built to just the thickness needed for structural integrity, while the foil adds little additional weight. Such a spacecraft is lighter than one with panels designed to stop the meteors directly.
For spacecraft that spend the majority of their time in orbit, some variety of the Whipple shield has been almost universal for decades. Later research showed that ceramic fibre woven shields offer better protection to hypervelocity (~7 km/s) particles than aluminium shields of equal weight. Another modern design uses multi-layer flexible fabric, as in NASA's TransHab expandable space habitation module.
Kessler's asteroid study
As space missions moved out from the Earth and into deep space, the question arose about the dangers posed by the asteroid belt environment, which probes would have to pass through on voyages to the outer solar system. Although Whipple had demonstrated that the near-Earth environment was not a problem for space travel, the same depth of analysis had not been applied to the belt. Starting in late 1968, Donald Kessler published a series of papers estimating the spatial density of asteroids. The main outcome of this work was the demonstration that risks in transiting the asteroid belt could be mitigated, and the maximum possible flux was about the same as the flux in near-Earth space. A few years later, the Pioneer and Voyager missions demonstrated this to be true by successfully transiting this region.
The evolution of the asteroid belt had been studied as a dynamic process since it was first considered by Ernst Öpik. Öpik's seminal paper considered the effect of gravitational influence of the planets on smaller objects, notably the Mars-crossing asteroids, noting that their expected lifetime was on the order of billions of years. A number of papers explored this work further, using elliptical orbits for all of the objects and introducing a number of mathematical refinements. Kessler used these methods to study Jupiter's moons, calculating expected lifetimes on the order of billions of years and demonstrating that several of the outer moons were almost certainly the result of recent collisions.
NORAD, Gabbard and Kessler
Since the earliest days of the space race, the North American Aerospace Defense Command (NORAD) has maintained a database of all known rocket launches and the various objects that reach orbit as a result – not just the satellites themselves, but the aerodynamic shields that protected them during launch, upper stage booster rockets that placed them in orbit, and in some cases, the lower stages as well. This was known as the Space Object Catalog when it was created with the launch of Sputnik in 1957. NASA published modified versions of the database in the now common two-line element set format via mail, and starting in the early 1980s, the CelesTrak Bulletin Board System (BBS) re-published them.
The trackers that fed this database were aware of a number of other objects in orbit, many of which were the result of on-orbit explosions. Some of these were deliberately caused during the 1960s anti-satellite weapon (ASAT) testing, while others were the result of rocket stages that had "blown up" in orbit as leftover propellant expanded into a gas and ruptured their tanks. Since these objects were only being tracked in a haphazard manner, a NORAD employee, John Gabbard, took it upon himself to keep a separate database of as many of these objects as he could. Studying the results of these explosions, Gabbard developed a new technique for predicting the orbital paths of their products. "Gabbard diagrams" (or plots) have since become widely used. Working with Preston Landry, Gabbard used these studies to dramatically improve the modelling of orbital evolution and decay.
When NORAD's database first became publicly available in the 1970s, Kessler applied the same basic technique developed for the asteroid belt study to the database of known objects. In 1978, Kessler and Burton Cour-Palais co-authored the seminal Collision Frequency of Artificial Satellites: The Creation of a Debris Belt, which showed that the same process that controlled the evolution of the asteroids would cause a similar collisional process in low Earth orbit (LEO), but instead of billions of years, the process would take just decades. The paper concluded that by about the year 2000, the collisions from debris formed by this process would outnumber micrometeorites as the primary ablative risk to orbiting spacecraft.
At the time this did not seem like cause for major concern, as it was widely held that drag from the upper atmosphere would de-orbit the debris faster than it was being created. However, Gabbard was aware that the number of objects in space was under-represented in the NORAD data, and was familiar with the sorts of debris and their behaviour. Shortly after Kessler's paper was published, Gabbard was interviewed on the topic, and he coined the term "Kessler syndrome" to refer to the orbital regions where the debris had become a significant issue. The reporter used the term verbatim, and when it was picked up in a Popular Science article in 1982, the term became widely used. The article won the Aviation/Space Writers Association's 1982 National Journalism Award.
Follow-up studies
A lack of good data about the debris problem prompted a series of studies to better characterize the LEO environment. In October 1979 NASA provided Kessler with additional funding for further studies of the problem. Several approaches were used by these studies.
Optical telescopes or short-wavelength radars were used to more accurately measure the number and size of objects in space. These measurements demonstrated that the published population count was too low by at least 50%. Before this it was believed that the NORAD database was essentially complete and accounted for at least the majority of large objects in orbit. These measurements demonstrated that some objects (typically U.S. military spacecraft) were deliberately eliminated from the NORAD list, while many others were not included because they were considered unimportant, and the list could not easily account for objects under 20 cm (7.9 in) in size. In particular, the debris left over from exploding rocket stages and several 1960s anti-satellite tests were only tracked in a haphazard way with the main database.
Space-flown spacecraft were examined with microscopes to look for tiny impacts. Sections of Skylab and the Apollo CSMs that had been recovered were pitted. Every study demonstrated that the debris flux was much higher than expected, and that the debris was already the primary source of collisions in space. LEO was shown to be subject to the Kessler Syndrome, as originally defined. See also Solar Maximum Mission, the Long Duration Exposure Facility, Space Shuttle missions.
In 1981 Kessler discovered 42% of all cataloged debris was the result of only 19 events, mostly explosions of spent rocket stages, especially U.S. Delta rockets. Kessler made this discovery using Gabbard's methods against known debris fields, which overturned the previously held belief that most unknown debris was from old ASAT tests. The Delta remained a workhorse of the U.S. space program, and there were numerous other Delta components in orbit that had not yet exploded.
A new Kessler Syndrome
Through the 1980s, the US Air Force ran an experimental program to determine what would happen if debris collided with satellites or other debris. The study demonstrated that the process was entirely unlike the micrometeor case, and that many large chunks of debris would be created that would themselves be a collisional threat. This leads to a worrying possibility – instead of the density of debris being a measure of the number of items launched into orbit, it was that number plus any new debris caused when they collided. If the new debris did not decay from orbit before impacting another object, the number of debris items would continue to grow even if there were no new launches.
In 1991 Kessler published a new work using the best data then available. In "Collisional cascading: The limits of population growth in low earth orbit" he mentioned the USAF's conclusions about the creation of debris. Although the vast majority of debris objects by number was lightweight, like paint flecks, the majority of the mass was in heavier debris, about 1 kg (2.2 lb) or heavier. This sort of mass would be enough to destroy any spacecraft on impact, creating more objects in the critical mass area. As the National Academy of Sciences put it:
A 1-kg object impacting at 10 km/s, for example, is probably capable of catastrophically breaking up a 1,000-kg spacecraft if it strikes a high-density element in the spacecraft. In such a breakup, numerous fragments larger than 1 kg would be created.
Kessler's analysis led to the conclusion that the problem could be categorized into three regimes. With a low enough density, the addition of debris through impacts is slower than their rate of decay, and the problem does not become significant. Beyond that is a critical density where additional debris lead to additional collisions. At densities greater than this critical point, the rate of production is greater than decay rates, leading to a "cascade", or chain reaction, that reduces the on-orbit population to small objects on the order of a few cm in size, making any sort of space activity very hazardous. This third condition, the chain reaction, became the new use of the term "Kessler Syndrome".
In a historical overview written in early 2009, Kessler summed up the situation bluntly:
Aggressive space activities without adequate safeguards could significantly shorten the time between collisions and produce an intolerable hazard to future spacecraft. Some of the most environmentally dangerous activities in space include large constellations such as those initially proposed by the Strategic Defense Initiative in the mid-1980s, large structures such as those considered in the late-1970s for building solar power stations in Earth orbit, and anti-satellite warfare using systems tested by the USSR, the U.S., and China over the past 30 years. Such aggressive activities could set up a situation where a single satellite failure could lead to cascading failures of many satellites in a period of time much shorter than years.
Debris growth
Faced with this scenario, as early as the 1980s NASA and other groups within the U.S. attempted to limit the growth of debris. One particularly effective solution was implemented by McDonnell Douglas on the Delta booster: after separation the booster was moved away from its payload and any remaining propellant in the tanks was vented. This eliminated the pressure build-up in the tanks that had caused them to explode in the past. Other countries, however, were not as quick to adopt this sort of measure, and the problem continued to grow throughout the 1980s, especially due to a large number of launches in the Soviet Union.
A new battery of studies followed as NASA, NORAD and others attempted to better understand exactly what the environment was like, and every one of these studies adjusted the number of pieces of debris in this critical-mass zone upward. In 1981, when Schefter's article was published, the count was placed at 5,000 objects, but a new battery of detectors in the Ground-based Electro-Optical Deep Space Surveillance system quickly found new objects within its resolution. By the late 1990s it was thought that the majority of the 28,000 launched objects had already decayed and about 8,500 remained in orbit. By 2005 this had been adjusted upward to 13,000 objects, and a 2006 study raised the number to 19,000 as a result of an ASAT test and a satellite collision. In 2011, NASA said 22,000 different objects were being tracked.
The growth in object count as a result of these new studies has led to intense debate within the space community on the nature of the problem and earlier dire warnings. Following Kessler's 1991 derivation, and updates from 2001, the LEO environment within the 1,000 km (620 mi) altitude range should now be within the cascading region. However, only one major incident has occurred: the 2009 satellite collision between Iridium 33 and Cosmos 2251. The lack of any obvious cascading in the short term has led to a number of complaints that the original estimates overestimated the issue. Kessler has pointed out that the start of a cascade would not be obvious until the situation was well advanced, which might take years.
A 2006 NASA model suggested that even if no new launches took place, the environment would continue to contain the then-known population until about 2055, at which point it would increase on its own. Richard Crowther of Britain's Defence Evaluation and Research Agency stated that he believed the cascade would begin around 2015. The National Academy of Sciences, summarizing the view among professionals, noted that there was widespread agreement that two bands of LEO space, at 900 to 1,000 km (560 to 620 mi) and at 1,500 km (930 mi) altitude, were already past the critical density.
At the 2009 European Air and Space Conference, University of Southampton (UK) researcher Hugh Lewis predicted that the threat from space debris would rise 50 percent in the coming decade and quadruple in the next 50 years. At the time, more than 13,000 close approaches were being tracked weekly.
A 2011 report by the National Research Council in the USA warned NASA that the amount of space debris orbiting the Earth was at a critical level. Some computer models revealed that the amount of space debris "has reached a tipping point, with enough currently in orbit to continually collide and create even more debris, raising the risk of spacecraft failures". The report called for international regulations to limit debris and for research into disposing of it.
Large vs. small
Any discussion of space debris generally divides it into large and small debris. "Large" is defined not so much by size as by the current ability to track objects above some lower size limit. Generally, large is taken to mean 10 cm (3.9 in) across or larger, with typical masses on the order of 1 kg (2.2 lb). Logically, small debris would then be anything smaller than that, but in practice the cutoff is normally 1 cm (0.39 in) or smaller. Debris between these two limits would normally be considered "large" as well, but it goes unmeasured because of our current inability to track it.
The great majority of debris consists of smaller objects, 1 cm (0.39 in) or less. The mid-2009 update to the NASA debris FAQ places the number of large debris items over 10 cm (3.9 in) at 19,000, the number between 1 and 10 cm (0.39 and 3.9 in) at approximately 500,000, and the number of items smaller than 1 cm (0.39 in) in the tens of millions. In terms of mass, however, the vast majority of the overall weight of the debris is concentrated in larger objects: using numbers from 2000, about 1,500 objects weighing more than 100 kg (220 lb) each accounted for over 98% of the 1,900 tons of debris then known in low Earth orbit.
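A quick arithmetic check of the mass figures quoted above (illustrative only, using the numbers as given in the text):

```python
# Rough check of the quoted mass distribution (figures from 2000, as cited above).
total_mass_t = 1900       # total known LEO debris mass, tonnes
heavy_count = 1500        # objects heavier than 100 kg
heavy_share = 0.98        # fraction of the total mass held by those objects

heavy_mass_t = heavy_share * total_mass_t
avg_heavy_kg = heavy_mass_t * 1000 / heavy_count
print(f"Mass in heavy objects: ~{heavy_mass_t:.0f} t, "
      f"i.e. an average of ~{avg_heavy_kg:.0f} kg per object")
# -> ~1862 t in total, or roughly 1.2 t per large object -- consistent with
#    intact spacecraft and spent upper stages dominating the mass budget.
```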
Since space debris comes from man-made objects, the total possible mass of debris is easy to calculate: it is the total mass of all spacecraft and rocket bodies that have reached orbit. The actual mass of debris will be necessarily less than that, as the orbits of some of these objects have since decayed. As debris mass tends to be dominated by larger objects, most of which have long ago been detected, the total mass has remained relatively constant in spite of the addition of many smaller objects. Using the figure of 8,500 known debris items from 2008, the total mass is estimated at 5,500 t (5,400 long tons; 6,100 short tons).
Debris in LEO
Every satellite, space probe and manned mission has the potential to create space debris. Any impact between two objects of sizeable mass can spall off shrapnel debris from the force of collision. Each piece of shrapnel has the potential to cause further damage, creating even more space debris. With a large enough collision (such as one between a space station and a defunct satellite), the amount of cascading debris could be enough to render Low Earth Orbit essentially unusable.
The problem in LEO is compounded by the fact that there are few "universal orbits" that keep spacecraft in particular rings, as opposed to GEO, a single widely used orbit. The closest would be the sun-synchronous orbits that maintain a constant angle between the sun and orbital plane. But LEO satellites are in many different orbital planes providing global coverage, and the 15 orbits per day typical of LEO satellites results in frequent approaches between object pairs. Since sun-synchronous orbits are polar, the polar regions are common crossing points.
After space debris is created, orbital perturbations mean that the orbital plane's direction will change over time, and thus collisions can occur from virtually any direction. Collisions thus usually occur at very high relative velocities, typically several kilometres per second. Such a collision will normally create large numbers of objects in the critical size range, as was the case in the 2009 collision. It is for this reason that the Kessler Syndrome is most commonly applied only to the LEO region. In this region a collision will create debris that will cross other orbits and this population increase leads to the cascade effect.
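The typical encounter speeds can be estimated with a small sketch, assuming two objects on circular orbits at the same altitude whose orbital planes cross at some angle; under that assumption the relative speed at the crossing point is 2·v·sin(θ/2), where θ is the angle between the velocity vectors.

```python
import math

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0     # mean Earth radius, m

def circular_speed(alt_km):
    """Orbital speed on a circular orbit at the given altitude."""
    return math.sqrt(MU / (R_EARTH + alt_km * 1000.0))

v = circular_speed(800)   # ~7.5 km/s at 800 km altitude
for crossing_deg in (10, 60, 100, 180):
    v_rel = 2 * v * math.sin(math.radians(crossing_deg) / 2)
    print(f"planes crossing at {crossing_deg:3d} deg -> relative speed ~{v_rel/1000:.1f} km/s")
# A crossing angle of around 100 degrees already gives ~11 km/s, the same order
# as the relative speed reported for the 2009 Iridium-Cosmos collision.
```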
At the low Earth orbits most commonly used for manned missions, 400 km (250 mi) and below, residual air drag helps keep the zones clear. Collisions that occur below this altitude are less of an issue, since they result in fragment orbits with perigees at or below this altitude. The critical altitude also changes with the space weather environment, which causes the upper atmosphere to expand and contract: an expanded atmosphere increases drag on the fragments, shortening their orbital lifetimes. An expanded atmosphere during part of the 1990s is one reason the orbital debris density remained lower for a while; another was the rapid reduction in launches by Russia, which had conducted the vast majority of launches during the 1970s and 80s.
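How effectively drag cleans out the lowest orbits can be sketched with the standard first-order decay approximation (a semi-major-axis loss of roughly 2π·C_D·(A/m)·ρ·a² per revolution). The fragment properties and the air density below are assumed, representative values only; the real density varies by an order of magnitude with solar activity.

```python
import math

MU = 3.986004418e14
R_EARTH = 6_371_000.0

# Assumed properties of a ~10 cm, ~1 kg fragment near 400 km (illustrative values).
cd = 2.2                  # drag coefficient
area_m2 = 0.01            # cross-sectional area, m^2
mass_kg = 1.0
rho = 3e-12               # air density at ~400 km, kg/m^3 (strongly solar-activity dependent)

a = R_EARTH + 400_000.0
loss_per_orbit_m = 2 * math.pi * cd * (area_m2 / mass_kg) * rho * a ** 2
period_s = 2 * math.pi * math.sqrt(a ** 3 / MU)
loss_per_day_m = loss_per_orbit_m * 86_400 / period_s

print(f"Altitude loss: ~{loss_per_orbit_m:.0f} m per orbit, ~{loss_per_day_m/1000:.2f} km per day")
# With these assumptions the fragment loses a few hundred metres of altitude per
# day, and the loss accelerates as it descends into denser air -- so debris this
# low is removed within a few years, which is why the lowest orbits stay clearer.
```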
Debris at higher altitudes
At higher altitudes, where atmospheric drag is less significant, orbital decay takes much longer. Slight atmospheric drag, lunar perturbations, and solar radiation pressure can gradually bring debris down to lower altitudes where it decays, but at very high altitudes this can take millennia. Thus, while these orbits are generally less used than LEO and the onset of the problem is slower as a result, the numbers progress toward the critical threshold more quickly because hardly any debris is removed by natural decay.
The issue is especially problematic in the valuable geostationary orbits (GEO), where satellites are often clustered over their primary ground "targets" and share the same orbital path. Orbital perturbations are significant in GEO, causing longitude drift of the spacecraft and a precession of the orbit plane if no maneuvers are performed. Active satellites maintain their station via thrusters, but if they become inoperable they become a collision concern (as in the case of Telstar 401). It has been estimated that there is about one close approach (within 50 meters) per year.
On the upside, relative velocities in GEO are low, compared with those between objects in largely random low earth orbits. The impact velocities peak at about 1.5 km/s (0.93 mi/s). This means that the debris field from such a collision is not the same as a LEO collision and does not pose the same sort of risks, at least over the short term. It would, however, almost certainly knock the satellite out of operation. Large-scale structures, like solar power satellites, would be almost certain to suffer major collisions over short periods of time.
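The quoted GEO collision speeds follow from the geometry: geostationary objects all move at about the same speed, so the crossing speed is set mainly by the inclination difference between a drifting derelict and the GEO belt. A small sketch under that assumption (near-circular orbits at the GEO radius):

```python
import math

MU = 3.986004418e14
R_GEO = 42_164_000.0      # geostationary orbit radius, m

v_geo = math.sqrt(MU / R_GEO)     # ~3.07 km/s
for delta_i_deg in (5, 15, 30):
    v_rel = 2 * v_geo * math.sin(math.radians(delta_i_deg) / 2)
    print(f"inclination difference {delta_i_deg:2d} deg -> crossing speed ~{v_rel/1000:.1f} km/s")
# Uncontrolled GEO objects drift up to roughly +/-15 deg in inclination, so
# crossing speeds stay well under LEO collision speeds -- consistent with the
# ~1.5 km/s peak quoted above.
```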
In response, the ITU has placed increasingly strict requirements on the station-keeping ability of new satellites and demands that the owners guarantee their ability to safely move the satellites out of their orbital slots at the end of their lifetime. However, studies have suggested that even the existing ITU requirements are not enough to have a major effect on collision frequency. Additionally, GEO orbit is too distant to make accurate measurements of the existing debris field for objects under 1 m (3 ft 3 in), so the precise nature of the existing problem is not well known. Others have suggested that these satellites be moved to empty spots within GEO, which would require less maneuvering and make it easier to predict future motions. An additional risk is presented by satellites in other orbits, especially those satellites or boosters left stranded in geostationary transfer orbit, which are a concern due to the typically large crossing velocities.
In spite of these efforts at risk reduction, spacecraft collisions and impacts have taken place. The ESA telecommunications satellite Olympus-1 (struck in 1993) and the Russian Express-AM11 communications satellite (struck in 2006) were both lost to impacts while in GEO; these and other impact events are described in more detail under "Threat to unmanned spacecraft" below.
Sources of debris
Dead spacecraft
In 1958 the United States launched Vanguard I into a medium Earth orbit (MEO). It became one of the longest-surviving pieces of space junk, and as of October 2009 it was the oldest piece of junk still in orbit.
In a catalog listing known launches up to July 2009, the Union of Concerned Scientists listed 902 operational satellites. This is out of a known population of 19,000 large objects and about 30,000 objects ever launched. Thus, operational satellites represent a small minority of the population of man-made objects in space. The rest are, by definition, debris.
One particular series of satellites presents an additional concern. During the 1970s and 80s the Soviet Union launched a number of naval surveillance satellites as part of their RORSAT (Radar Ocean Reconnaissance SATellite) program. These satellites were equipped with a BES-5 nuclear reactor in order to provide enough energy to operate their radar systems. The satellites were normally boosted into a medium-altitude graveyard orbit, but there were several failures that resulted in radioactive material reaching the ground (see Kosmos 954 and Kosmos 1402). Even those successfully disposed of now face a debris issue of their own, with a calculated probability of 8% that one will be punctured and release its coolant over any 50-year period. The coolant self-forms into droplets of up to a few centimetres in size, and these represent a significant debris source of their own.
Lost equipment
According to Edward Tufte's book Envisioning Information, space debris has included a glove lost by astronaut Ed White on the first American space-walk (EVA), a camera lost by Michael Collins near the spacecraft Gemini 10, garbage bags jettisoned by Soviet cosmonauts throughout the Mir space station's 15-year life, a wrench, and a toothbrush. Sunita Williams of STS-116 lost a camera during an EVA. During an EVA to reinforce a torn solar panel on STS-120, a pair of pliers was lost, and on STS-126 Heidemarie Stefanyshyn-Piper lost a briefcase-sized tool bag during one of the mission's EVAs.
Lower stages, like the solid rocket boosters of the Space Shuttle, or the Saturn IB stage of the Apollo program era, do not reach orbital velocities and do not add to the mass load in orbit. Upper stages, like the Inertial Upper Stage, start and end their productive lives in orbit. Boosters that remain on orbit are a serious debris problem, and one of the major known impact events was due to an Ariane booster. During the initial attempts to characterize the space debris problem, it became evident that a good proportion of all debris was due to the breaking up of rocket stages. Although NASA and the USAF quickly made efforts to improve the survivability of their boosters, other launchers did not implement similar changes.
One such breakup occurred on 19 February 2007, when a Russian Briz-M booster stage exploded in orbit over South Australia. The booster had been launched on 28 February 2006 carrying an Arabsat-4A communication satellite but malfunctioned before it could use all of its propellant. The explosion was captured on film by several astronomers, but due to the path of the orbit the debris cloud has been hard to quantify using radar. As of 21 February 2007, over 1,000 fragments had been identified. Another break-up occurred on 14 February 2007, as recorded by CelesTrak. Eight break-ups occurred in 2006, the most since 1993.
Another Briz-M broke up on 16 October 2012 after failing on the Proton launch of 6 August. The amount and severity of the debris have yet to be determined.
Debris from and as a weapon
One major source of debris in the past was the testing of anti-satellite weapons carried out by both the U.S. and Soviet Union in the 1960s and '70s. The NORAD element files only contained data for Soviet tests, and it was not until much later that debris from U.S. tests was identified. By the time the problem with debris was understood, widespread ASAT testing had ended. The U.S.'s only active weapon, Program 437, was shut down in 1975.
The U.S. restarted their ASAT programs in the 1980s with the Vought ASM-135 ASAT. A 1985 test destroyed a 1 t (2,200 lb) satellite orbiting at 525 km (326 mi) altitude, creating thousands of pieces of space debris larger than 1 cm (0.39 in). Because it took place at relatively low altitude, atmospheric drag caused the vast majority of the large debris to decay from orbit within a decade. Following the U.S. test in 1985, there was a de facto moratorium on such tests.
China was widely condemned after its 2007 anti-satellite missile test, both for the military implications and for the huge amount of debris it created. This is the largest single space debris incident in history in terms of new objects, estimated to have created more than 2,300 pieces (as of 13 December 2007) of trackable debris (approximately golf-ball size or larger), over 35,000 pieces 1 cm (0.4 in) or larger, and 1 million pieces 1 mm (0.04 in) or larger. The test took place in the part of near-Earth space most densely populated with satellites, as the target satellite orbited between 850 km (530 mi) and 882 km (548 mi). Since atmospheric drag is quite low at that altitude, the debris will take a very long time to decay. In June 2007, NASA's Terra environmental spacecraft was the first to perform a maneuver in order to prevent impacts from this debris.
On 20 February 2008, the U.S. launched an SM-3 missile from the USS Lake Erie specifically to destroy a defective U.S. spy satellite thought to be carrying 1,000 lb (450 kg) of toxic hydrazine propellant. Since the event occurred at about 250 km (155 mi) altitude, all of the resulting debris had a perigee of 250 km (155 mi) or lower. The intercept was deliberately arranged to minimize the amount of debris created, and according to U.S. government sources the debris had decayed from orbit by early 2009.
The vulnerability of satellites to collision with larger debris, and the relative ease of launching such an attack against a low-flying satellite, have led some to speculate that such an attack would be within the capabilities of countries unable to mount a precision attack of the kind developed for former U.S. or Soviet systems. Such an attack against a large satellite of 10 tonnes or more would cause enormous damage to the LEO environment.
Operational aspects
Threat to unmanned spacecraft
Spacecraft in a debris field are subject to constant wear from impacts with small debris. Critical areas of a spacecraft are normally protected by Whipple shields, which eliminate most of this damage. However, low-mass impacts directly affect the lifetime of a space mission if the spacecraft is powered by solar panels. These panels are difficult to protect because their front face has to be directly exposed to the sun, and as a result they are often punctured by debris. When hit, panels tend to produce a cloud of gas and fine particles that, compared with larger debris, does not present as much of a risk to other spacecraft. This gas is generally a plasma when created and consequently presents an electrical risk to the panels themselves.
The effect of the many impacts with smaller debris was particularly notable on Mir, the Soviet space station, as it remained in space for long periods of time with the panels originally launched on its various modules.
Impacts with larger debris normally destroy the spacecraft. To date there have been several known and suspected impact events. The earliest on record was the loss of Kosmos 1275, which disappeared on 24 July 1981 only a month after launch. Tracking showed it had suffered some sort of breakup, with the creation of 300 new objects. Kosmos 1275 did not contain any volatiles and is widely assumed to have suffered a collision with a small object. However, proof is lacking, and an electrical battery explosion has been offered as a possible alternative explanation. Kosmos 1484 suffered a similar mysterious breakup on 18 October 1993.
Several confirmed impact events have taken place since then. Olympus-1 was struck by a meteoroid on 11 August 1993 and left adrift. On 24 July 1996, the French microsatellite Cerise was hit by fragments of an Ariane-1 H-10 upper-stage booster that had exploded in November 1986. On 29 March 2006 the Russian Express-AM11 communications satellite was struck by an unknown object, which rendered it inoperable; luckily, its engineers had enough time in contact with the spacecraft to send it into a parking orbit out of GEO.
The first major space debris collision occurred on 10 February 2009 at 16:56 UTC. The deactivated 950 kg (2,100 lb) Kosmos 2251 and the operational 560 kg (1,200 lb) Iridium 33 collided 500 mi (800 km) over northern Siberia. The relative speed of impact was about 11.7 km/s (7.3 mi/s), or approximately 42,120 km/h (26,170 mph). Both satellites were destroyed, and the collision scattered considerable debris, which poses an elevated risk to spacecraft. The collision created a large debris cloud, although accurate estimates of the number of pieces are not yet available.
On 22 January 2013, a Russian laser-ranging satellite was hit by a piece of debris suspected to be from the Chinese ASAT test of 2007. Both its orbit and its spin rate were changed.
In a Kessler Syndrome cascade, satellite lifetimes would be measured on the order of years or months. New satellites could be launched through the debris field into higher orbits or placed in lower ones where natural decay processes remove the debris, but it is precisely because of the utility of the orbits between 800 and 1,500 km (500 and 930 mi) that this region is so filled with debris.
Threat to manned spacecraft
From the earliest days of the Space Shuttle missions, NASA has turned to NORAD's database to constantly monitor the orbital path in front of the Shuttle and to find and avoid any known debris. During the 1980s, these simulations used up a considerable amount of the NORAD tracking system's capacity. The first official Space Shuttle collision avoidance maneuver came during STS-48 in September 1991, when a 7-second reaction control system burn was performed to avoid debris from the Cosmos 955 satellite. Similar maneuvers followed on missions 53, 72 and 82.
One of the first events to widely publicize the debris problem was Space Shuttle Challenger's second flight on STS-7. A small fleck of paint impacted Challenger's front window and created a pit over 1 mm (0.04 in) wide. Endeavour suffered a similar impact on STS-59 in 1994, but this one pitted the window for about half its depth: a cause for much greater concern. Post-flight examinations have noted a marked increase in the number of minor debris impacts since 1998.
The damage due to smaller debris has now grown to become a significant problem in its own right. Chipping of the windows became common by the 1990s, along with minor damage to the thermal protection system tiles (TPS). To mitigate the impact of these events, once the Shuttle reached orbit it was deliberately flown tail first in an attempt to intercept as much of the debris load as possible on the engines and rear cargo bay. These were not used on orbit or during descent and thus were less critical to operations after launch. When flown to the ISS, the Shuttle was placed where the station provided as much protection as possible.
The sudden increase in debris load led to a re-evaluation of the debris issue, and a catastrophic impact with large debris came to be considered the primary threat to Shuttle operations on every mission. Mission planning required a thorough discussion of debris risk, with an executive-level decision needed to proceed if the risk of destroying the Shuttle was greater than 1 in 200. On a normal low-orbit mission to the ISS the risk was estimated at 1 in 300, but the STS-125 mission to repair the Hubble Space Telescope at 350 mi (560 km) was initially calculated at a 1-in-185 risk due to the 2009 satellite collision, which threatened cancellation of the mission. However, a re-analysis as better debris numbers became available reduced this to 1 in 221, and the mission was allowed to proceed.
In spite of their best efforts, however, there have been two serious debris incidents on more recent Shuttle missions. In 2006, Atlantis was hit by a small fragment of a circuit board during STS-115, which bored a small hole through the radiator panels in the cargo bay (the large gold coloured objects visible when the doors are open). A similar incident followed on STS-118 in 2007, when Endeavour was hit in a similar location by unknown debris which blew a hole several centimetres in diameter through the panel.
The International Space Station (ISS) uses extensive Whipple shielding to protect itself from minor debris threats. However, large portions of the ISS cannot be protected, notably its large solar panels. In 1989 it was predicted that the International Space Station's panels would suffer about 0.23% degradation over four years, which was dealt with by overdesigning the panel by 1%. New figures based on the increase in collisions since 1998 are not available.
As with the Shuttle, the only protection against larger debris is avoidance. On two occasions the crew have been forced to abandon work and take refuge in the Soyuz capsule while a threat passed. One of these close calls is a good example of the potential Kessler Syndrome: the debris is believed to have been a small, 10 cm (3.9 in) portion of the former Kosmos 1275, the satellite considered to be the first example of an on-orbit impact with debris.
If the Kessler Syndrome comes to pass, the threat to manned missions may be too great to contemplate operations in LEO. Although the majority of manned space activities take place at altitudes below the critical 800 to 1,500 km (500 to 930 mi) region, a cascade within that region would result in a constant rain of debris down into the lower altitudes as well. The time scale of their decay is such that "the resulting debris environment is likely to be too hostile for future space use."
Hazard on Earth
Although most debris will burn up in the atmosphere, larger objects can reach the ground intact and present a risk.
The original re-entry plan for Skylab called for the station to remain in space for 8 to 10 years after its final mission in February 1974. Unexpectedly high solar activity expanded the upper atmosphere, resulting in higher-than-expected drag on the space station and bringing its orbit closer to Earth than planned. On 11 July 1979, Skylab re-entered the Earth's atmosphere and disintegrated, raining debris harmlessly along a path extending over the southern Indian Ocean and sparsely populated areas of Western Australia.
On 12 January 2001, a Star 48 Payload Assist Module (PAM-D) rocket upper stage re-entered the atmosphere after a "catastrophic orbital decay". The PAM-D stage crashed in the sparsely populated Saudi Arabian desert. It was positively identified as the upper-stage rocket for NAVSTAR 32, a GPS satellite launched in 1993.
The Columbia disaster in 2003 demonstrated this risk, as large portions of the spacecraft reached the ground. In some cases entire equipment systems were left intact. NASA continues to warn people to avoid contact with the debris due to the possible presence of hazardous chemicals.
On 27 March 2007, wreckage from a Russian spy satellite was spotted by the crew of a LAN Airlines (formerly Lan Chile) Airbus A340 carrying 270 passengers between Santiago, Chile, and Auckland, New Zealand. The pilot estimated the debris was within 8 km of the aircraft, and he reported hearing the sonic boom as it passed. The aircraft was flying over the Pacific Ocean, which is considered one of the safest places in the world for a satellite to come down because of its large areas of uninhabited water.
In 1969, five sailors on a Japanese ship were injured by space debris, probably of Russian origin. In 1997 an Oklahoma woman named Lottie Williams was hit in the shoulder by a 10 cm × 13 cm (3.9 in × 5.1 in) piece of blackened, woven metallic material that was later confirmed to be part of the propellant tank of a Delta II rocket which had launched a U.S. Air Force satellite in 1996. She was not injured.
Tracking and measurement
Tracking from the ground
Radar and optical detectors such as lidar are the main tools used for tracking space debris. However, determining orbits precisely enough to allow reliable re-acquisition is problematic. Tracking objects smaller than 10 cm (4 in) is difficult because of their small cross-section and reduced orbital stability, though debris as small as 1 cm (0.4 in) can be tracked. The NASA Orbital Debris Observatory tracked space debris using a 3 m (10 ft) liquid-mirror transit telescope.
The U.S. Strategic Command maintains a catalogue containing known orbital objects. The list was initially compiled in part to prevent misinterpretation as hostile missiles. The version compiled in 2009 listed about 19,000 objects. Observation data gathered by a number of ground-based radar facilities and telescopes as well as by a space-based telescope is used to maintain this catalogue. Nevertheless, the majority of expected debris objects remain unobserved – there are more than 600,000 objects larger than 1 cm (0.4 in) in orbit (according to the ESA Meteoroid and Space Debris Terrestrial Environment Reference, the MASTER-2005 model).
Other sources of knowledge on the actual space debris environment include measurement campaigns by the ESA Space Debris Telescope, the TIRA system, the Goldstone and Haystack radars, the EISCAT radars, and the Cobra Dane phased-array radar. The data gathered during these campaigns are used to validate models of the debris environment such as ESA-MASTER. Such models are the only means of assessing the impact risk caused by space debris, as only larger objects can be regularly tracked.
Measurement in space
Returned space debris hardware is a valuable source of information on the (sub-millimetre) space debris environment. The LDEF satellite deployed by STS-41-C Challenger and retrieved by STS-32 Columbia spent 68 months in orbit. Close examination of its surfaces allowed an analysis of the directional distribution and composition of the debris flux. The EURECA satellite deployed by STS-46 Atlantis in 1992 and retrieved by STS-57 Endeavour in 1993 was similarly used for debris studies.
The solar arrays of the Hubble Space Telescope returned during missions STS-61 Endeavour and STS-109 Columbia are an important source of information on the debris environment. The impact craters found on the surface were counted and classified by ESA to provide a means for validating debris environment models. Similar materials returned from Mir were extensively studied, notably the Mir Environmental Effects Payload which studied the environment in the Mir area.
Gabbard diagrams
Space debris groups resulting from satellite breakups are often studied using scatter plots known as Gabbard diagrams. In a Gabbard diagram, the perigee and apogee altitudes of the individual debris fragments resulting from a collision are plotted with respect to the orbital period of each fragment. The distribution can be used to infer information such as direction and point of impact.
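A Gabbard diagram is straightforward to construct once orbital elements for the fragments are available. The sketch below uses made-up fragment elements purely for illustration and assumes matplotlib is available for plotting.

```python
import math
import matplotlib.pyplot as plt

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0     # mean Earth radius, m

def gabbard_point(a_km, e):
    """Return (orbital period in minutes, perigee altitude km, apogee altitude km)."""
    a = a_km * 1000.0
    period_min = 2 * math.pi * math.sqrt(a ** 3 / MU) / 60.0
    perigee_km = (a * (1 - e) - R_EARTH) / 1000.0
    apogee_km = (a * (1 + e) - R_EARTH) / 1000.0
    return period_min, perigee_km, apogee_km

# Hypothetical fragment elements (semi-major axis in km, eccentricity) -- not real data.
fragments = [(7171, 0.001), (7250, 0.015), (7050, 0.010), (7400, 0.030), (6950, 0.005)]

periods, perigees, apogees = zip(*(gabbard_point(a, e) for a, e in fragments))
plt.scatter(periods, perigees, marker="v", label="perigee")
plt.scatter(periods, apogees, marker="^", label="apogee")
plt.xlabel("orbital period (min)")
plt.ylabel("altitude (km)")
plt.legend()
plt.title("Gabbard diagram (synthetic fragments)")
plt.show()
```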
Dealing with debris
Man-made space debris has been dropping out of orbit at an average rate of about one object per day for the past 50 years. Substantial variation in the average rate occurs as a result of the 11-year solar activity cycle, averaging closer to three objects per day at solar maximum due to the heating, and resultant expansion, of the Earth's atmosphere. At solar minimum, five and one-half years later, the rate averages about one object every three days.
Growth mitigation
In order to reduce future space debris, various ideas have been proposed. The passivation of spent upper stages by the release of residual propellants is aimed at reducing the risk of on-orbit explosions that could generate thousands of additional debris objects. The propellant-venting modification of the Delta boosters described above, introduced when the debris problem was first becoming apparent, essentially eliminated their further contribution to the problem.
There is no international treaty mandating behaviour to minimize space debris, but the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS) did publish voluntary guidelines in 2007. As of 2008, the committee is discussing international "rules of the road" to prevent collisions between satellites. NASA has implemented its own procedures for limiting debris production as have some other space agencies, such as the European Space Agency. Starting in 2007, the ISO has been preparing a new standard dealing with space debris mitigation.
One idea that has been proposed is a "one-up/one-down" launch-license policy for Earth orbits. Launch vehicle operators would have to pay the cost of debris mitigation, building into their launch vehicle the capability (robotic capture, navigation, mission-duration extension, and substantial additional propellant) to rendezvous with, capture, and deorbit an existing derelict satellite from approximately the same orbital plane.
It is an ITU requirement that geostationary satellites be able to remove themselves to a graveyard orbit at the end of their lives. It has been demonstrated that the selected orbital areas do not sufficiently protect GEO lanes from debris, although a response has not yet been formulated.
Rocket stages or satellites that retain enough propellant can power themselves into a decaying orbit. In cases when a direct (and controlled) de-orbit would require too much propellant, a satellite can be brought to an orbit where atmospheric drag would cause it to de-orbit after some years. Such a manoeuvre was successfully performed with the French Spot-1 satellite, bringing its time to atmospheric re-entry down from a projected 200 years to about 15 years by lowering its perigee from 830 km (516 mi) to about 550 km (342 mi).
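The propellant cost of this kind of disposal manoeuvre can be estimated with the vis-viva equation. The sketch below assumes a single impulsive retrograde burn from an initially circular 830 km orbit; the real Spot-1 operation used a series of burns, so this is only an order-of-magnitude check.

```python
import math

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0     # mean Earth radius, m

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

def perigee_lowering_dv(start_alt_km, new_perigee_km):
    """Single retrograde burn from a circular orbit to lower the perigee."""
    r_start = R_EARTH + start_alt_km * 1000.0
    r_perigee = R_EARTH + new_perigee_km * 1000.0
    a_new = (r_start + r_perigee) / 2.0
    return vis_viva(r_start, r_start) - vis_viva(r_start, a_new)

print(f"830 km -> 550 km perigee: ~{perigee_lowering_dv(830, 550):.0f} m/s")
print(f"830 km -> 60 km perigee (immediate re-entry): ~{perigee_lowering_dv(830, 60):.0f} m/s")
# ~75 m/s is enough for the Spot-1-style disposal orbit, versus roughly 215 m/s
# for a fully controlled immediate de-orbit -- hence the propellant saving.
```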
Instead of using rockets, an electrodynamic tether can be attached to the spacecraft at launch; at the end of its lifetime the tether is rolled out and slows down the spacecraft. Although tethers of up to 30 km have been successfully deployed in orbit, the technology has not yet reached maturity. It has been proposed that booster stages include a sail-like attachment to the same end.
External removal
A well-studied solution is to use a remotely controlled vehicle to rendezvous with debris, capture it, and return it to a central station. The commercially developed MDA Space Infrastructure Servicing vehicle is a refuelling depot and service spacecraft for communication satellites in geosynchronous orbit, slated for launch in 2015. The SIS includes the capability to "push dead satellites into graveyard orbits." The Advanced Common Evolved Stage family of upper stages is being explicitly designed with the potential for high leftover-propellant margins, so that derelict capture and deorbit might be accomplished, and with in-space refuelling capability that could provide the high delta-v required to deorbit even heavy objects from geosynchronous orbits.
The laser broom uses a powerful ground-based laser to ablate the front surface off debris and thereby produce a rocket-like thrust that slows the object. With continued application, the debris eventually loses enough altitude to become subject to atmospheric drag. In the late 1990s, the U.S. Air Force worked on a ground-based laser broom design under the name "Project Orion". Although a test-bed device was scheduled to launch on a 2003 Space Shuttle flight, international agreements forbidding the testing of powerful lasers in orbit limited the program to using the laser as a measurement device. In the end, the Space Shuttle Columbia disaster led to the project being postponed and, as Nicholas Johnson, Chief Scientist and Program Manager for NASA's Orbital Debris Program Office, later noted, "There are lots of little gotchas in the Orion final report. There's a reason why it's been sitting on the shelf for more than a decade."
Additionally, the momentum of the photons in the laser beam could be used to impart thrust on the debris directly. Although this thrust would be tiny, it may be enough to move small debris into new orbits that do not intersect those of working satellites. NASA research from 2011 indicates that firing a laser beam at a piece of space junk could impart a velocity change of about 1.0 mm (0.04 in) per second, and keeping the laser on the debris for a few hours per day could alter its course by 650 ft (200 m) per day. One drawback to these methods is the potential for material degradation: the impinging energy may break apart the debris, adding to the problem. A similar proposal replaces the laser with a beam of ions.
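The scale of photon-pressure nudging can be sketched from first principles (a radiation-pressure force of at most 2P/c for a perfectly reflecting target). The power actually delivered to the fragment, its mass, the illumination time, and the assumption that the push acts along the velocity direction are all illustrative choices here, not the parameters of the NASA study.

```python
import math

C = 299_792_458.0         # speed of light, m/s
MU = 3.986004418e14
R_EARTH = 6_371_000.0

# Illustrative assumptions only -- not the parameters of the NASA study.
power_on_target_w = 10.0  # laser power actually intercepted after beam spreading
mass_kg = 1.0             # fragment mass
illum_s = 2 * 3600        # a couple of hours of illumination per day

force_n = 2 * power_on_target_w / C            # upper bound: perfectly reflecting target
delta_v = force_n / mass_kg * illum_s          # along-track velocity change per day

# Along-track drift that this velocity change produces over the following day.
a = R_EARTH + 800_000.0
n = math.sqrt(MU / a ** 3)                     # mean motion, rad/s
delta_a = 2 * delta_v / n                      # change in semi-major axis
drift_per_day = 3 * math.pi * delta_a * (86_400 * n / (2 * math.pi))

print(f"velocity change ~{delta_v*1000:.1f} mm/s per day, "
      f"along-track drift ~{drift_per_day:.0f} m per day")
# With these assumptions the numbers come out in the same range as the figures
# quoted above: a sub-millimetre-per-second push moves the fragment by on the
# order of a hundred metres per day along its track.
```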
A number of other proposals use more novel approaches, ranging from foamy balls of aerogel or sprays of water to inflatable balloons, electrodynamic tethers, electroadhesion booms, and dedicated "interceptor satellites". On 7 January 2010, Star Inc. announced that it had won a contract from Navy/SPAWAR for a feasibility study of the application of the ElectroDynamic Debris Eliminator (EDDE). In February 2012, the Swiss Space Center at École Polytechnique Fédérale de Lausanne announced the CleanSpace One project, a nanosat demonstration project for matching orbits with a defunct Swiss nanosat, capturing it, and deorbiting together.
As of 2006, the cost of launching any of these solutions is about the same as launching any spacecraft, and Johnson has stated that none of the existing solutions are currently cost-effective. Since that statement was made, a new approach has emerged: Space Sweeper with Sling-Sat (4S), a grappling satellite concept that sequentially captures and ejects debris, using the momentum from these interactions as a free impulse to transfer the craft between targets. Thus far, 4S appears to be a promising solution.
A consensus of speakers at a meeting held in Brussels on 30 October 2012, organized by the Secure World Foundation (a US think tank) and the French Institute of International Relations, reported that active removal of the most massive pieces of debris will be required to prevent the risk to spacecraft, crewed or not, from becoming unacceptable in the foreseeable future, even without any further additions to the current inventory of dead spacecraft in LEO. However, removal costs, together with legal questions surrounding ownership rights and the authority to remove even defunct satellites, have stymied decisive national or international action to date, and as yet no firm plans exist for addressing the problem. Current space law retains ownership of all satellites with their original operators, even debris or spacecraft that are defunct or threaten currently active missions.
Debris producing events
See also
Wikinews has related news: Out of space in outer space: Special report on NASA's 'space junk' plans
- Derelict satellites
- Near-Earth object
- Liability Convention
- Orbital Debris Co-ordination Working Group
- Planetes, a critically and scientifically acclaimed Japanese Manga and Anime series exploring the concern with orbital debris and its impact on space development in the future of mankind's expansion into space.
- Spacecraft cemetery
- List of large reentering space debris
- The Threat of Orbital Debris and Protecting NASA Space Assets from Satellite Collisions (2009)
- Lisa Grossman, "NASA Considers Shooting Space Junk With Lasers", wired, 15 March 2011.
- Fred Whipple, "The Theory of Micro-Meteorites, Part I: In an Isothermal Atmosphere", Proceedings of the National Academy of Sciences, Volume 36 Number 12 (15 December 1950), pp. 667 – 695.
- Fred Whipple, "The Theory of Micrometeorites.", Popular Astronomy, Volume 57, 1949, p. 517.
- Whipple, Fred. "A Comet Model. II. Physical Relations for Comets and Meteors", Astrophysical Journal, Volume 113 (1951), pp. 464–474.
- D. E. Brownlee, D. A. Tomandl and E. Olszewski. "1977LPI.....8..145B Interplanetary dust: A new source of extraterrestrial material for laboratory studies", Proceedings of the 8th Lunar Scientific Conference, 1977, pp. 149–160.
- Hans Pettersson, "Cosmic Spherules and Meteoritic Dust." Scientific American, Volume 202 Issue 2 (February 1960), pp. 123–132.
- Andrew Snelling and David Rush, "Moon Dust and the Age of the Solar System" Creation Ex-Nihilo Technical Journal, Volume 7 Number 1 (1993), p. 2–42.
- Brian Marsden, "Professor Fred Whipple: Astronomer who developed the idea that comets are 'dirty snowballs'." The Independent, 13 November 2004.
- Fred Whipple, "Of Comets and Meteors" Science, Volume 289 Number 5480 (4 August 2000), p. 728.
- Judith Reustle (curator), "Shield Development: Basic Concepts", NASA HVIT. Retrieved 20 July 2011.
- Ceramic Fabric Offers Space Age Protection, 1994 Hypervelocity Impact Symposium
- Kim Dismukes (curator), "TransHab Concept", NASA, 27 June 2003. Retrieved 10 June 2007.
- Donald Kessler, "Upper Limit on the Spatial Density of Asteroidal Debris" AIAA Journal, Volume 6 Number 12 (December 1968), p. 2450.
- Kessler 1971
- Ernst Öpik, "Collision probabilities with the planets and the distribution of interplanetary matter", Proceedings of the Royal Irish Academy of Sciences, Volume 54A (1951), pp. 165 – 199.
- G. W. Wetherill, "Collisions in the Asteroid Belt", Journal of Geophysical Research, Volume 72 Number 9 (1967), pp. 2429 – 2444
- Donald Kessler, "Derivation of the Collision Probability between Orbiting Objects: The Lifetimes of Jupiter's Outer Moons", Icarus, Volume 48 (1981), pp. 39 – 48.
- Felix Hoots, Paul Schumacher Jr. and Robert Glover, "History of Analytical Orbit Modeling in the U.S. Space Surveillance System." Journal of Guidance Control and Dynamics, Volume 27 Issue 2, pp. 174 – 185.
- T.S. Kelso, CelesTrak BBS: Historical Archives, 2-line elements dating to 1980
- Schefter, p. 48.
- David Portree and Joseph Loftus. "Orbital Debris: A Chronology", NASA, 1999, p. 13.
- Kessler 1978
- Kessler 2009
- Kessler 1991, p. 65.
- Kessler 1981
- Kessler 1991, p. 63.
- Technical, p. 4
- Schefter, p. 50.
- See charts, Hoffman p. 7.
- See chart, Hoffman p. 4.
- In the time between writing Chapter 1 (earlier) and the Prolog (later) of Space Debris, Klinkrad changed the number from 8,500 to 13,000 – compare p. 6 and ix.
- Michael Hoffman, "It's getting crowded up there." Space News, 3 April 2009.
- "Space Junk Threat Will Grow for Astronauts and Satellites", Fox News, 6 April 2011.
- Kessler 2001
- Jan Stupl et al, "Debris-debris collision avoidance using medium power ground-based lasers", 2010 Beijing Orbital Debris Mitigation Workshop, 18–19 October 2010, see graph p. 4
- J.-C Liou and N. L. Johnson, "Risks in Space from Orbiting Debris", Science, Volume 311 Number 5759 (20 January 2006), pp. 340 – 341
- Stefan Lovgren, "Space Junk Cleanup Needed, NASA Experts Warn." National Geographic News, 19 January 2006.
- Antony Milne, Sky Static: The Space Debris Crisis, Greenwood Publishing Group, 2002, ISBN 0-275-97749-8, p. 86.
- Technical, p. 7.
- Paul Marks, "Space debris threat to future launches", 27 October 2009.
- Space junk at tipping point, says report, BBC News, 2 September
- "Technical report on space debris", United Nations, New York, 1999.
- "Orbital Debris FAQ: How much orbital debris is currently in Earth orbit?" NASA, July 2009. Retrieved 11 July 2011
- Joseph Carroll, "Space Transport Development Using Orbital Debris", NASA Institute for Advanced Concepts, 2 December 2002, p. 3.
- Robin McKie and Michael Day, "Warning of catastrophe from mass of 'space junk'" The Observer, 24 February 2008.
- Matt Ford, "Orbiting space junk heightens risk of satellite catastrophes." Ars Technica, 27 February 2009.
- "What are hypervelocity impacts?" ESA, 19 February 2009.
- Klinkrad, p. 7.
- Kessler 1991, p. 268.
- "Colocation Strategy and Collision Avoidance for the Geostationary Satellites at 19 Degrees West." CNES Symposium on Space Dynamics, 6–10 November 1989.
- J. C. van der Ha and M. Hechler, "The Collision Probability of Geostationary Satellites" 32nd International Astronautical Congress, 1981, p. 23.
- L. Anselmo and C. Pardini, "Collision Risk Mitigation in Geostationary Orbit", Space Debris, Volume 2 Number 2 (June 2000), pp. 67 – 82. doi:10.1023/A:1021255523174
- Orbital debris, p. 86.
- Orbital debris, p. 152.
- "The Olympus failure" ESA press release, 26 August 1993.
- "Notification for Express-AM11 satellite users in connection with the spacecraft failure" Russian Satellite Communications Company, 19 April 2006.
- Julian Smith, "Space Junk"[dead link] USA Weekend, 26 August 2007.
- "UCS Satellite Database" Union of Concerned Scientists, 16 July 2009.
- C. Wiedemann et al, "Size distribution of NaK droplets for MASTER-2009", Proceedings of the 5th European Conference on Space Debris, 30 March-2 April 2009, (ESA SP-672, July 2009).
- A. Rossi et al, "Effects of the RORSAT NaK Drops on the Long Term Evolution of the Space Debris Population"[dead link], University of Pisa, 1997.
- See image here.
- In some cases they return to the ground intact, see this list for examples.
- Phillip Anz-Meador and Mark Matney, "An assessment of the NASA explosion fragmentation model to 1 mm characteristic sizes" Advances in Space Research, Volume 34 Issue 5 (2004), pp. 987 – 992.
- "Debris from explosion of Chinese rocket detected by University of Chicago satellite instrument", University of Chicago press release, 10 August 2000.
- "Rocket Explosion", Spaceweather.com, 22 February 2007. Retrieved 21 February 2007.
- Ker Than, "Rocket Explodes Over Australia, Showers Space with Debris" Space.com, 21 February 2007. Retrieved 21 February 2007.
- "Recent Debris Events" celestrak.com, 16 March 2007. Retrieved 14 July 2001.
- Jeff Hecht, "Spate of rocket breakups creates new space junk", NewScientist, 17 January 2007. Retrieved 16 March 2007.
- "Proton Launch Failure 2012 Aug 6". Zarya. 21 October 2012. Retrieved 21 October 2012.
- Clayton Chun, "Shooting Down a Star: America's Thor Program 437, Nuclear ASAT, and Copycat Killers", Maxwell AFB Base, AL: Air University Press, 1999. ISBN 1-58566-071-X.
- David Wright, "Debris in Brief: Space Debris from Anti-Satellite Weapons" Union of Concerned Scientists, December 2007.
- Leonard David, "China's Anti-Satellite Test: Worrisome Debris Cloud Circles Earth" space.com, 2 February 2007.
- "Fengyun 1C – Orbit Data" Heavens Above.
- Brian Burger, "NASA's Terra Satellite Moved to Avoid Chinese ASAT Debris", space.com. Retrieved 6 July 2007.
- "Pentagon: Missile Scored Direct Hit on Satellite.", npr.org, 21 February 2008.
- Jim Wolf, "US satellite shootdown debris said gone from space", Reuters, 27 February 2008.
- Y. Akahoshi et al. "Influence of space debris impact on solar array under power generation." International Journal of Impact Engineering, Volume 35, Issue 12, December 2008, pp 1678–1682. doi:10.1016/j.ijimpeng.2008.07.048
- V.M. Smirnov et al, "Study of Micrometeoroid and Orbital Debris Effects on the Solar Panels Retrieved from the Space Station 'MIR'", Space Debris, Volume 2 Number 1 (March 2000), pp. 1 – 7. doi:10.1023/A:1015607813420
- "Orbital Debris FAQ: How did the Mir space station fare during its 15-year stay in Earth orbit?", NASA, July 2009.
- Phillip Clark, "Space Debris Incidents Involving Soviet/Russian Launches", Molniya Space Consultancy, friends-partners.org.
- Becky Iannotta and Tariq Malik, "U.S. Satellite Destroyed in Space Collision", space.com, 11 February 2009
- Paul Marks, "Satellite collision 'more powerful than China's ASAT test", New Scientist, 13 February 2009.
- Becky Iannotta, "U.S. Satellite Destroyed in Space Collision", space.com, 11 February 2009. Retrieved 11 February 2009.
- "2 big satellites collide 500 miles over Siberia." yahoo.com, 11 February 2009. Retrieved 11 February 2009.
- Leonard David. "Russian Satellite Hit by Debris from Chinese Anti-Satellite Test". space.com.
- Rob Matson, "Satellite Encounters" Visual Satellite Observer's Home Page.
- "STS-48 Space Shuttle Mission Report", NASA, NASA-CR-193060, October 1991.
- Christiansen, E. L., J. L. Hydeb and R. P. Bernhard. "Space Shuttle debris and meteoroid impacts." Advances in Space Research, Volume 34 Issue 5 (May 2004), pp. 1097–1103. doi:10.1016/j.asr.2003.12.008
- Kelly, John. "Debris is Shuttle's Biggest Threat", space.com, 5 March 2005.
- "Debris Danger." Aviation Week & Space Technology, Volume 169 Number 10 (15 September 2008), p. 18.
- William Harwood, "Improved odds ease NASA's concerns about space debris", CBS News, 16 April 2009.
- D. Lear et al, "Investigation of Shuttle Radiator Micro-Meteoroid & Orbital Debris Damage", Proceedings of the 50th Structures, Structural Dynamics, and Materials Conference, 4–7 May 2009, AIAA 2009–2361.
- D. Lear, et al, "STS-118 Radiator Impact Damage", NASA
- K Thoma et al, "New Protection Concepts for Meteoroid / Debris Shields", Proceedings of the 4th European Conference on Space Debris (ESA SP-587), 18–20 April 2005, p. 445.
- Henry Nahra, "Effect of Micrometeoroid and Space Debris Impacts on the Space Station Freedom Solar Array Surfaces" Presented at the 1989 Spring Meeting of the Materials Research Society, 24–29 April 1989, NASA TR-102287.
- "Junk alert for space station crew", BBC News, 12 March 2009.
- "International Space Station in debris scare", BBC News, 28 June 2011.
- Haines, Lester. "ISS spared space junk avoidance manoeuvre", The Register, 17 March 2009.
- Bechara J. Saab, "Planet Earth, Space Debris", Hypothesis Volume 7 Issue 1 (September 2009).
- "NASA – Part I – The History of Skylab." NASA's Marshall Space Flight Center and Kennedy Space Center, 16 March 2009.
- "NASA – John F. Kennedy Space Center Story." NASA Kennedy Space Center, 16 March 2009.
- "PAM-D Debris Falls in Saudi Arabia", The Orbital Debris Quarterly News, Volume 6 Issue 2 (April 2001).
- "Debris Photos" NASA.
- "Debris Warning" NASA.
- Jano Gibson, "Jet's flaming space junk scare", The Sydney Morning Herald, 28 March 2007.
- "Space junk falls around airliner", AFP, 28 March 2007
- U.S. Congress, Office of Technology Assessment, "Orbiting Debris: A Space Environmental Problem", Background Paper, OTA-BP-ISC-72, U.S. Government Printing Office, September 1990, p. 3
- "Today in Science History" todayinsci.com. Retrieved 8 March 2006.
- Tony Long, "Jan. 22, 1997: Heads Up, Lottie! It's Space Junk!", wired, 22 January 2009.
- D. Mehrholz et al, "Detecting, Tracking and Imaging Space Debris", ESA bulletin 109, February 2002.
- Ben Greene, "Laser Tracking of Space Debris", Electro Optic Systems Pty
- "Orbital debris: Optical Measurements", NASA Orbital Debris Program Office
- Grant Stokes et al, "The Space-Based Visible Program", MIT Lincoln Laboratory. Retrieved 8 March 2006.
- H. Klinkrad, "Monitoring Space – Efforts Made by European Countries", fas.org. Retrieved 8 March 2006.
- "MIT Haystack Observatory" haystack.mit.edu. Retrieved 8 March 2006.
- "AN/FPS-108 COBRA DANE." fas.org. Retrieved 8 March 2006.
- Darius Nikanpour, "Space Debris Mitigation Technologies", Proceedings of the Space Debris Congress, 7–9 May 2009.
- MEEP, NASA, 4 April 2002. Retrieved 8 July 2011
- "STS-76 Mir Environmental Effects Payload (MEEP)", NASA, March 1996. Retrieved 8 March 2011.
- David Whitlock, "History of On-Orbit Satellite Fragmentations", NASA JSC, 2004 Note: "Gabbard diagrams of the early debris cloud prior to the effects of perturbations, if the data were available, are reconstructed. These diagrams often include uncataloged as well as cataloged debris data. When used correctly, Gabbard diagrams can provide important insights into the features of the fragmentation."
- Johnson, Nicholas (5 December 2011). "Space debris issues". audio file, @0:05:50-0:07:40. The Space Show. Retrieved 8 December 2011.
- "USA Space Debris Envinronment, Operations, and Policy Updates". NASA. UNOOSA. Retrieved 1 October 2011.
- Johnson, Nicholas (5 December 2011). "Space debris issues". audio file, @1:03:05-1:06:20. The Space Show. Retrieved 8 December 2011.
- "UN Space Debris Mitigation Guidelines", UN Office for Outer Space Affairs, 2010.
- Theresa Hitchens, "COPUOS Wades into the Next Great Space Debate", The Bulletin of the Atomic Scientists, 26 June 2008.
- "Orbital Debris – Important Reference Documents.", NASA Orbital Debris Program Office.
- E A Taylor and J R Davey, "Implementation of debris mitigation using International Organization for Standardization (ISO) standards", Proceedings of the Institution of Mechanical Engineers: G, Volume 221 Number 8 (1 June 2007), pp. 987 – 996.
- Frank Zegler and Bernard Kutter, "Evolving to a Depot-Based Space Transportation Architecture", AIAA SPACE 2010 Conference & Exposition, 30 August-2 September 2010, AIAA 2010–8638.
- Robotic Refueling Mission
- Luc Moliner, "Spot-1 Earth Observation Satellite Deorbitation", AIAA, 2002.
- "Spacecraft: Spot 3", agi, 2003
- Bill Christensen, "The Terminator Tether Aims to Clean Up Low Earth Orbit", space.com. Retrieved 8 March 2006.
- Jonathan Amos, "How satellites could 'sail' home", BBC News, 3 May 2009.
- Erika Carlson et al, "Final design of a space debris removal system", NASA/CR-189976, 1990.
- "Intelsat Picks MacDonald, Dettwiler and Associates Ltd. for Satellite Servicing", CNW Newswire, 15 March 2011. Retrieved 15 July 2011.
- Peter de Selding, "MDA Designing In-orbit Servicing Spacecraft", Space News, 3 March 2010. Retrieved 15 July 2011.
- Jonathan Campbell, "Using Lasers in Space: Laser Orbital Debris Removal and Asteroid Deflection", Occasional Paper No. 20, Air University, Maxwell Air Force Base, December 2000.
- Mann, Adam (26 October 2011). "Space Junk Crisis: Time to Bring in the Lasers". Wired Science. Retrieved 1 November 2011.
- Ivan Bekey, "Project Orion: Orbital Debris Removal Using Ground-Based Sensors and Lasers.", Second European Conference on Space Debris, 1997, ESA-SP 393, p. 699.
- Justin Mullins "A clean sweep: NASA plans to carry out a spot of housework.", New Scientist, 16 August 2000.
- Tony Reichhardt, "Satellite Smashers", Air & Space Magazine, 1 March 2008.
- James Mason et al, "Orbital Debris-Debris Collision Avoidance", arXiv:1103.1690v2, 9 March 2011.
- C. Bombardelli and J. Peláez, "Ion Beam Shepherd for Contactless Space Debris Removal", Journal of Guidance, Control, and Dynamics, Vol. 34, No. 3, May–June 2011, pp. 916–920. http://sdg.aero.upm.es/PUBLICATIONS/PDF/2011/AIAA-51832-628.pdf
- Daniel Michaels, "A Cosmic Question: How to Get Rid Of All That Orbiting Space Junk?", Wall Street Journal, 11 March 2009.
- "Company floats giant balloon concept as solution to space mess", Global Aerospace Corp press release, 4 August 2010.
- "Space Debris Removal", Star-tech-inc.com. Retrieved 18 July 2011.
- Foust, Jeff (5 October 2011). "A Sticky Solution for Grabbing Objects in Space". MIT Technology Review. Retrieved 7 October 2011.
- Jason Palmer, "Space junk could be tackled by housekeeping spacecraft ", BBC News, 8 August 2011
- "News", Star Inc. Retrieved 18 July 2011.
- "Cleaning up Earth's orbit: A Swiss satellite tackles space junk". EPFL. February 15, 2012. Retrieved 2013-04-03.
- Jan, McHarg (August 10, 2012). "Project aims to remove space debris". Phys.org. Retrieved 2013-04-03.
- "Experts: Active Removal Key To Countering Space Junk Threat" Peter B. de Selding, Space.com 31 October 2012.
- Donald Kessler (Kessler 1991), "Collisional Cascading: The Limits of Population Growth in Low Earth Orbit", Advances in Space Research, Volume 11 Number 12 (December 1991), pp. 63 – 66.
- Donald Kessler (Kessler 1971), "Estimate of Particle Densities and Collision Danger for Spacecraft Moving Through the Asteroid Belt", Physical Studies of Minor Planets, NASA SP-267, 1971, pp. 595 – 605. Bibcode 1971NASSP.267..595K.
- Donald Kessler (Kessler 2009), "The Kessler Syndrome" webpages.charter.net, 8 March 2009.
- Donald Kessler (Kessler 1981), "Sources of Orbital Debris and the Projected Environment for Future Spacecraft", Journal of Spacecraft, Volume 16 Number 4 (July–August 1981), pp. 357 – 360.
- Donald Kessler and Burton Cour-Palais (Kessler 1978), "Collision Frequency of Artificial Satellites: The Creation of a Debris Belt" Journal of Geophysical Research, Volume 81, Number A6 (1 June 1978), pp. 2637–2646.
- Donald Kessler and Phillip Anz-Meador, "Critical Number of Spacecraft in Low Earth Orbit: Using Fragmentation Data to Evaluate the Stability of the Orbital Debris Environment", Presented and the Third European Conference on Space Debris, March 2001.
- Heiner Klinkrad, "Space Debris: Models and Risk Analysis", Springer-Praxis, 2006, ISBN 3-540-25448-X.
- (Technical), "Orbital Debris: A Technical Assessment" National Academy of Sciences, 1995. ISBN 0-309-05125-8.
- Jim Schefter, "The Growing Peril of Space Debris" Popular Science, July 1982, pp. 48 – 51.
Further reading
- "What is Orbital Debris?", Center for Orbital and Reentry Debris Studies, Aerospace Corporation
- Committee for the Assessment of NASA's Orbital Debris Programs (2011). Limiting Future Collision Risk to Spacecraft: An Assessment of NASA's Meteoroid and Orbital Debris Programs. Washington, D.C.: National Research Council. ISBN 978-0-309-21974-7.
- "Space junk reaching 'tipping point,' report warns". Reuters. 1 September 2011. Retrieved 2 September 2011. A news item summarizing the above report.
- David Leonard, "The Clutter Above", Bulletin of the Atomic Scientists, July/August 2005.
- Patrick McDaniel, "A Methodology for Estimating the Uncertainty in the Predicted Annual Risk to Orbiting Spacecraft from Current or Predicted Space Debris Population". National Defense University, 1997.
- "Interagency Report on Orbital Debris, 1995", National Science and Technology Council, November 1995.
- Nickolay Smirnov, Space Debris: Hazard Evaluation and Mitigation. Boca Raton, FL: CRC Press, 2002, ISBN 0-415-27907-0.
- Richard Talcott, "How We Junked Up Outer Space", Astronomy, Volume 36, Issue 6 (June 2008), pp. 40–43.
- "Technical report on space debris, 1999", United Nations, 2006. ISBN 92-1-100813-1.
External links
- NASA Orbital Debris Program Office
- ESA Space Debris Office
- "Space: the final junkyard", documentary film
- Would a Saturn-like ring system around planet Earth remain stable?
- EISCAT Space Debris during the International Polar Year
- Intro to mathematical modeling of space debris flux
- SOCRATES: A free daily service predicting close encounters on orbit between satellites and debris orbiting Earth
- A summary of current space debris by type and orbit
- Space Junk Astronomy Cast episode No. 82, includes full transcript
- Paul Maley's Satellite Page – Space debris (with photos)
- Space Debris Illustrated: The Problem in Pictures
- PACA: Space Debris
- IEEE – The Growing Threat of Space Debris
- The Threat of Orbital Debris and Protecting NASA Space Assets from Satellite Collisions
- "Space Age Wasteland: Debris in Orbit Is Here to Stay", Scientific American, 2012.
You can use the Geogebra program to do this assignment. Click here to download it; use the Download button unless you already have Java installed on your computer. You should use the template here to enter your answers: just paste your images from Geogebra into the boxes.
You can also just use a compass and straightedge. You will probably want to copy your work at each stage.
Part I: Triangles.
Part II. Squares.
Mark the points where the perpendicular lines meet the two circles at the top and connect the four points to complete the square. You can hide the perpendicular lines.
3. Mark the midpoints of the bottom and top of the square, using the center line of the vesica piscis. You may now erase everything but the square. Save at this point, or make a copy; you will need this again later.
Find the midpoints of the two sides of the square. You will have to use the line bisector in Geogebra. If doing it by hand you can measure to find an approximate midpoint, or, better, use these instructions to bisect the lines manually.
Now connect the diagonal points of the square. Make a smaller square by connecting the four midpoints of the lines that make the original square. Now connect the diagonals of this smaller square. Where have you seen this diagram before?
4. Make a smaller square inside the second square by connecting its midpoints (marked by the diagonals of the original square) in the same manner as above. Now make a fourth larger square outside the original square. Extend the two midpoint lines of the original square, and construct a line at the top left corner that is parallel to the diagonal until that line meets the extended midpoint lines. Repeat for the other three corners.
How is each square related to the diagonal of the next smaller square? What are the relationships between the sizes of the four squares?
5. Start with a square again, from step 3 above. Draw the diagonal from point A to D. Draw the lines perpendicular to that diagonal AD at points A and D. Then draw a vesica piscis with a circle center A, radius AD and a circle center D, radius DA.
Mark where the circles intersect the perpendicular lines and complete the square built on the diagonal AD, by making the segments DF, FE, and EA.
6. Now make another diagonal CB in the original square, and repeat all the steps from number 5 for this diagonal to create another square based upon it, CBHG.
How many squares do you see?
Indicate the relationship between the areas and the sides of the different size squares.
|Square||Side||Area|
|Unit square ABDC||1||1|
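If you want a quick numerical check of the pattern this table asks about, a short script can tabulate the sides and areas of successive squares, each built on the diagonal of the one before. This is only a sketch; the unit starting side is an assumption, not part of the construction above.

```python
import math

# Start from a unit square and repeatedly build a new square on the diagonal
# of the previous one, as in the constructions above.
side = 1.0
for n in range(4):
    area = side ** 2
    diagonal = side * math.sqrt(2)
    print(f"square {n}: side = {side:.4f}, area = {area:.4f}, diagonal = {diagonal:.4f}")
    side = diagonal  # the next square is built on this diagonal
```

Each new side equals the previous diagonal, so the side grows by a factor of √2 and the area doubles at every step.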
III. The Golden Section and the Pentagon:
1. Construct a Golden rectangle.
Start with a square. Use the regular polygon tool or start from a copy of the square constructed above.
Bisect the bottom of the square and then continue that bottom segment in both directions.
Draw a circle with center E and radius ED to intersect the bottom line. You are inscribing the square in a semi-circle. Mark the point where the circle hits the line F. The line AF is cut by B in the golden section.
Mark the other point where the circle hits the bottom line G. Extend CD in both directions. Raise perpendiculars up at F and G. Mark the two points where these hit line CD, H and I. GHIF is a Square Root of 5 rectangle and ACIF and GHDB are Golden Rectangles.
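The construction can also be checked numerically. The sketch below places the unit square on coordinates (an assumed labeling chosen to match the description above, with A at the origin and D the upper-right corner) and confirms that B cuts AF in the golden section.

```python
import math

A = (0.0, 0.0)
B = (1.0, 0.0)
D = (1.0, 1.0)
E = (0.5, 0.0)                      # midpoint of the bottom side

radius = math.dist(E, D)            # ED = sqrt(5)/2
F = (E[0] + radius, 0.0)            # where the circle meets the extended bottom line

AF = F[0] - A[0]
AB = B[0] - A[0]
BF = F[0] - B[0]

phi = (1 + math.sqrt(5)) / 2
print(f"AF = {AF:.6f}  (phi = {phi:.6f})")
print(f"AF/AB = {AF / AB:.6f},  AB/BF = {AB / BF:.6f}")   # both ratios equal phi
```

Both printed ratios come out as 1.618034…, the golden number, which is why ACIF and GHDB are golden rectangles.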
2. The Golden Spiral.
Start with a Golden rectangle ACIF above. Note that BDIF is also a golden rectangle.
Measure out on DB and IF a length equal to DI. Mark these points J and K. Make the square JDIK.
BJKF is also a Golden Rectangle.
Repeat the procedure a couple more times.
Draw an arc from A to D with radius BA. Then do the same with the next smaller square: an arc from D to K with radius JD.
Continue with the next smaller square and so on as far down as you can get. This is the Logarithmic or golden spiral.
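As a numerical aside, the radii of the quarter-circle arcs shrink by the same factor at every step, which is exactly what makes the spiral logarithmic. A minimal sketch (the starting side of 1 is an arbitrary choice):

```python
import math

phi = (1 + math.sqrt(5)) / 2

# Each square cut off a golden rectangle has a side 1/phi times the side
# of the previous square, so the arc radii form a geometric sequence.
radius = 1.0
for n in range(6):
    print(f"arc {n}: radius = {radius:.5f}")
    radius /= phi
```

The constant ratio between successive radii (1/φ ≈ 0.618) is the signature of a logarithmic spiral.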
3. The Pentagon.
Start with a line divided in a golden section, such as ABF from above. You can also reconstruct one using the square root of 5 rectangle method from above.
Draw a circle with center A and radius AB and another circle with center B and radius BA.
Now draw a circle with center A and radius AF. Then another circle with center B with the same radius AF. (You will have to measure AF and use the circle-with-determined-radius function in Euklid.) Mark the points where the two large circles intersect each other and the two small circles. Connect each of these points with each other and with AB to make the pentagon.
4. Pentagram Star.
Start with the pentagon. You may erase all the guidelines. Connect each vertex with the one directly opposite it. This will give you a pentagram star inside the pentagon.
You can repeat this process again within the internal pentagon.
Extend each of the sides of the original pentagon to make a larger pentagram outside.
How many instances of the golden relationship can you find between the parts of the pentagram?
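One relationship you can verify numerically: in a regular pentagon the diagonal (a chord of the pentagram) is φ times the side. A small sketch, assuming a pentagon drawn on a unit circle (only the ratio matters):

```python
import math

# Vertices of a regular pentagon on a unit circle.
verts = [(math.cos(2 * math.pi * k / 5 + math.pi / 2),
          math.sin(2 * math.pi * k / 5 + math.pi / 2)) for k in range(5)]

side = math.dist(verts[0], verts[1])       # edge of the pentagon
diagonal = math.dist(verts[0], verts[2])   # chord of the pentagram star

phi = (1 + math.sqrt(5)) / 2
print(f"diagonal / side = {diagonal / side:.6f}  (phi = {phi:.6f})")
```

The same ratio reappears between the segments into which the pentagram's chords cut one another, which is one answer to the question above.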
IV. The Platonic Solids:
Start with a vesica piscis divided into 4 triangles, as in Part I above:
Remove the circles and fold: you have a tetrahedron.
Extend the vesica piscis to six circles and use it to trace out these 6 squares (the cube):
The same pattern with 5 circles will give the octahedron:
Use the same pattern with 8 circles for the icosahedron:
Just hand in the drawn or printed templates.
Extra Credit ONLY
You can cut them out and actually construct the solids for extra credit. If you like, you can print out these already drawn templates to make your models.
Extra Credit ONLY: Making a gauge.
1. You will need two straight thin objects from 10 to 20" long that can be punctured or drilled in the middle. Longer is better. A long stick like a kabob stick or chop stick is best, but in a pinch a long sturdy plastic straw will do.
2. You will find the golden section point of both sticks and mark it. You can mark the length of the stick on a piece of paper and then divide that line according to the golden section using one of the construction methods linked on the assignment page (the second is the simplest). Then, once you find the golden section point on the line, you can mark the spot on your sticks.
You could also measure the stick, calculate 0.6180 times the original length, and then measure and mark that length. (This is less accurate, but works well enough for the degree of precision you can achieve with these materials. Measuring and marking very carefully will make a difference.)
You could also try drawing a line the same length as your stick in Euklid, dividing it according to one of the methods above,and then either using the measurements or printing it out as a template to mark the stick. If you print you will have to be careful to print the actual size.
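However you mark it, the arithmetic is the same. A tiny script like the one below gives the distance from one end to the pivot hole (the 14-inch stick length is just an example value):

```python
# Locate the golden-section point of a stick of given length.
length = 14.0                    # stick length in inches (example value)
phi = (1 + 5 ** 0.5) / 2

longer = length / phi            # = 0.6180... times the length
shorter = length - longer

print(f"mark the pivot hole {longer:.2f} in from one end")
print(f"check: longer/shorter = {longer / shorter:.4f}, whole/longer = {length / longer:.4f}")
```

Both check ratios print as 1.6180, which is what makes the finished gauge always open to lengths in the golden proportion.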
3. Make a hole in each stick at the point you marked. You can make this with a pin, a thumbtack, a nail or a drill. Please do not poke, puncture, impale, or otherwise injure yourself. See me if you can't figure out how to do this, or ask an adult for help ;)
4. You will need to attach the two sticks at the holes with a pin, nail, paperclip, or rivet. Make sure that the sticks move, but are not loose. You will want the sticks to hold their position and not swing freely. You should make the hole somewhat smaller than the object you will use to hold them together so the fit is snug. (For example, make the hole in a straw with a pin, but use a paper clip to hold them together.) Make sure to trim any sharp or dangerous ends.
5. Trim the ends of your sticks to make sure they are the same length. You may also trim the ends of straws to points or sharpen sticks in a pencil sharpener to make them more precise pointers.
6. You are done. No matter how you move the sticks the two ends will measure out lengths that are in the golden proportion. You must hand in your gauge with the assignment in a plastic bag with your name on it.
1. Measure at least 5 commonly used man-made objects. Can you find at least one that has proportions in the golden section?
2. Look at some pictures of art works or architecture. Find at least 5 that have prominent features in the golden section. List their names and the features measured. You might also find this template useful in looking at larger objects. You may find this table helpful in reporting your results:
|Name||Feature||Artist (if known)||Web address (if from web)||Image|
3. Look at some natural objects and parts of natural objects (any living thing, part of a living thing, or highly organized inorganic object, such as a crystal. Don't use anything that has been reshaped by Man, such as a cut gem or carved wood.) Find and list at least 8 examples (at least 4 should be from different objects, not parts of the same object) of the golden section. Did you find any organized natural objects whose main divisions were not in the golden proportion? List them. You may find this table helpful in reporting your results:
|Object||Part in golden proportion||Image or drawing (optional)||web address:(if from web)|
© 2006 David Banach
This work is licensed under a Creative Commons License.
Temperature is a physical quantity that is a measure of hotness and coldness on a numerical scale. It is a measure of the thermal energy per particle of matter or radiation; it is measured by a thermometer, which may be calibrated in any of various temperature scales, Celsius, Fahrenheit, Kelvin, etc.
Temperature is an intensive property, which means it is independent of the amount of material present, in contrast to energy, an extensive property, which is proportional to the amount of material in the system. For example, a lightning bolt can heat a small portion of the atmosphere to a temperature hotter than the surface of the Sun.
Empirically it is found that an isolated system, one that exchanges no energy or material with its environment, tends to a spatially uniform temperature as time passes. When a path permeable only to heat is open between two bodies, energy always transfers spontaneously as heat from a hotter body to a colder one. The transfer rate depends on the thermal conductivity of the path or boundary between them. Between two bodies with the same temperature no heat flows. These bodies are said to be in thermal equilibrium.
In kinetic theory and in statistical mechanics, temperature is the effect of the thermal energy arising from the motion of microscopic particles such as atoms, molecules and photons. The relation is proportional as given by the Boltzmann constant.
The lowest theoretical temperature is called absolute zero. However, it cannot be achieved in any actual physical device. It is denoted by 0 K on the Kelvin scale, −273.15 °C on the Celsius scale. In matter at absolute zero, the motions of microscopic constituents are minimal; moreover their kinetic energies are also minimal.
Use in science
Many things depend on temperature, such as
- physical properties of materials including the phase (solid, liquid, gaseous or plasma), density, solubility, vapor pressure, electrical conductivity
- rate and extent to which chemical reactions occur
- the amount and properties of thermal radiation emitted from the surface of an object
- speed of sound is a function of the square root of the absolute temperature
Much of the world uses the Celsius scale (°C) for most temperature measurements. It has the same incremental scaling as the Kelvin scale used by scientists, but fixes its null point at 0 °C = 273.15 K, approximately the freezing point of water (at one atmosphere of pressure).[note 1] The United States uses the Fahrenheit scale for common purposes, a scale on which water freezes at 32 °F and boils at 212 °F (at one atmosphere of pressure).
For practical purposes of scientific temperature measurement, the International System of Units (SI) defines a scale and unit for the thermodynamic temperature by using the easily reproducible temperature of the triple point of water as a second reference point. The reason for this choice is that, unlike the freezing and boiling point temperatures, the temperature at the triple point is independent of pressure (since the triple point is a fixed point on a two-dimensional plot of pressure vs. temperature). For historical reasons, the triple point temperature of water is fixed at 273.16 units of the measurement increment, which has been named the kelvin in honor of the Scottish physicist who first defined the scale. The unit symbol of the kelvin is K.
Absolute zero is defined as a temperature of precisely 0 kelvins, which is equal to −273.15 °C or −459.67 °F.
Thermodynamic approach to temperature
Temperature is one of the principal quantities in the study of thermodynamics. Thermodynamics investigates the relation between heat and work, using the absolute temperature scale.
The thermodynamic definition of temperature is due to Kelvin.
It is framed in terms of an idealized device called a Carnot engine, imagined to define a continuous cycle of states of its working body. The cycle is imagined to run so slowly that at each point of the cycle the working body is in a state of thermodynamic equilibrium. There are four limbs in such a Carnot cycle. The engine consists of four bodies. The main one is called the working body. Two of them are called heat reservoirs, so large that their respective non-deformation variables are not changed by transfer of energy as heat through a wall permeable only to heat to the working body. The fourth body is able to exchange energy with the working body only through adiabatic work; it may be called the work reservoir. The substances and states of the two heat reservoirs should be chosen so that they are not in thermal equilibrium with one another. This means that they must be at different fixed temperatures, one, labeled here with the number 1, hotter than the other, labeled here with the number 2. This can be tested by connecting the heat reservoirs successively to an auxiliary thermometric body, which is required to show changes in opposite senses to its non-deformation variable, and which is composed of a material that has a strictly monotonic relation to the amount of work done on it in an isochoric adiabatic process. Typically, such a material expands as the surrounds do isochoric work on it. In order to settle the structure and sense of operation of the Carnot cycle, it is convenient to use such a material also for the working body; because most materials are of this kind, this is hardly a restriction of the generality of this definition. The Carnot cycle is considered to start from an initial condition of the working body that was reached by the completion of a reversible adiabatic compression. From there, the working body is initially connected by a wall permeable only to heat to the heat reservoir number 1, so that during the first limb of the cycle it expands and does work on the work reservoir. The second limb of the cycle sees the working body expand adiabatically and reversibly, with no energy exchanged as heat, but more energy being transferred as work to the work reservoir. The third limb of the cycle sees the working body connected, through a wall permeable only to heat, to the heat reservoir 2, contracting and accepting energy as work from the work reservoir. The cycle is closed by reversible adiabatic compression of the working body, with no energy transferred as heat, but energy being transferred to it as work from the work reservoir.
With this set-up, the four limbs of the reversible Carnot cycle are characterized by amounts of energy transferred, as work from the working body to the work reservoir, and as heat from the heat reservoirs to the working body. The amounts of energy transferred as heat from the heat reservoirs are measured through the changes in the non-deformation variable of the working body, with reference to the previously known properties of that body, the amounts of work done on the work reservoir, and the first law of thermodynamics. The amounts of energy transferred as heat respectively from reservoir 1 and from reservoir 2 may then be denoted respectively Q1 and Q2. Then the absolute or thermodynamic temperatures, T1 and T2, of the reservoirs are defined to be such that
$$\frac{T_1}{T_2} = -\frac{Q_1}{Q_2}. \qquad (1)$$
(With the sign convention just stated, Q2 is negative, so the ratio of temperatures is positive.)
Kelvin's original work postulating absolute temperature was published in 1848. It was based on the work of Carnot, before the formulation of the first law of thermodynamics. Kelvin wrote in his 1848 paper that his scale was absolute in the sense that it was defined "independently of the properties of any particular kind of matter." His definitive publication, which sets out the definition just stated, was printed in 1853, a paper read in 1851.
This definition rests on the physical assumption that there are readily available walls permeable only to heat. In his detailed definition of a wall permeable only to heat, Carathéodory includes several ideas. The non-deformation state variable of a closed system is represented as a real number. A state of thermal equilibrium between two closed systems connected by a wall permeable only to heat means that a certain mathematical relation holds between the state variables, including the respective non-deformation variables, of those two systems (that particular mathematical relation is regarded by Buchdahl as a preferred statement of the zeroth law of thermodynamics). Also, referring to thermal contact equilibrium, "whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, the systems S1 and S2 are in mutual equilibrium." It may be viewed as a re-statement of the principle stated by Maxwell in the words: "All heat is of the same kind." This physical idea is also expressed by Bailyn as a possible version of the zeroth law of thermodynamics: "All diathermal walls are equivalent." Thus the present definition of thermodynamic temperature rests on the zeroth law of thermodynamics. Explicitly, this present definition of thermodynamic temperature also rests on the first law of thermodynamics, for the determination of amounts of energy transferred as heat.
Implicitly for this definition, the second law of thermodynamics provides information that establishes the virtuous character of the temperature so defined. It provides that any working substance that complies with the requirement stated in this definition will lead to the same ratio of thermodynamic temperatures, which in this sense is universal, or absolute. The second law of thermodynamics also provides that the thermodynamic temperature defined in this way is positive, because this definition requires that the heat reservoirs not be in thermal equilibrium with one another, and the cycle can be imagined to operate only in one sense if net work is to be supplied to the work reservoir.
Numerical details are settled by making one of the heat reservoirs a cell at the triple point of water, which is defined to have an absolute temperature of 273.16 K. The zeroth law of thermodynamics allows this definition to be used to measure the absolute or thermodynamic temperature of an arbitrary body of interest, by making the other heat reservoir have the same temperature as the body of interest.
Temperature as an intensive variable
In thermodynamic terms, temperature is an intensive variable because it is equal to a differential coefficient of one extensive variable with respect to another, for a given body. It thus has the dimensions of a ratio of two extensive variables. In thermodynamics, two bodies are often considered as connected by contact with a common wall, which has some specific permeability properties. Such specific permeability can be referred to a specific intensive variable. An example is a diathermic wall that is permeable only to heat; the intensive variable for this case is temperature. When the two bodies have been in contact for a very long time, and have settled to a permanent steady state, the relevant intensive variables are equal in the two bodies; for a diathermal wall, this statement is sometimes called the zeroth law of thermodynamics.
In particular, when the body is described by stating its internal energy U, an extensive variable, as a function of its entropy S, also an extensive variable, and other state variables V, N, with U = U (S, V, N), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy:
$$T = \left(\frac{\partial U}{\partial S}\right)_{V,N}. \qquad (2)$$
Likewise, when the body is described by stating its entropy S as a function of its internal energy U, and other state variables V, N, with S = S (U, V, N), then the reciprocal of the temperature is equal to the partial derivative of the entropy with respect to the internal energy:
$$\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{V,N}. \qquad (3)$$
The above definition, equation (1), of the absolute temperature is due to Kelvin. It refers to systems closed to transfer of matter, and has special emphasis on directly experimental procedures. A presentation of thermodynamics by Gibbs starts at a more abstract level and deals with systems open to the transfer of matter; in this development of thermodynamics, the equations (2) and (3) above are actually alternative definitions of temperature.
Temperature is local when local thermodynamic equilibrium prevails
Real world bodies are often not in thermodynamic equilibrium and not homogeneous. For study by methods of classical irreversible thermodynamics, a body is usually spatially and temporally divided conceptually into 'cells' of small size. If classical thermodynamic equilibrium conditions for matter are fulfilled to good approximation in such a 'cell', then it is homogeneous and a temperature exists for it. If this is so for every 'cell' of the body, then local thermodynamic equilibrium is said to prevail throughout the body.
It makes good sense, for example, to say of the extensive variable U, or of the extensive variable S, that it has a density per unit volume, or a quantity per unit mass of the system, but it makes no sense to speak of density of temperature per unit volume or quantity of temperature per unit mass of the system. On the other hand, it makes no sense to speak of the internal energy at a point, while when local thermodynamic equilibrium prevails, it makes good sense to speak of the temperature at a point. Consequently, temperature can vary from point to point in a medium that is not in global thermodynamic equilibrium, but in which there is local thermodynamic equilibrium.
Thus, when local thermodynamic equilibrium prevails in a body, temperature can be regarded as a spatially varying local property in that body, and this is because temperature is an intensive variable.
Statistical mechanics approach to temperature
Statistical mechanics provides a microscopic explanation of temperature, based on macroscopic systems' being composed of many particles, such as molecules and ions of various species, the particles of a species being all alike. It explains macroscopic phenomena in terms of the mechanics of the molecules and ions, and statistical assessments of their joint adventures. In the statistical thermodynamic approach, by the equipartition theorem each classical degree of freedom that the particle has will have an average energy of kT/2 where k is Boltzmann's constant. The translational motion of the particle has three degrees of freedom, so that, except at very low temperatures where quantum effects predominate, the average translational energy of a particle in a system with temperature T will be 3kT/2.
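As a rough numerical illustration of the 3kT/2 result, the sketch below evaluates the mean translational kinetic energy and the corresponding rms speed; the 300 K temperature and the nitrogen molecule are example choices, not values taken from the text.

```python
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # temperature, K (example value)
m = 4.65e-26                # mass of an N2 molecule, kg (approximate)

mean_ke = 1.5 * k_B * T                 # (3/2) k T, in joules
v_rms = (3 * k_B * T / m) ** 0.5        # from (1/2) m v_rms^2 = (3/2) k T

print(f"mean translational KE = {mean_ke:.3e} J")
print(f"rms speed of N2       = {v_rms:.0f} m/s")
```

At room temperature this gives roughly 6×10⁻²¹ J per molecule and an rms speed of about 500 m/s.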
On the molecular level, temperature is the result of the motion of the particles that constitute the material. Moving particles carry kinetic energy. Temperature increases as this motion and the kinetic energy increase. The motion may be the translational motion of particles, or the energy of the particle due to molecular vibration or the excitation of an electron energy level. Although very specialized laboratory equipment is required to directly detect the translational thermal motions, thermal collisions by atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The thermal motions of atoms are very fast, and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at NIST achieved a record-setting low temperature of 700 nK (1 nK = 10⁻⁹ K) in 1994, they used laser equipment to create an optical lattice to adiabatically cool caesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature.
Molecules, such as oxygen (O2), have more degrees of freedom than single spherical atoms: they undergo rotational and vibrational motions as well as translations. Heating results in an increase in temperature due to an increase in the average translational energy of the molecules. Heating will also cause, through equipartitioning, the energy associated with vibrational and rotational modes to increase. Thus a diatomic gas will require a higher energy input to increase its temperature by a certain amount, i.e. it will have a higher heat capacity than a monatomic gas.
The process of cooling involves removing thermal energy from a system. When no more energy can be removed, the system is at absolute zero, which cannot be achieved experimentally. Absolute zero is the null point of the thermodynamic temperature scale, also called absolute temperature. If it were possible to cool a system to absolute zero, all motion of the particles comprising matter would cease and they would be at complete rest in this classical sense. Microscopically in the description of quantum mechanics, however, matter still has zero-point energy even at absolute zero, because of the uncertainty principle.
Temperature is a measure of a quality of a state of a material. The quality may be regarded as a more abstract entity than any particular temperature scale that measures it, and is called hotness by some writers. The quality of hotness refers to the state of material only in a particular locality, and in general, apart from bodies held in a steady state of thermodynamic equilibrium, hotness varies from place to place. It is not necessarily the case that a material in a particular place is in a state that is steady and nearly homogeneous enough to allow it to have a well-defined hotness or temperature. Hotness may be represented abstractly as a one-dimensional manifold. Every valid temperature scale has its own one-to-one map into the hotness manifold.
When two systems in thermal contact are at the same temperature no heat transfers between them. When a temperature difference does exist heat flows spontaneously from the warmer system to the colder system until they are in thermal equilibrium. Heat transfer occurs by conduction or by thermal radiation.
Experimental physicists, for example Galileo and Newton, found that there are indefinitely many empirical temperature scales. Nevertheless, the zeroth law of thermodynamics says that they all measure the same quality.
Temperature for bodies in thermodynamic equilibrium
For experimental physics, hotness means that, when comparing any two given bodies in their respective separate thermodynamic equilibria, any two suitably given empirical thermometers with numerical scale readings will agree as to which is the hotter of the two given bodies, or that they have the same temperature. This does not require the two thermometers to have a linear relation between their numerical scale readings, but it does require that the relation between their numerical readings shall be strictly monotonic. A definite sense of greater hotness can be had, independently of calorimetry, of thermodynamics, and of properties of particular materials, from Wien's displacement law of thermal radiation: the temperature of a bath of thermal radiation is proportional, by a universal constant, to the frequency of the maximum of its frequency spectrum; this frequency is always positive, but can have values that tend to zero. Thermal radiation is initially defined for a cavity in thermodynamic equilibrium. These physical facts justify a mathematical statement that hotness exists on an ordered one-dimensional manifold. This is a fundamental character of temperature and thermometers for bodies in their own thermodynamic equilibrium.
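Wien's displacement law is usually quoted in its wavelength form, λ_max = b/T, with b the Wien displacement constant. The sketch below evaluates it for two temperatures that also appear in the examples table later in this article (the triple point of water and the Sun's visible surface):

```python
# Wien's displacement law: peak black-body wavelength is inversely
# proportional to temperature.
b = 2.897771955e-3           # Wien displacement constant, m*K

for T in (273.16, 5778.0):   # example temperatures, K
    lam = b / T
    print(f"T = {T:8.2f} K  ->  peak wavelength = {lam * 1e9:10.1f} nm")
```

The outputs (about 10,608 nm and 501.5 nm) match the peak-emittance column of the examples table below.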
Except for a system undergoing a first-order phase change such as the melting of ice, as a closed system receives heat, without change in its volume and without change in external force fields acting on it, its temperature rises. For a system undergoing such a phase change so slowly that departure from thermodynamic equilibrium can be neglected, its temperature remains constant as the system is supplied with latent heat. Conversely, a loss of heat from a closed system, without phase change, without change of volume, and without change in external force fields acting on it, decreases its temperature.
Temperature for bodies in a steady state but not in thermodynamic equilibrium
While for bodies in their own thermodynamic equilibrium states, the notion of temperature requires that all empirical thermometers must agree as to which of two bodies is the hotter or that they are at the same temperature, this requirement is not safe for bodies that are in steady states though not in thermodynamic equilibrium. It can then well be that different empirical thermometers disagree about which is the hotter, and if this is so, then at least one of the bodies does not have a well defined absolute thermodynamic temperature. Nevertheless, any one given body and any one suitable empirical thermometer can still support notions of empirical, non-absolute, hotness and temperature, for a suitable range of processes. This is a matter for study in non-equilibrium thermodynamics.
Temperature for bodies not in a steady state
When a body is not in a steady state, then the notion of temperature becomes even less safe than for a body in a steady state not in thermodynamic equilibrium. This is also a matter for study in non-equilibrium thermodynamics.
Thermodynamic equilibrium axiomatics
For axiomatic treatment of thermodynamic equilibrium, since the 1930s, it has become customary to refer to a zeroth law of thermodynamics. The customarily stated minimalist version of such a law postulates only that all bodies, which when thermally connected would be in thermal equilibrium, should be said to have the same temperature by definition, but by itself does not establish temperature as a quantity expressed as a real number on a scale. A more physically informative version of such a law views empirical temperature as a chart on a hotness manifold. While the zeroth law permits the definitions of many different empirical scales of temperature, the second law of thermodynamics selects the definition of a single preferred, absolute temperature, unique up to an arbitrary scale factor, whence called the thermodynamic temperature. If internal energy is considered as a function of the volume and entropy of a homogeneous system in thermodynamic equilibrium, thermodynamic absolute temperature appears as the partial derivative of internal energy with respect to the entropy at constant volume. Its natural, intrinsic origin or null point is absolute zero at which the entropy of any system is at a minimum. Although this is the lowest absolute temperature described by the model, the third law of thermodynamics postulates that absolute zero cannot be attained by any physical system.
When a sample is heated, meaning it receives thermal energy from an external source, some of the introduced heat is converted into kinetic energy, the rest to other forms of internal energy, specific to the material. The amount converted into kinetic energy causes the temperature of the material to rise. The introduced heat (Q) divided by the observed temperature change (ΔT) is the heat capacity (C) of the material: C = Q/ΔT.
If heat capacity is measured for a well defined amount of substance, the specific heat is the measure of the heat required to increase the temperature of such a unit quantity by one unit of temperature. For example, to raise the temperature of water by one kelvin (equal to one degree Celsius) requires 4186 joules per kilogram, i.e. a specific heat of 4186 J/(kg·K).
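Using that figure, the heat needed to warm a given mass of water follows from Q = m·c·ΔT. A minimal sketch (the 0.5 kg mass and the 20 K rise are arbitrary example values):

```python
# Heat required to raise the temperature of a mass of water.
c_water = 4186.0       # specific heat of water, J/(kg*K)
mass = 0.5             # kg (example value)
delta_T = 20.0         # temperature rise in kelvins (example value)

Q = mass * c_water * delta_T
print(f"Q = {Q:.0f} J")    # 41860 J
```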
Temperature measurement using modern scientific thermometers and temperature scales goes back at least as far as the early 18th century, when Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use in the United States for non-scientific applications.
Temperature is measured with thermometers that may be calibrated to a variety of temperature scales. In most of the world (except for Belize, Myanmar, Liberia and the United States), the Celsius scale is used for most temperature measuring purposes. Most scientists measure temperature using the Celsius scale and thermodynamic temperature using the Kelvin scale, which is the Celsius scale offset so that its null point is 0K = −273.15°C, or absolute zero. Many engineering fields in the U.S., notably high-tech and US federal specifications (civil and military), also use the Kelvin and Celsius scales. Other engineering fields in the U.S. also rely upon the Rankine scale (a shifted Fahrenheit scale) when working in thermodynamic-related disciplines such as combustion.
For everyday applications, it is often convenient to use the Celsius scale, in which 0 °C corresponds very closely to the freezing point of water and 100 °C is its boiling point at sea level. Because liquid droplets commonly exist in clouds at sub-zero temperatures, 0 °C is better defined as the melting point of ice. In this scale a temperature difference of 1 degree Celsius is the same as a 1 kelvin increment, but the scale is offset by the temperature at which ice melts (273.15 K).
By international agreement the Kelvin and Celsius scales are defined by two fixing points: absolute zero and the triple point of Vienna Standard Mean Ocean Water, which is water specially prepared with a specified blend of hydrogen and oxygen isotopes. Absolute zero is defined as precisely 0 K and −273.15 °C. It is the temperature at which all classical translational motion of the particles comprising matter ceases and they are at complete rest in the classical model. Quantum-mechanically, however, zero-point motion remains and has an associated energy, the zero-point energy. Matter is in its ground state, and contains no thermal energy. The triple point of water is defined as 273.16 K and 0.01 °C. This definition serves the following purposes: it fixes the magnitude of the kelvin as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water; it establishes that one kelvin has precisely the same magnitude as one degree on the Celsius scale; and it establishes the difference between the null points of these scales as being 273.15 K (0 K = −273.15 °C and 273.16 K = 0.01 °C).
In the United States, the Fahrenheit scale is widely used. On this scale the freezing point of water corresponds to 32 °F and the boiling point to 212 °F. The Rankine scale, still used in fields of chemical engineering in the U.S., is an absolute scale based on the Fahrenheit increment.
The following table shows the temperature conversion formulas for conversions to and from the Celsius scale.
|Scale||from Celsius||to Celsius|
|Fahrenheit||[°F] = [°C] × 9⁄5 + 32||[°C] = ([°F] − 32) × 5⁄9|
|Kelvin||[K] = [°C] + 273.15||[°C] = [K] − 273.15|
|Rankine||[°R] = ([°C] + 273.15) × 9⁄5||[°C] = ([°R] − 491.67) × 5⁄9|
|Delisle||[°De] = (100 − [°C]) × 3⁄2||[°C] = 100 − [°De] × 2⁄3|
|Newton||[°N] = [°C] × 33⁄100||[°C] = [°N] × 100⁄33|
|Réaumur||[°Ré] = [°C] × 4⁄5||[°C] = [°Ré] × 5⁄4|
|Rømer||[°Rø] = [°C] × 21⁄40 + 7.5||[°C] = ([°Rø] − 7.5) × 40⁄21|
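A few of the conversions in the table can be transcribed directly into code; the sketch below covers the Fahrenheit, Kelvin and Rankine rows (the function names are just illustrative):

```python
# Temperature conversions taken from the table above.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

def celsius_to_rankine(c: float) -> float:
    return (c + 273.15) * 9.0 / 5.0

for c in (-273.15, 0.0, 37.0, 100.0):
    print(f"{c:8.2f} °C = {celsius_to_fahrenheit(c):8.2f} °F = "
          f"{celsius_to_kelvin(c):7.2f} K = {celsius_to_rankine(c):7.2f} °R")
```

The first test value, −273.15 °C, comes out as 0 K and 0 °R, as expected for absolute zero.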
The field of plasma physics deals with phenomena of electromagnetic nature that involve very high temperatures. It is customary to express temperature in electronvolts (eV) or kiloelectronvolts (keV), where 1 eV = 11,605 K. In the study of QCD matter one routinely encounters temperatures of the order of a few hundred MeV, equivalent to about 10¹² K.
Historically, there are several scientific approaches to the explanation of temperature: the classical thermodynamic description based on macroscopic empirical variables that can be measured in a laboratory; the kinetic theory of gases which relates the macroscopic description to the probability distribution of the energy of motion of gas particles; and a microscopic explanation based on statistical physics and quantum mechanics. In addition, rigorous and purely mathematical treatments have provided an axiomatic approach to classical thermodynamics and temperature. Statistical physics provides a deeper understanding by describing the atomic behavior of matter, and derives macroscopic properties from statistical averages of microscopic states, including both classical and quantum states. In the fundamental physical description, using natural units, temperature may be measured directly in units of energy. However, in the practical systems of measurement for science, technology, and commerce, such as the modern metric system of units, the macroscopic and the microscopic descriptions are interrelated by the Boltzmann constant, a proportionality factor that scales temperature to the microscopic mean kinetic energy.
The microscopic description in statistical mechanics is based on a model that analyzes a system into its fundamental particles of matter or into a set of classical or quantum-mechanical oscillators and considers the system as a statistical ensemble of microstates. As a collection of classical material particles, temperature is a measure of the mean energy of motion, called kinetic energy, of the particles, whether in solids, liquids, gases, or plasmas. The kinetic energy, a concept of classical mechanics, is half the mass of a particle times its speed squared. In this mechanical interpretation of thermal motion, the kinetic energies of material particles may reside in the velocity of the particles of their translational or vibrational motion or in the inertia of their rotational modes. In monoatomic perfect gases and, approximately, in most gases, temperature is a measure of the mean particle kinetic energy. It also determines the probability distribution function of the energy. In condensed matter, and particularly in solids, this purely mechanical description is often less useful and the oscillator model provides a better description to account for quantum mechanical phenomena. Temperature determines the statistical occupation of the microstates of the ensemble. The microscopic definition of temperature is only meaningful in the thermodynamic limit, meaning for large ensembles of states or particles, to fulfill the requirements of the statistical model.
In the context of thermodynamics, the kinetic energy is also referred to as thermal energy. The thermal energy may be partitioned into independent components attributed to the degrees of freedom of the particles or to the modes of oscillators in a thermodynamic system. In general, the number of these degrees of freedom that are available for the equipartitioning of energy depend on the temperature, i.e. the energy region of the interactions under consideration. For solids, the thermal energy is associated primarily with the vibrations of its atoms or molecules about their equilibrium position. In an ideal monatomic gas, the kinetic energy is found exclusively in the purely translational motions of the particles. In other systems, vibrational and rotational motions also contribute degrees of freedom.
Kinetic theory of gases
The kinetic theory of gases uses the model of the ideal gas to relate temperature to the average translational kinetic energy of the molecules in a container of gas in thermodynamic equilibrium.
Classical mechanics defines the translational kinetic energy of a gas molecule as follows:
$$E_\text{k} = \tfrac{1}{2} m v^2,$$
where m is the particle mass and v its speed, the magnitude of its velocity. The distribution of the speeds (which determine the translational kinetic energies) of the particles in a classical ideal gas is called the Maxwell-Boltzmann distribution. The temperature of a classical ideal gas is related to its average kinetic energy per degree of freedom Ek via the equation:
$$\overline{E}_\text{k} = \tfrac{1}{2} k T,$$
where the Boltzmann constant k = R/n (n = Avogadro number, R = ideal gas constant). This relation is valid in the ideal gas regime, i.e. when the particle density is much less than 1/Λ³, where Λ is the thermal de Broglie wavelength. A monoatomic gas has only the three translational degrees of freedom.
The zeroth law of thermodynamics implies that any two given systems in thermal equilibrium have the same temperature. In statistical thermodynamics, it can be deduced from the second law of thermodynamics that they also have the same average kinetic energy per particle.
In a mixture of particles of various masses, lighter particles move faster than do heavier particles, but have the same average kinetic energy. A neon atom moves slowly relative to a hydrogen molecule of the same kinetic energy. A pollen particle suspended in water moves in a slow Brownian motion among fast-moving water molecules.
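A short sketch makes the point concrete: hydrogen and neon at the same temperature have the same mean kinetic energy but very different rms speeds. The 300 K temperature and the approximate particle masses are assumptions made for the example.

```python
k_B = 1.380649e-23                          # J/K
T = 300.0                                   # K (example value)
masses = {"H2": 3.35e-27, "Ne": 3.35e-26}   # kg, approximate molecular/atomic masses

for name, m in masses.items():
    v_rms = (3 * k_B * T / m) ** 0.5
    mean_ke = 0.5 * m * v_rms ** 2          # equals (3/2) k T for both species
    print(f"{name}: v_rms = {v_rms:7.0f} m/s, mean KE = {mean_ke:.3e} J")
```

The hydrogen molecules come out roughly three times faster than the neon atoms, while the printed mean kinetic energies are identical.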
Zeroth law of thermodynamics
It has long been recognized that if two bodies of different temperatures are brought into thermal connection, conductive or radiative, they exchange heat accompanied by changes of other state variables. Left isolated from other bodies, the two connected bodies eventually reach a state of thermal equilibrium in which no further changes occur. This basic knowledge is relevant to thermodynamics. Some approaches to thermodynamics take this basic knowledge as axiomatic, other approaches select only one narrow aspect of this basic knowledge as axiomatic, and use other axioms to justify and express deductively the remaining aspects of it. The one aspect chosen by the latter approaches is often stated in textbooks as the zeroth law of thermodynamics, but other statements of this basic knowledge are made by various writers.
The usual textbook statement of the zeroth law of thermodynamics is that if two systems are each in thermal equilibrium with a third system, then they are also in thermal equilibrium with each other. This statement is taken to justify a statement that all three systems have the same temperature, but, by itself, it does not justify the idea of temperature as a numerical scale for a concept of hotness which exists on a one-dimensional manifold with a sense of greater hotness. Sometimes the zeroth law is stated to provide the latter justification. For suitable systems, an empirical temperature scale may be defined by the variation of one of the other state variables, such as pressure, when all other coordinates are fixed. The second law of thermodynamics is used to define an absolute thermodynamic temperature scale for systems in thermal equilibrium.
A temperature scale is based on the properties of some reference system to which other thermometers may be calibrated. One such reference system is a fixed quantity of gas. The ideal gas law indicates that the product of the pressure (p) and volume (V) of a gas is directly proportional to the thermodynamic temperature:
$$pV = nRT, \qquad (1)$$
where T is temperature, n is the number of moles of gas and R = 8.314472(15) J·mol⁻¹·K⁻¹ is the gas constant. Reformulating the pressure–volume term as the sum of classical mechanical particle energies in terms of particle mass, m, and root-mean-square particle speed v, the ideal gas law directly provides the relationship between kinetic energy and temperature:
$$\tfrac{1}{2} m v_\text{rms}^2 = \tfrac{3}{2} k T.$$
Thus, one can define a scale for temperature based on the corresponding pressure and volume of the gas: the temperature in kelvins is the pressure in pascals of one mole of gas in a container of one cubic metre, divided by the gas constant. In practice, such a gas thermometer is not very convenient, but other thermometers can be calibrated to this scale.
The pressure, volume, and the number of moles of a substance are all inherently greater than or equal to zero, suggesting that temperature must also be greater than or equal to zero. As a practical matter it is not possible to use a gas thermometer to measure absolute zero temperature since the gases tend to condense into a liquid long before the temperature reaches zero. It is possible, however, to extrapolate to absolute zero by using the ideal gas law.
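The extrapolation can be illustrated with a simple least-squares fit. The pressure readings below are synthetic values generated for a fixed amount of ideal gas at constant volume, not measured data.

```python
# Constant-volume gas thermometer: fit p = a*T_C + b and find where p -> 0.
temps_C = [0.0, 25.0, 50.0, 75.0, 100.0]          # thermometer temperatures, Celsius
pressures = [101.3, 110.6, 119.9, 129.1, 138.4]   # kPa, synthetic ideal-gas values

n = len(temps_C)
mean_t = sum(temps_C) / n
mean_p = sum(pressures) / n
num = sum((t - mean_t) * (p - mean_p) for t, p in zip(temps_C, pressures))
den = sum((t - mean_t) ** 2 for t in temps_C)
slope = num / den
intercept = mean_p - slope * mean_t

absolute_zero_C = -intercept / slope
print(f"extrapolated absolute zero ≈ {absolute_zero_C:.1f} °C")   # close to -273.15 °C
```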
Second law of thermodynamics
In the previous section certain properties of temperature were expressed by the zeroth law of thermodynamics. It is also possible to define temperature in terms of the second law of thermodynamics which deals with entropy. Entropy is often thought of as a measure of the disorder in a system. The second law states that any process will result in either no change or a net increase in the entropy of the universe. This can be understood in terms of probability.
For example, in a series of coin tosses, a perfectly ordered system would be one in which either every toss comes up heads or every toss comes up tails. This means that for a perfectly ordered set of coin tosses, there is only one set of toss outcomes possible: the set in which 100% of tosses come up the same. On the other hand, there are multiple combinations that can result in disordered or mixed systems, where some fraction are heads and the rest tails. A disordered system can be 90% heads and 10% tails, or it could be 98% heads and 2% tails, et cetera. As the number of coin tosses increases, the number of possible combinations corresponding to imperfectly ordered systems increases. For a very large number of coin tosses, the combinations corresponding to ~50% heads and ~50% tails dominate, and obtaining an outcome significantly different from 50/50 becomes extremely unlikely. Thus the system naturally progresses to a state of maximum disorder or entropy.
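The coin-toss picture is easy to check by counting microstates directly. The sketch below tabulates the number of ways W of getting k heads in 100 tosses, together with the corresponding entropy ln W (in units of the Boltzmann constant):

```python
from math import comb, log

N = 100
for k in (0, 10, 25, 50, 75, 90, 100):
    W = comb(N, k)                       # number of microstates with k heads
    S = log(W)                           # entropy in units of k_B
    print(f"{k:3d} heads out of {N}: W = {float(W):.3e}, S/k_B = {S:6.2f}")
```

W, and hence the entropy, peaks sharply at the 50/50 split, which is why a large system is overwhelmingly likely to be found near maximum disorder.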
It has been previously stated that temperature governs the transfer of heat between two systems and it was just shown that the universe tends to progress so as to maximize entropy, which is expected of any natural system. Thus, it is expected that there is some relationship between temperature and entropy. To find this relationship, the relationship between heat, work and temperature is first considered. A heat engine is a device for converting thermal energy into mechanical energy, resulting in the performance of work, and analysis of the Carnot heat engine provides the necessary relationships. The work from a heat engine corresponds to the difference between the heat put into the system at the high temperature, qH and the heat ejected at the low temperature, qC. The efficiency is the work divided by the heat put into the system or:
$$\text{efficiency} = \frac{w_\text{cy}}{q_H} = \frac{q_H - q_C}{q_H} = 1 - \frac{q_C}{q_H}, \qquad (2)$$
where wcy is the work done per cycle. The efficiency depends only on qC/qH. Because qC and qH correspond to heat transfer at the temperatures TC and TH, respectively, qC/qH should be some function of these temperatures:
$$\frac{q_C}{q_H} = f(T_H, T_C). \qquad (3)$$
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, a heat engine operating between T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and T2, and the second between T2 and T3. This can only be the case if:
$$f(T_1, T_3) = f(T_1, T_2)\,f(T_2, T_3).$$
Since the first function is independent of T2, this temperature must cancel on the right side, meaning f(T1,T3) is of the form g(T1)/g(T3) (i.e. f(T1,T3) = f(T1,T2)f(T2,T3) = g(T1)/g(T2)·g(T2)/g(T3) = g(T1)/g(T3)), where g is a function of a single temperature. A temperature scale can now be chosen with the property that:
$$\frac{q_C}{q_H} = \frac{T_C}{T_H}. \qquad (4)$$
Substituting Equation 4 back into Equation 2 gives a relationship for the efficiency in terms of temperature:
$$\text{efficiency} = \frac{w_\text{cy}}{q_H} = 1 - \frac{T_C}{T_H}. \qquad (5)$$
Notice that for TC = 0 K the efficiency is 100% and that efficiency becomes greater than 100% below 0 K. Since an efficiency greater than 100% violates the first law of thermodynamics, this implies that 0 K is the minimum possible temperature. In fact the lowest temperature ever obtained in a macroscopic system was 20 nK, which was achieved in 1995 at NIST. Subtracting the right hand side of Equation 5 from the middle portion and rearranging gives:
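Equation 5 is easy to evaluate for concrete reservoir temperatures. In the sketch below the temperature pairs are illustrative choices, with the last pair showing the T_C = 0 K limit discussed above:

```python
# Carnot efficiency 1 - T_C/T_H for a few example reservoir temperatures.
def carnot_efficiency(t_hot_K: float, t_cold_K: float) -> float:
    return 1.0 - t_cold_K / t_hot_K

for t_hot, t_cold in [(373.15, 293.15), (800.0, 300.0), (800.0, 0.0)]:
    eff = carnot_efficiency(t_hot, t_cold)
    print(f"T_H = {t_hot:6.1f} K, T_C = {t_cold:6.1f} K -> efficiency = {eff:.1%}")
```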
$$\frac{q_H}{T_H} - \frac{q_C}{T_C} = 0,$$
where the negative sign indicates heat ejected from the system. This relationship suggests the existence of a state function, S, defined by:
$$S = \frac{q_\text{rev}}{T}, \qquad (6)$$
where the subscript indicates a reversible process. The change of this state function around any cycle is zero, as is necessary for any state function. This function corresponds to the entropy of the system, which was described previously. Rearranging Equation 6 gives a new definition for temperature in terms of entropy and heat:
$$T = \frac{q_\text{rev}}{S}. \qquad (7)$$
For a system, where entropy S(E) is a function of its energy E, the temperature T is given by:
$$\frac{1}{T} = \frac{dS(E)}{dE}, \qquad (8)$$
i.e. the reciprocal of the temperature is the rate of increase of entropy with respect to energy.
Definition from statistical mechanics
Statistical mechanics defines temperature based on a system's fundamental degrees of freedom. Eq.(8) is the defining relation of temperature. Eq. (7) can be derived from the principles underlying the fundamental thermodynamic relation.
Generalized temperature from single particle statistics
It is possible to extend the definition of temperature even to systems of few particles, such as a quantum dot. The generalized temperature is obtained by considering time ensembles instead of the configuration-space ensembles given in statistical mechanics, in the case of thermal and particle exchange between a small system of fermions (N even less than 10) and a single/double occupancy system. The finite quantum grand canonical ensemble, obtained under the hypotheses of ergodicity and orthodicity, makes it possible to express the generalized temperature in terms of the ratio of the average times of occupation, τ1 and τ2, of the single/double occupancy system:
where EF is the Fermi energy; this generalized temperature tends to the ordinary temperature when N goes to infinity.
On the empirical temperature scales, which are not referenced to absolute zero, a negative temperature is one below the zero-point of the scale used. For example, dry ice has a sublimation temperature of −78.5 °C, which is equivalent to −109.3 °F. On the absolute Kelvin scale, however, this temperature is 194.6 K. On the absolute scale of thermodynamic temperature no material can be brought to 0 K or below; attaining absolute zero itself is forbidden by the third law of thermodynamics.
In the quantum mechanical description of electron and nuclear spin systems that have a limited number of possible states, and therefore a discrete upper limit of energy they can attain, it is possible to obtain a negative temperature, which is numerically indeed less than absolute zero. However, this is not the macroscopic temperature of the material, but instead the temperature of only very specific degrees of freedom, that are isolated from others and do not exchange energy by virtue of the equipartition theorem.
A negative temperature is experimentally achieved with suitable radio frequency techniques that cause a population inversion of spin states from the ground state. As the energy in the system increases upon population of the upper states, the entropy increases as well, as the system becomes less ordered, but attains a maximum value when the spins are evenly distributed among ground and excited states, after which it begins to decrease, once again achieving a state of higher order as the upper states begin to fill exclusively. At the point of maximum entropy, the temperature function shows the behavior of a singularity, because the slope of the entropy function decreases to zero at first and then turns negative. Since temperature is the inverse of the derivative of the entropy, the temperature formally goes to infinity at this point, and switches to negative infinity as the slope turns negative. At energies higher than this point, the spin degree of freedom therefore exhibits formally a negative thermodynamic temperature. As the energy increases further by continued population of the excited state, the negative temperature approaches zero asymptotically. As the energy of the system increases in the population inversion, a system with a negative temperature is not colder than absolute zero, but rather it has a higher energy than at positive temperature, and may be said to be in fact hotter at negative temperatures. When brought into contact with a system at a positive temperature, energy will be transferred from the negative temperature regime to the positive temperature region.
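The spin picture can be made concrete with a toy model: for N independent two-level spins the entropy is S = k_B ln W(n), where W(n) counts the arrangements of n excited spins, and estimating 1/T = dS/dE by a finite difference shows the temperature diverging at half filling and turning negative beyond it. A minimal sketch (the system size, level spacing and units are arbitrary choices):

```python
from math import comb, log

N = 1000          # number of two-level spins (example size)
eps = 1.0         # energy of the excited level, arbitrary units
k_B = 1.0         # work in units where k_B = 1

def entropy(n_excited: int) -> float:
    # S = k_B ln W, with W the number of ways to excite n_excited of N spins.
    return k_B * log(comb(N, n_excited))

for n in (100, 300, 499, 501, 700, 900):
    dS = entropy(n + 1) - entropy(n - 1)
    dE = 2 * eps                       # energy change over the two-step difference
    inv_T = dS / dE
    print(f"{n:4d} excited spins: 1/T = {inv_T:+.4f}, T = {1.0 / inv_T:+8.1f}")
```

Below half filling 1/T is positive; just above it the sign flips, reproducing the singular behaviour of the temperature function described above.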
Examples of temperature
|Temperature||Kelvin||Celsius||Peak emittance wavelength of black-body radiation|
|Absolute zero (precisely by definition)||0 K||−273.15 °C||cannot be defined|
| ||100 pK||−273.149999999900 °C||29,000 km|
| ||450 pK||−273.14999999955 °C||6,400 km|
|(precisely by definition)||0.001 K||−273.149 °C||2.89777 m (radio, FM band)|
|Water's triple point (precisely by definition)||273.16 K||0.01 °C||10,608.3 nm (long wavelength I.R.)|
|Water's boiling point[A]||373.1339 K||99.9839 °C||7,766.03 nm (mid wavelength I.R.)|
|Incandescent lamp[B]||2500 K||≈2,200 °C||1,160 nm|
|Sun's visible surface[D]||5,778 K||5,505 °C||501.5 nm|
| ||28 kK||28,000 °C||100 nm (far ultraviolet light)|
|Sun's core[E]||16 MK||16 million °C||0.18 nm (X-rays)|
| ||350 MK||350 million °C||8.3×10⁻³ nm|
|Sandia National Labs' Z machine[F]||2 GK||2 billion °C||1.4×10⁻³ nm|
|Core of a high-mass star on its last day[E]||3 GK||3 billion °C||1×10⁻³ nm|
|Merging binary neutron stars||350 GK||350 billion °C||8×10⁻⁶ nm|
| ||1 TK||1 trillion °C||3×10⁻⁶ nm|
|CERN's proton vs nucleus collisions||10 TK||10 trillion °C||3×10⁻⁷ nm|
|Universe 5.391×10⁻⁴⁴ s after the Big Bang[E]||1.417×10³² K||1.417×10³² °C||1.616×10⁻²⁶ nm|
- A For Vienna Standard Mean Ocean Water at one standard atmosphere (101.325 kPa) when calibrated strictly per the two-point definition of thermodynamic temperature.
- B The 2500 K value is approximate. The 273.15 K difference between K and °C is rounded to 300 K to avoid false precision in the Celsius value.
- C For a true black-body (which tungsten filaments are not). Tungsten filaments' emissivity is greater at shorter wavelengths, which makes them appear whiter.
- D Effective photosphere temperature. The 273.15 K difference between K and °C is rounded to 273 K to avoid false precision in the Celsius value.
- E The 273.15 K difference between K and °C is negligible at the precision of these values.
- F For a true black-body (which the plasma was not). The Z machine's dominant emission originated from 40 MK electrons (soft x–ray emissions) within the plasma.
See also
- Scale of temperature
- Atmospheric temperature
- Color temperature
- Dry-bulb temperature
- Heat conduction
- Heat convection
- ISO 1
- Maxwell's demon
- Orders of magnitude (temperature)
- Outside air temperature
- Planck temperature
- Rankine scale
- Relativistic heat conduction
- Stagnation temperature
- Thermal radiation
- Thermodynamic (absolute) temperature
- Body temperature (Thermoregulation)
- Virtual temperature
- Wet Bulb Globe Temperature
- Wet-bulb temperature
- Historically, the Celsius scale was a purely empirical temperature scale defined only by the freezing and boiling points of water. Since the standardization of the kelvin in the International System of Units, it has subsequently been redefined in terms of the equivalent fixing points on the Kelvin scale.
- The cited emission wavelengths are for black bodies in equilibrium. CODATA 2006 recommended value of 2.8977685(51)×10−3 m K used for Wien displacement law constant b.
- "World record in low temperatures". Retrieved 2009-05-05.
- A temperature of 450 ±80 pK in a Bose–Einstein condensate (BEC) of sodium atoms was achieved in 2003 by researchers at MIT. Citation: Cooling Bose–Einstein Condensates Below 500 Picokelvin, A. E. Leanhardt et al., Science 301, 12 Sept. 2003, p. 1515. It's noteworthy that this record's peak emittance black-body wavelength of 6,400 kilometers is roughly the radius of Earth.
- The peak emittance wavelength of 2.89777 m is a frequency of 103.456 MHz
- Measurement was made in 2002 and has an uncertainty of ±3 kelvin. A 1989 measurement produced a value of 5,777.0±2.5 K. Citation: Overview of the Sun (Chapter 1 lecture notes on Solar Physics by Division of Theoretical Physics, Dept. of Physical Sciences, University of Helsinki).
- The 350 MK value is the maximum peak fusion fuel temperature in a thermonuclear weapon of the Teller–Ulam configuration (commonly known as a hydrogen bomb). Peak temperatures in Gadget-style fission bomb cores (commonly known as an atomic bomb) are in the range of 50 to 100 MK. Citation: Nuclear Weapons Frequently Asked Questions, 3.2.5 Matter At High Temperatures. Link to relevant Web page. All referenced data was compiled from publicly available sources.
- Peak temperature for a bulk quantity of matter was achieved by a pulsed-power machine used in fusion physics experiments. The term bulk quantity draws a distinction from collisions in particle accelerators wherein high temperature applies only to the debris from two subatomic particles or nuclei at any given instant. The >2 GK temperature was achieved over a period of about ten nanoseconds during shot Z1137. In fact, the iron and manganese ions in the plasma averaged 3.58±0.41 GK (309±35 keV) for 3 ns (ns 112 through 115). Ion Viscous Heating in a Magnetohydrodynamically Unstable Z Pinch at Over 2×109 Kelvin, M. G. Haines et al., Physical Review Letters 96 (2006) 075003. Link to Sandia's news release.
- Core temperature of a high–mass (>8–11 solar masses) star after it leaves the main sequence on the Hertzsprung–Russell diagram and begins the alpha process (which lasts one day) of fusing silicon–28 into heavier elements in the following steps: sulfur–32 → argon–36 → calcium–40 → titanium–44 → chromium–48 → iron–52 → nickel–56. Within minutes of finishing the sequence, the star explodes as a Type II supernova. Citation: Stellar Evolution: The Life and Death of Our Luminous Neighbors (by Arthur Holland and Mark Williams of the University of Michigan). Link to Web site.
- Based on a computer model that predicted a peak internal temperature of 30 MeV (350 GK) during the merger of a binary neutron star system (which produces a gamma–ray burst). The neutron stars in the model were 1.2 and 1.6 solar masses respectively, were roughly 20 km in diameter, and were orbiting around their barycenter (common center of mass) at about 390 Hz during the last several milliseconds before they completely merged. The 350 GK portion was a small volume located at the pair's developing common core and varied from roughly 1 to 7 km across over a time span of around 5 ms. Imagine two city-sized objects of unimaginable density orbiting each other at the same frequency as the G4 musical note (the 28th white key on a piano). It's also noteworthy that at 350 GK, the average neutron has a vibrational speed of 30% the speed of light and a relativistic mass (m) 5% greater than its rest mass (m0). Torus Formation in Neutron Star Mergers and Well-Localized Short Gamma-Ray Bursts, R. Oechslin et al. of Max Planck Institute for Astrophysics., arXiv:astro-ph/0507099 v2, 22 Feb. 2006. An html summary.
- Results of research by Stefan Bathe using the PHENIX detector on the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in Upton, New York, U.S.A. Bathe has studied gold-gold, deuteron-gold, and proton-proton collisions to test the theory of quantum chromodynamics, the theory of the strong force that holds atomic nuclei together. Link to news release.
- How do physicists study particles? by CERN.
- The Planck frequency equals 1.85487(14)×1043 Hz (which is the reciprocal of one Planck time). Photons at the Planck frequency have a wavelength of one Planck length. The Planck temperature of 1.41679(11)×1032 K equates to a calculated b /T = λmax wavelength of 2.04531(16)×10−26 nm. However, the actual peak emittance wavelength quantizes to the Planck length of 1.61624(12)×10−26 nm.
| http://en.wikipedia.org/wiki/Temperature | 13
50 | SECTION 26.1 THE CONCEPT OF VOLUME
The concept of "volume" applies to objects that are "solid bodies." The "volume" of a "solid body" is the amount of "space" it
occupies. Below are some examples of "solid bodies."
SECTION 26.2 WHAT IS A CUBE?
First you must understand the concept of the solid called a "cube." A
cube looks like the following:
A cube has six "faces," each face is a
square and all six squares are the same size, i.e. congruent.
A cube has 12 "edges" which are the sides of the square
faces. All edges are the same length. Each edge meets another edge at a
SECTION 26.3 HOW IS VOLUME
MEASURED AND WHAT IS A UNIT CUBE?
You are familiar with the following concepts of measurement called "units": inch, foot, yard, mile, etc. I talked about units in
an earlier section. I will start with the most fundamental volume of a solid - "the unit cube or 1 cubic unit."
A cube whose edges are all 1 inch is said to have a
volume of 1 cubic inch. See image below:
A cube whose edges are all 1 foot is said to have a
volume of 1 cubic foot. See image below:
A cube whose edges are all 1 yard is said to have a
volume of 1 cubic yard. See image below:
Now I can continue this process with any form of measurement. For example, a
volume of 1 mi.³, or 1 cubic mile, is a cube whose edges are each 1 mile long.
So now you understand that the basic definition of volume is based on what is
called a "unit cube," which is "a cube whose edges all measure one unit,"
whether that unit be inches, feet, millimeters, or yards.
SECTION 26.4 EXTENDING THE
CONCEPT OF VOLUME?
So what would 10 cubic feet (10 ft.³) mean? Well, it means that you have a solid that would contain 10 unit cubes.
What would 30 cubic miles (30 mi.³) be? It means that you have a solid that would contain 30 unit cubes.
Hopefully you can appreciate the size of the volume of the earth which is
1,097,509,500,000,000,000,000 cubic meters.
SECTION 26.5 IMPORTANT VOLUME FORMULAS
The following volume formulas are important to know:
CUBE: VOLUME = length x width x height = length³ = width³ = height³
Please note that in a cube the length, width and height are all equal.
RECTANGULAR SOLID: VOLUME = length x width x height units³
SPHERE: VOLUME = (4/3)πr³ units³
where r is the radius of the sphere
CONE: VOLUME = (1/3)πr²h units³
where r is the radius of the base and h is the height of the cone.
CYLINDER: VOLUME = πr²h units³
where r is the radius of the base of the cylinder and h is the height of the cylinder
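The formulas above can be collected into a short script. This is only a sketch added for illustration (it is not part of the original chapter); it also checks the worked examples that follow.

```python
import math

def volume_cube(edge):
    return edge ** 3

def volume_rectangular_solid(length, width, height):
    return length * width * height

def volume_sphere(r):
    return (4.0 / 3.0) * math.pi * r ** 3

def volume_cone(r, h):
    return (1.0 / 3.0) * math.pi * r ** 2 * h

def volume_cylinder(r, h):
    return math.pi * r ** 2 * h

# Checking the worked examples that follow:
print(volume_sphere(10))                      # ≈ 4188.790 (cubic feet, for a 10-foot radius)
print(volume_rectangular_solid(30, 20, 50))   # 30000 cubic yards
print(volume_cube(5))                         # 125 cubic miles
print(volume_cone(600, 3200))                 # ≈ 1,206,371,579 cubic inches
print(volume_cylinder(123, 345))              # ≈ 16,397,558.56 cubic kilometers
```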
Find the volumes of the following:
a) A sphere with radius 10 feet.
V = (4/3)π(10)³ = 4188.790 ft.³
Explanation: There are about 4188.79 cubes with each edge of 1 foot that can be packed into this sphere with radius of 10 feet.
b) A rectangular solid with length 30 yards, width 20 yards, height 50 yards.
V = (30)(20)(50) yd.³ = 30,000 yd.³
Explanation: There are 30,000 cubes with each edge of 1 yard that can be packed into this rectangular solid.
c) A cube with each side 5 miles
V = (5)(5)(5) mi.³ = 125 mi.³
Explanation: There are 125 cubes with each edge of 1 mile that can be packed into this cube.
d) A cone with a base radius of 600 inches and a height of 3200 inches
V = (1/3)π(600)²(3200) in.³ = 1,206,371,579 in.³
Explanation: There are 1,206,371,579 cubes with each edge of 1 inch that can be packed into this cone.
e) A cylinder with base radius of 123 kilometers and a height of 345 kilometers
V = π(123)²(345) km.³ = 16,397,558.56 km.³
Explanation: There are 16397558.56 cubes with each edge of 1 kilometer that can be packed into this cylinder. | http://deeringmath.com/precalc/precalcbook/chapter26.htm | 13 |
77 | In mathematics, an equivalence relation is a relation that, loosely speaking, partitions a set so that every element of the set is a member of one and only one cell of the partition. Two elements of the set are considered equivalent (with respect to the equivalence relation) if and only if they are elements of the same cell. The intersection of any two different cells is empty; the union of all the cells equals the original set.
Although various notations are used throughout the literature to denote that two elements a and b of a set are equivalent with respect to an equivalence relation R, the most common are "a ~ b" and "a ≡ b", which are used when R is the obvious relation being referenced, and variations of "a ~R b", "a ≡R b", or "aRb" otherwise.
Simple example
Let the set X = {a, b, c} have the equivalence relation {(a, a), (b, b), (c, c), (b, c), (c, b)}. For this relation the equivalence classes are [a] = {a} and [b] = [c] = {b, c}.
The set of all equivalence classes for this relation is {{a}, {b, c}}.
Equivalence relations
The following are all equivalence relations:
- "Is equal to" on the set of real numbers
- "Has the same birthday as" on the set of all people.
- "Is similar to" on the set of all triangles.
- "Is congruent to" on the set of all triangles.
- "Is congruent to, modulo n" on the integers.
- "Has the same image under a function" on the elements of the domain of the function.
- "Has the same absolute value" on the set of real numbers
- "Has the same cosine" on the set of all angles.
- "Is parallel to" on the set of subspaces of an affine space.
Relations that are not equivalences
- The relation "≥" between real numbers is reflexive and transitive, but not symmetric. For example, 7 ≥ 5 does not imply that 5 ≥ 7. It is, however, a partial order.
- The relation "has a common factor greater than 1 with" between natural numbers greater than 1, is reflexive and symmetric, but not transitive. (Example: The natural numbers 2 and 6 have a common factor greater than 1, and 6 and 3 have a common factor greater than 1, but 2 and 3 do not have a common factor greater than 1).
- The empty relation R on a non-empty set X (i.e. aRb is never true) is vacuously symmetric and transitive, but not reflexive. (If X is also empty then R is reflexive.)
- The relation "is approximately equal to" between real numbers, even if more precisely defined, is not an equivalence relation, because although reflexive and symmetric, it is not transitive, since multiple small changes can accumulate to become a big change. However, if the approximation is defined asymptotically, for example by saying that two functions f and g are approximately equal near some point if the limit of f-g is 0 at that point, then this defines an equivalence relation.
- The relation "is a sibling of" (used to connote pairs of distinct people who have the same parents) on the set of all human beings is not an equivalence relation. Although siblinghood is symmetric (if A is a sibling of B, then B is a sibling of A) and transitive on any 3 distinct people (if A is a sibling of B and C is a sibling of B, then A is a sibling of C, provided A is not C (Note that "is a sibling of" is NOT a transitive relation, since A R B, and B R A implies A R A by transitivity)), it is not reflexive (A cannot be a sibling of A). The small modification, "is a sibling of, or is the same person as", is an equivalence relation.
Connections to other relations
- A partial order is a relation that is reflexive, antisymmetric, and transitive.
- A congruence relation is an equivalence relation whose domain X is also the underlying set for an algebraic structure, and which respects the additional structure. In general, congruence relations play the role of kernels of homomorphisms, and the quotient of a structure by a congruence relation can be formed. In many important cases congruence relations have an alternative representation as substructures of the structure on which they are defined. E.g. the congruence relations on groups correspond to the normal subgroups.
- Equality is both an equivalence relation and a partial order. Equality is also the only relation on a set that is reflexive, symmetric and antisymmetric.
- A strict partial order is irreflexive, transitive, and asymmetric.
- A partial equivalence relation is transitive and symmetric. Transitive and symmetric imply reflexive if and only if for all a∈X, there exists a b∈X such that a~b.
- A reflexive and symmetric relation is a dependency relation, if finite, and a tolerance relation if infinite.
- A preorder is reflexive and transitive.
Well-definedness under an equivalence relation
If ~ is an equivalence relation on X, and P(x) is a property of elements of X, such that whenever x ~ y, P(x) is true if P(y) is true, then the property P is said to be well-defined or a class invariant under the relation ~.
A frequent particular case occurs when f is a function from X to another set Y; if x1 ~ x2 implies f(x1) = f(x2) then f is said to be a morphism for ~, a class invariant under ~, or simply invariant under ~. This occurs, e.g. in the character theory of finite groups. The latter case with the function f can be expressed by a commutative triangle. See also invariant. Some authors use "compatible with ~" or just "respects ~" instead of "invariant under ~".
More generally, a function may map equivalent arguments (under an equivalence relation ~A) to equivalent values (under an equivalence relation ~B). Such a function is known as a morphism from ~A to ~B.
Equivalence class, quotient set, partition
Let X be a nonempty set, and let ~ be an equivalence relation on X. Some definitions:
Equivalence class
The set of all elements of X that are equivalent to one another makes up an equivalence class of X by ~. Let [a] = {x ∈ X : x ~ a} denote the equivalence class to which a belongs. Then all elements of X equivalent to each other are also elements of the same equivalence class.
Quotient set
The set of all possible equivalence classes of X by ~, denoted X/~, is the quotient set of X by ~. If X is a topological space, there is a natural way of transforming X/~ into a topological space; see quotient space for the details.
The projection of ~ is the function π : X → X/~, defined by π(x) = [x], which maps elements of X into their respective equivalence classes by ~.
- Theorem on projections: Let the function f: X → B be such that a ~ b → f(a) = f(b). Then there is a unique function g : X/~ → B, such that f = gπ. If f is a surjection and a ~ b ↔ f(a) = f(b), then g is a bijection.
Equivalence kernel
The equivalence kernel of a function f is the equivalence relation ~ defined by x ~ y if and only if f(x) = f(y).
Partition
A partition of X is a set P of nonempty subsets of X, such that every element of X is an element of a single element of P. Each element of P is a cell of the partition. Moreover, the elements of P are pairwise disjoint and their union is X.
Counting possible partitions
Let X be a finite set with n elements. Since every equivalence relation over X corresponds to a partition of X, and vice versa, the number of possible equivalence relations on X equals the number of distinct partitions of X, which is the nth Bell number Bn:
B_n = (1/e) · Σ_{k=0}^{∞} k^n / k!
where the above is one of the ways to write the nth Bell number.
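As a quick illustration (added here, not part of the original article), the Bell numbers can also be generated with the Bell-triangle recurrence instead of the infinite sum; a minimal sketch:

```python
def bell_numbers(n_max):
    """Bell numbers B_0 .. B_n_max via the Bell triangle recurrence."""
    bells = [1]
    row = [1]
    for _ in range(n_max):
        new_row = [row[-1]]              # each row starts with the last entry of the previous row
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(5))  # [1, 1, 2, 5, 15, 52] — e.g. a 3-element set has 5 partitions
```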
Fundamental theorem of equivalence relations
- An equivalence relation ~ on a set X partitions X.
- Conversely, corresponding to any partition of X, there exists an equivalence relation ~ on X.
In both cases, the cells of the partition of X are the equivalence classes of X by ~. Since each element of X belongs to a unique cell of any partition of X, and since each cell of the partition is identical to an equivalence class of X by ~, each element of X belongs to a unique equivalence class of X by ~. Thus there is a natural bijection from the set of all possible equivalence relations on X and the set of all partitions of X.
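A small sketch of the theorem in action (an illustration added here, not from the original article): given a set and a relation assumed to be an equivalence relation, the code below groups the elements into the cells of the induced partition. The "is congruent to, modulo n" relation used in the example comes from the list of equivalence relations above.

```python
def equivalence_classes(elements, related):
    """Partition `elements` into equivalence classes, given a predicate `related(a, b)`
    that is assumed to be reflexive, symmetric and transitive."""
    classes = []
    for x in elements:
        for cls in classes:
            if related(x, cls[0]):     # transitivity: comparing with one representative suffices
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

# Example: "is congruent to, modulo 3" on a set of integers
print(equivalence_classes(range(10), lambda a, b: (a - b) % 3 == 0))
# [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```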
Comparing equivalence relations
If ~ and ≈ are two equivalence relations on the same set S, and a~b implies a≈b for all a,b ∈ S, then ≈ is said to be a coarser relation than ~, and ~ is a finer relation than ≈. Equivalently,
- ~ is finer than ≈ if every equivalence class of ~ is a subset of an equivalence class of ≈, and thus every equivalence class of ≈ is a union of equivalence classes of ~.
- ~ is finer than ≈ if the partition created by ~ is a refinement of the partition created by ≈.
The equality equivalence relation is the finest equivalence relation on any set, while the trivial relation that makes all pairs of elements related is the coarsest.
The relation "~ is finer than ≈" on the collection of all equivalence relations on a fixed set is itself a partial order relation.
Generating equivalence relations
- Given any set X, there is an equivalence relation over the set [X→X] of all possible functions X→X. Two such functions are deemed equivalent when their respective sets of fixpoints have the same cardinality, corresponding to cycles of length one in a permutation. Functions equivalent in this manner form an equivalence class on [X→X], and these equivalence classes partition [X→X].
- An equivalence relation ~ on X is the equivalence kernel of its surjective projection π : X → X/~. Conversely, any surjection between sets determines a partition on its domain, the set of preimages of singletons in the codomain. Thus an equivalence relation over X, a partition of X, and a projection whose domain is X, are three equivalent ways of specifying the same thing.
- The intersection of any collection of equivalence relations over X (viewed as a subset of X × X) is also an equivalence relation. This yields a convenient way of generating an equivalence relation: given any binary relation R on X, the equivalence relation generated by R is the smallest equivalence relation containing R. Concretely, R generates the equivalence relation a ~ b if and only if there exist elements x1, x2, ..., xn in X such that a = x1, b = xn, and (xi,xi+ 1)∈R or (xi+1,xi)∈R, i = 1, ..., n-1.
- Note that the equivalence relation generated in this manner can be trivial. For instance, the equivalence relation ~ generated by a total order on X relates every pair of elements, and so has exactly one equivalence class, X itself. (A union-find sketch of the generating construction follows this list.)
- Equivalence relations can construct new spaces by "gluing things together." Let X be the unit Cartesian square [0,1] × [0,1], and let ~ be the equivalence relation on X defined by ∀a, b ∈ [0,1] ((a, 0) ~ (a, 1) ∧ (0, b) ~ (1, b)). Then the quotient space X/~ can be naturally identified with a torus: take a square piece of paper, bend and glue together the upper and lower edge to form a cylinder, then bend the resulting cylinder so as to glue together its two open ends, resulting in a torus.
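Here is the union-find sketch referred to above (an illustration added here, not part of the original article): it computes the smallest equivalence relation containing a given binary relation R, returned as the partition it induces.

```python
def generated_equivalence(elements, pairs):
    """Smallest equivalence relation on `elements` containing the binary relation `pairs`,
    returned as the partition it induces (computed with a simple union-find)."""
    parent = {x: x for x in elements}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in pairs:                      # merge the components of a and b
        parent[find(a)] = find(b)

    classes = {}
    for x in elements:
        classes.setdefault(find(x), []).append(x)
    return list(classes.values())

# R = {(1, 2), (2, 3)} on {1, 2, 3, 4} generates the classes {1, 2, 3} and {4}.
print(generated_equivalence([1, 2, 3, 4], [(1, 2), (2, 3)]))
```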
Algebraic structure
Much of mathematics is grounded in the study of equivalences, and order relations. Lattice theory captures the mathematical structure of order relations. Even though equivalence relations are as ubiquitous in mathematics as order relations, the algebraic structure of equivalences is not as well known as that of orders. The former structure draws primarily on group theory and, to a lesser extent, on the theory of lattices, categories, and groupoids.
Group theory
Just as order relations are grounded in ordered sets, sets closed under pairwise supremum and infimum, equivalence relations are grounded in partitioned sets, which are sets closed under bijections and preserve partition structure. Since all such bijections map an equivalence class onto itself, such bijections are also known as permutations. Hence permutation groups (also known as transformation groups) and the related notion of orbit shed light on the mathematical structure of equivalence relations.
Let '~' denote an equivalence relation over some nonempty set A, called the universe or underlying set. Let G denote the set of bijective functions over A that preserve the partition structure of A: ∀x ∈ A ∀g ∈ G (g(x) ∈ [x]). Then the following three connected theorems hold:
- ~ partitions A into equivalence classes. (This is the Fundamental Theorem of Equivalence Relations, mentioned above);
- Given a partition of A, G is a transformation group under composition, whose orbits are the cells of the partition‡;
- Given a transformation group G over A, there exists an equivalence relation ~ over A, whose equivalence classes are the orbits of G.
In sum, given an equivalence relation ~ over A, there exists a transformation group G over A whose orbits are the equivalence classes of A under ~.
This transformation group characterisation of equivalence relations differs fundamentally from the way lattices characterize order relations. The arguments of the lattice theory operations meet and join are elements of some universe A. Meanwhile, the arguments of the transformation group operations composition and inverse are elements of a set of bijections, A → A.
Moving to groups in general, let H be a subgroup of some group G. Let ~ be an equivalence relation on G, such that a ~ b ↔ (ab−1 ∈ H). The equivalence classes of ~—also called the orbits of the action of H on G—are the right cosets of H in G. Interchanging a and b yields the left cosets.
‡Proof. Let function composition interpret group multiplication, and function inverse interpret group inverse. Then G is a group under composition, meaning that ∀x ∈ A ∀g ∈ G ([g(x)] = [x]), because G satisfies the following four conditions:
- G is closed under composition. The composition of any two elements of G exists, because the domain and codomain of any element of G is A. Moreover, the composition of bijections is bijective;
- Existence of identity function. The identity function, I(x)=x, is an obvious element of G;
- Existence of inverse function. Every bijective function g has an inverse g−1, such that gg−1 = I;
- Composition associates. f(gh) = (fg)h. This holds for all functions over all domains.
Let f and g be any two elements of G. By virtue of the definition of G, [g(f(x))] = [f(x)] and [f(x)] = [x], so that [g(f(x))] = [x]. Hence G is also a transformation group (and an automorphism group) because function composition preserves the partitioning of A.
Related thinking can be found in Rosen (2008: chpt. 10).
Categories and groupoids
Let G be a set and let "~" denote an equivalence relation over G. Then we can form a groupoid representing this equivalence relation as follows. The objects are the elements of G, and for any two elements x and y of G, there exists a unique morphism from x to y if and only if x~y.
The advantages of regarding an equivalence relation as a special case of a groupoid include:
- Whereas the notion of "free equivalence relation" does not exist, that of a free groupoid on a directed graph does. Thus it is meaningful to speak of a "presentation of an equivalence relation," i.e., a presentation of the corresponding groupoid;
- Bundles of groups, group actions, sets, and equivalence relations can be regarded as special cases of the notion of groupoid, a point of view that suggests a number of analogies;
- In many contexts "quotienting," and hence the appropriate equivalence relations often called congruences, are important. This leads to the notion of an internal groupoid in a category.
The possible equivalence relations on any set X, when ordered by set inclusion, form a complete lattice, called Con X by convention. The canonical map ker: X^X → Con X, relates the monoid X^X of all functions on X and Con X. ker is surjective but not injective. Less formally, the equivalence relation ker on X, takes each function f: X→X to its kernel ker f. Likewise, ker(ker) is an equivalence relation on X^X.
Equivalence relations and mathematical logic
Equivalence relations are a ready source of examples or counterexamples. For example, an equivalence relation with exactly two infinite equivalence classes is an easy example of a theory which is ω-categorical, but not categorical for any larger cardinal number.
An implication of model theory is that the properties defining a relation can be proved independent of each other (and hence necessary parts of the definition) if and only if, for each property, examples can be found of relations not satisfying the given property while satisfying all the other properties. Hence the three defining properties of equivalence relations can be proved mutually independent by the following three examples:
- Reflexive and transitive: The relation ≤ on N. Or any preorder;
- Symmetric and transitive: The relation R on N, defined as aRb ↔ ab ≠ 0. Or any partial equivalence relation;
- Reflexive and symmetric: The relation R on Z, defined as aRb ↔ "a − b is divisible by at least one of 2 or 3." Or any dependency relation.
Properties definable in first-order logic that an equivalence relation may or may not possess include:
- The number of equivalence classes is finite or infinite;
- The number of equivalence classes equals the (finite) natural number n;
- All equivalence classes have infinite cardinality;
- The number of elements in each equivalence class is the natural number n.
Euclidean relations
- Things which equal the same thing also equal one another.
Theorem. If a relation is Euclidean and reflexive, it is also symmetric and transitive.
- (aRc ∧ bRc) → aRb [a/c] = (aRa ∧ bRa) → aRb [reflexive; erase T∧] = bRa → aRb. Hence R is symmetric.
- (aRc ∧ bRc) → aRb [symmetry] = (aRc ∧ cRb) → aRb. Hence R is transitive.
Hence an equivalence relation is a relation that is Euclidean and reflexive. The Elements mentions neither symmetry nor reflexivity, and Euclid probably would have deemed the reflexivity of equality too obvious to warrant explicit mention.
See also
- Garrett Birkhoff and Saunders Mac Lane, 1999 (1967). Algebra, 3rd ed. p. 35, Th. 19. Chelsea.
- Wallace, D. A. R., 1998. Groups, Rings and Fields. p. 31, Th. 8. Springer-Verlag.
- Dummit, D. S., and Foote, R. M., 2004. Abstract Algebra, 3rd ed. p. 3, Prop. 2. John Wiley & Sons.
- Garrett Birkhoff and Saunders Mac Lane, 1999 (1967). Algebra, 3rd ed. p. 33, Th. 18. Chelsea.
- Rosen (2008), pp. 243-45. Less clear is §10.3 of Bas van Fraassen, 1989. Laws and Symmetry. Oxford Univ. Press.
- Wallace, D. A. R., 1998. Groups, Rings and Fields. Springer-Verlag: 202, Th. 6.
- Dummit, D. S., and Foote, R. M., 2004. Abstract Algebra, 3rd ed. John Wiley & Sons: 114, Prop. 2.
- Bas van Fraassen, 1989. Laws and Symmetry. Oxford Univ. Press: 246.
- Wallace, D. A. R., 1998. Groups, Rings and Fields. Springer-Verlag: 22, Th. 6.
- Wallace, D. A. R., 1998. Groups, Rings and Fields. Springer-Verlag: 24, Th. 7.
- Borceux, F. and Janelidze, G., 2001. Galois theories, Cambridge University Press, ISBN 0-521-80309-8
- Brown, Ronald, 2006. Topology and Groupoids. Booksurge LLC. ISBN 1-4196-2722-8.
- Castellani, E., 2003, "Symmetry and equivalence" in Brading, Katherine, and E. Castellani, eds., Symmetries in Physics: Philosophical Reflections. Cambridge Univ. Press: 422-433.
- Robert Dilworth and Crawley, Peter, 1973. Algebraic Theory of Lattices. Prentice Hall. Chpt. 12 discusses how equivalence relations arise in lattice theory.
- Higgins, P.J., 1971. Categories and groupoids. Van Nostrand. Downloadable since 2005 as a TAC Reprint.
- John Randolph Lucas, 1973. A Treatise on Time and Space. London: Methuen. Section 31.
- Rosen, Joseph (2008) Symmetry Rules: How Science and Nature are Founded on Symmetry. Springer-Verlag. Mostly chpts. 9,10.
- Raymond Wilder (1965) Introduction to the Foundations of Mathematics 2nd edition, Chapter 2-8: Axioms defining equivalence, pp 48–50, John Wiley & Sons. | http://en.wikipedia.org/wiki/Equivalence_relation | 13 |
50 | Protocol Design: How Many Bytes?
The Internet is built on protocols. Protocols take the raw, unstructured capabilities of the network and, using rules and restrictions, determines what and how programs can communicate. Choosing the right rules is important: they determine to a large degree the security, ease of implementation and performance of the protocol. This is the first in a series of articles discussing basic concepts of protocol design. The issue we will start with is how a protocol knows how much data it is going to receive. Protocols are after all mostly about sending and receiving data.
Before we begin, it's worth noting some basic assumptions. Unless noted otherwise, the protocols being discussed all run over a connection-oriented transport, typically TCP. There is an initiating side that starts the connection and a receiving side that accepts it. In many cases these will match the concepts of "client" and "server", and will have different behavior depending on which they are. The connection is assumed to transport a stream of bytes in an ordered, reliable fashion.
Many protocols involve sending chunks of "payload" bytes -- data which is not part of the protocol itself. An email is a structured sequence of bytes, so when an email is sent or received, the receiver side of the protocol needs to know when the email data ends and the protocol begins again. An email that contains a transcript of a POP3 session should not be able to confuse a POP3 client that is downloading it. In addition, commands and messages of the protocol itself are also structured, and the receiving side needs to know when they end and the next message begins.
The first approach that can be used is an end-of-data indicator: some special way of marking when the transfer of the data is over. For example, when sending a payload, the sending side will send a message meaning the data will now be sent, then the actual payload, and finally a message saying there is no more data. One of the Internet's oldest protocols, SMTP, uses this technique to allow clients to send emails to the server. SMTP is documented in RFC 2821, an updated version of RFC 821, which was written in 1982. In the SMTP protocol, a client connects to a server, sends a series of commands indicating from whom and to whom the email is being sent, the body of the email, and then the server deals with delivery of the message.
SMTP follows (or perhaps, given its age, leads) the convention of "line-based" protocols. An SMTP session is composed of a series of lines: a "line" is a sequence of bytes terminated with CRLF, the bytes with the hex values 0x0D and 0x0A. A line can be a command, a response to a command, or part of a message. Each of these lines recreates in its own small way the end-of-data indicator method for finding the end of a message, in this case CRLF. The basic units of the protocol, the lines, can be any length; the receiving side only knows when they are over. As a result, all SMTP servers set an arbitrary length on the length of lines they accept, otherwise a simple connection sending an infinite stream of non-CRLF characters would use up the server's memory. Here is an example of a simple SMTP session between a client and server, taken from the RFC (note that each printed line would be sent with a CRLF after it):
S: 220 foo.com Simple Mail Transfer Service Ready
C: EHLO bar.com
S: 250-foo.com greets bar.com
S: 250-8BITMIME
S: 250-SIZE
S: 250-DSN
S: 250 HELP
C: MAIL FROM:<[email protected]>
S: 250 OK
C: RCPT TO:<[email protected]>
S: 250 OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: Blah blah blah...
C: ...etc. etc. etc.
C: .
S: 250 OK
C: QUIT
S: 221 foo.com Service closing transmission channel
Looking at the example carefully, we'll note two more examples of the
end-of-data indicator. There are multiple responses to the EHLO command,
with response code 250, and the last response starts with "250 " (with a
space) rather than "250-", to indicate that no more responses are
forthcoming. A more interesting use is the "DATA" command, which is used
by the client to send the body of the email. The email is sent as a
series of lines, and a line with a single "." (a period) indicates the
end of the email body.
On the face of it this is a reasonable approach, but there are some
serious issues which have led modern protocols to choose other solutions.
Consider what would happen if the email contained a line consisting solely of a
"." character -- the server would get confused and think the email
had ended, even though the period was actually part of the email, not an SMTP
command. In order to prevent this, the SMTP protocol specifies that when
sending the contents of a "
DATA" command, any line beginning with
a period must have a period inserted in the beginning. The receiver
checks each incoming line, and if it has a period followed by other
characters, the period is removed, otherwise this is the end of data.
While this does work, it is inelegant and inefficient. A cleaner
solution would be to use length prefixing. Instead of a bare "DATA",
the client implementation of an imaginary improved SMTP protocol would
also send the length of the message, for example "DATA 1235" for a
message that is 1235 bytes long. The server would then read exactly
1235 bytes, and then revert back to line-based mode. No quoting would
be necessary for the client,
no unquoting for the server. In practice, SMTP has an extension for
sending the size of the message, but it is mostly used to allow the
server to deny overlarge messages, and the server still must use the
period indicator method to detect the end of the message.
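To make the length-prefixing idea concrete, here is a sketch of the receiving side for a hypothetical framing of the form "<decimal length>CRLF<payload>" (an illustration of the technique only, not the actual SMTP or HTTP wire format):

```python
import socket

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes; a single recv() may return fewer bytes than asked for."""
    chunks = []
    while n > 0:
        data = sock.recv(n)
        if not data:
            raise ConnectionError("connection closed before the full message arrived")
        chunks.append(data)
        n -= len(data)
    return b"".join(chunks)

def recv_length_prefixed(sock: socket.socket) -> bytes:
    """Read one '<decimal length>\\r\\n<payload>' frame and return the payload."""
    line = b""
    while not line.endswith(b"\r\n"):        # read the length line byte by byte (simple, not fast)
        line += recv_exact(sock, 1)
    return recv_exact(sock, int(line.strip()))
```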
HTTP, the protocol used for what is commonly referred to as "the Web", uses length prefixing to indicate the length of a document it is returning in response to a client request (the headers are still sent using CRLF terminated lines). Here is a sample HTTP server response. Notice that the body is separated from the headers by an extra CRLF, and that the body can be any 12 bytes; there is no need for quoting nor any restrictions on their values.
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12

0123456789ab
While quite a nice idea, length prefixing has a problem of
its own: it assumes the length of the data is known in advance. This is
certainly a valid assumption when sending the contents of a file, but when
generating dynamic content the length of the data is not known until all the
data is available. In theory it is possible to wait until all the data has
been generated, and then send it along with its length. In practice this is
inefficient, as it slows down the data transfer and requires extra temporary
storage, either in memory or on disk. One solution, used in HTTP 1.0, is to
allow omitting the "
Content-Length" header, and indicating the end of the data
by closing the connection. This solution is also problematic: it makes
it hard to distinguish a failure in the transport (such as a broken TCP connection)
from the end of the data, and it is also inefficient since multiple HTTP requests
to the same server require opening multiple TCP connections.
The updated HTTP 1.1
presented a solution that did not have these problems, a combination
of length prefixing and an end of data indicator. When data is
generated on the fly, it is assumed to be generated as a series of
"chunks", each chunk being at least 1 byte long. An HTTP response can
indicate that it is returning a chunked response, in which case it
returns the data as a series of length-prefixed chunks. The end of the
data is indicated by sending a chunk whose length is 0. A chunk's
length is encoded in hexadecimal numerals, and prefixed with CRLF, after which the
chunk is sent. Here is an example HTTP response using chunked encoding
(new lines indicate a CRLF). The "
a" means the next chunk
is 10 bytes long, the "
3" means the next chunk is 3 bytes
long, and the "
0" indicates the end of the response.
HTTP/1.1 200 OK
Content-type: text/plain
Transfer-encoding: chunked

a
0123456789
3
abc
0
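A sketch of the receiving side for such a chunked body (illustrative only; it ignores the chunk extensions and trailer headers that HTTP/1.1 also allows):

```python
import io

def decode_chunked(reader):
    """Decode a chunked HTTP body from a binary file-like object positioned after the headers."""
    body = b""
    while True:
        size = int(reader.readline().strip(), 16)   # chunk length, in hexadecimal
        if size == 0:                               # the zero-length chunk ends the body
            reader.readline()                       # consume the final CRLF
            return body
        body += reader.read(size)
        reader.readline()                           # consume the CRLF after the chunk data

example = b"a\r\n0123456789\r\n3\r\nabc\r\n0\r\n\r\n"
print(decode_chunked(io.BytesIO(example)))          # b'0123456789abc'
```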
End of data indicators versus length prefixing are just one of the issues protocol designers must deal with, but one which influences many other aspects. In future articles we will discuss syntax and structure, state and statelessness, handling multiple requests and more.
| http://www.xml.com/pub/a/ws/2003/11/25/protocols.html | 13
99 | Science Fair Project Encyclopedia
In mathematics, a number is called an eigenvalue of a matrix if there exists a non-zero vector such that the matrix times the vector is equal to the same vector multiplied by the eigenvalue. This vector is then called the eigenvector associated with the eigenvalue.
The eigenvalues of a matrix or a differential operator often have important physical significance. In classical mechanics the eigenvalues of the governing equations typically correspond to the natural frequencies of vibration (see resonance). In quantum mechanics, the eigenvalues of an operator corresponding to some observable variable are those values of the observable that have non-zero probability of occurring.
The word eigenvalue comes from the German Eigenwert which means "proper or characteristic value."
Formally, we define eigenvectors and eigenvalues as follows. Let A be an n-by-n matrix of real number or complex numbers (see below for generalizations). We say that λ ∈ C is an eigenvalue of A with eigenvector v ∈ Cn if
- Av = λv.
The spectrum of A, denoted σ(A), is the set of all eigenvalues.
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.
Symbolic computations using the characteristic polynomial
The eigenvalues of a matrix are the zeros of its characteristic polynomial. Indeed, if λ is an eigenvalue of A with eigenvector v, then (A - λI)v = 0, where I denotes the identity matrix. This is only possible if the determinant of A - λI vanishes. But the characteristic polynomial is defined to be pA(λ) = det(A - λI).
It follows that we can compute all the eigenvalues of a matrix A by solving the equation pA(λ) = 0. The fundamental theorem of algebra says that this equation has at least one solution, so every matrix has at least one eigenvalue.
Main article: eigenvalue algorithm.
The Abel-Ruffini theorem implies that there is no general algebraic formula for the zeros of the characteristic polynomial when its degree is five or more. Therefore, general eigenvalue algorithms are iterative. The easiest method is power iteration: we choose a random vector v and compute Av, A²v, A³v, ... This sequence will almost always converge to an eigenvector corresponding to the dominant eigenvalue. This algorithm is easy, but not very useful by itself. However, popular methods such as the QR algorithm are based on it.
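A minimal sketch of the power iteration just described (an illustration added here; numpy is assumed for the linear algebra, and the example matrix is my own):

```python
import numpy as np

def power_iteration(A, iterations=1000, tol=1e-12):
    """Estimate the dominant eigenvalue and an associated eigenvector of A."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iterations):
        w = A @ v                      # the core step: repeatedly apply A
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v            # Rayleigh quotient estimate of the eigenvalue
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 3 and 1
print(power_iteration(A))                 # converges toward the dominant eigenvalue 3
```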
Let us determine the eigenvalues of the matrix
We first compute the characteristic polynomial of A:
This polynomial factorizes as p(λ) = - (λ - 2)(λ - 1)(λ + 1). Therefore, the eigenvalues of A are 2, 1 and −1.
The (algebraic) multiplicity of an eigenvalue λ of A is the order of λ as a zero of the characteristic polynomial of A; in other words, it is the number of factors t − λ in the characteristic polynomial. An n-by-n matrix has n eigenvalues, counted according to their algebraic multiplicity, because its characteristic polynomial has degree n.
An eigenvalue of algebraic multiplicity 1 is called a simple eigenvalue.
Occasionally, in an article on matrix theory, one may read a statement like
- "the eigenvalues of a matrix A are 4,4,3,3,3,2,2,1,"
meaning that the algebraic multiplicity of 4 is two, of 3 is three, of 2 is two and of 1 is one. This style is used because algebraic multiplicity is the key to many mathematical proofs in matrix theory.
The geometric multiplicity of an eigenvalue λ is the dimension of the associated eigenspace, which consists of all the eigenvectors associated with λ; in other words, it is the nullity of the matrix λI − A. The geometric multiplicity is less than or equal to the algebraic multiplicity.
Consider for example the matrix
A = [ 1 1 ]
    [ 0 1 ]
It has only one eigenvalue, namely λ = 1. The characteristic polynomial is (λ - 1)², so this eigenvalue has algebraic multiplicity 2. However, the associated eigenspace is spanned by the vector (1, 0)^T, so the geometric multiplicity is only 1.
The spectrum is invariant under similarity transformations: the matrices A and P-1AP have the same eigenvalues for any matrix A and any invertible matrix P. The spectrum is also invariant under transposition: the matrices A and AT have the same eigenvalues.
A matrix is invertible if and only if zero is not an eigenvalue of the matrix.
A matrix is diagonalizable if and only if the algebraic and geometric multiplicities coincide for all its eigenvalues. In particular, an n-by-n matrix is diagonalizable if it has n different eigenvalues.
The location of the spectrum is often restricted if the matrix has a special form:
- All eigenvalues of a Hermitian matrix (A = A*) are real. Furthermore, all eigenvalues of a positive-definite matrix (v*Av > 0 for all vectors v) are positive.
- All eigenvalues of a skew-Hermitian matrix (A = −A*) are purely imaginary.
- All eigenvalues of a unitary matrix (A⁻¹ = A*) have absolute value one.
- The eigenvalues of a triangular matrix are the entries on the main diagonal. This holds a fortiori for diagonal matrices.
Suppose that A is an m-by-n matrix, with m ≤ n, and that B is an n-by-m matrix. Then BA has the same eigenvalues as AB plus n − m eigenvalues equal to zero.
Extensions and generalizations
Eigenvalues of an operator
Suppose we have a linear operator A mapping the vector space V to itself. As in the matrix case, we say that λ ∈ C is an eigenvalue of A if there exists a nonzero v ∈ V such that Av = λv.
Suppose now that A is a bounded linear operator on a Banach space V. We say that λ ∈ C is a spectral value of A if the operator A - λI is not invertible, where I denotes the identity operator. Note that by the closed graph theorem, if a bounded operator has an inverse, the inverse is necessarily bounded. The set of all spectral values is the spectrum of A.
If V is finite dimensional, then the spectrum of A is the same as the set of eigenvalues of A. This follows from the fact that on finite-dimensional spaces injectivity of a linear operator A is equivalent to surjectivity of A. However, an operator on an infinite-dimensional space may have no eigenvalues at all, while it always has spectral values.
Eigenvalues of a matrix with entries from a ring
Suppose that A is a square matrix with entries in a ring R. An element λ ∈ R is called a right eigenvalue of A if there exists a nonzero column vector x such that Ax=λx, or a left eigenvalue if there exists a nonzero row vector y such that yA=yλ.
If R is commutative, the left eigenvalues of A are exactly the right eigenvalues of A and are just called eigenvalues. If R is not commutative, e.g. quaternions, they may be different.
Eigenvalues of a graph
An eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix I − T^(−1/2) A T^(−1/2), where T is a diagonal matrix holding the degree of each vertex, and in T^(−1/2), 0 is substituted for 0^(−1/2).
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Eigenvalue | 13
109 | Specific impulse (usually abbreviated Isp) is a way to describe the efficiency of rocket and jet engines. It represents the force with respect to the amount of propellant used per unit time. If the "amount" of propellant is given in terms of mass (such as in kilograms), then specific impulse has units of velocity. If it is given in terms of weight (such as in kiloponds), then specific impulse has units of time. The conversion constant between the two versions of specific impulse is g. The higher the specific impulse, the lower the propellant flow rate required for a given thrust, and in the case of a rocket the less propellant is needed for a given delta-v per the Tsiolkovsky rocket equation.
The actual exhaust velocity is the average speed that the exhaust jet actually leaves the vehicle with. The effective exhaust velocity is the exhaust velocity that would be required to produce the same thrust in a vacuum. The two are identical for an ideal rocket working in a vacuum, but are radically different for an air-breathing jet engine that obtains extra thrust by accelerating air. Specific impulse and effective exhaust velocity are proportional.
Specific impulse is a useful value to compare engines, much like miles per gallon or liters per 100 kilometers is used for cars. A propulsion method with a higher specific impulse is more propellant-efficient. Another number that measures the same thing, usually used for air breathing jet engines, is specific fuel consumption. Specific fuel consumption is inversely proportional to specific impulse and effective exhaust velocity.
General considerations
The amount of propellant is normally measured either in units of mass or weight. If mass is used, specific impulse is an impulse per unit mass, which dimensional analysis shows to be a unit of speed, and so specific impulses are often measured in meters per second and are often termed effective exhaust velocity. However, if propellant weight is used instead, an impulse divided by a force (weight) turns out to be a unit of time, and so specific impulses are measured in seconds. These two formulations are both widely used and differ from each other by a factor of g, the dimensioned constant of gravitational acceleration at the surface of the Earth.
Note that the gain of momentum of a rocket (including fuel) per unit time is not equal to the thrust, because the momentum that the fuel has while in the rocket has to be subtracted to the extent that it is used, i.e., the gain of momentum of a rocket per unit time is equal to the thrust, minus the velocity of the rocket multiplied by the amount of fuel used per unit time. (This gain of momentum of the rocket is the negative of the momentum of the exhaust gas.) See also change of impulse of a variable mass.
The higher the specific impulse, the less propellant is needed to produce a given thrust during a given time. In this regard a propellant is more efficient if the specific impulse is higher. This should not be confused with energy efficiency, which can even decrease as specific impulse increases, since propulsion systems that give high specific impulse require high energy to do so.
In addition it is important that thrust and specific impulse not be confused with one another. The specific impulse is a measure of the impulse per unit of propellant that is expended, while thrust is a measure of the momentary or peak force supplied by a particular engine. In many cases, propulsion systems with very high specific impulses—some ion thrusters reach 10,000 seconds—produce low thrusts.
When calculating specific impulse, only propellant that is carried with the vehicle before use is counted. For a chemical rocket the propellant mass therefore would include both fuel and oxidizer; for air-breathing engines only the mass of the fuel is counted, not the mass of air passing through the engine.
| Units | Specific impulse (by weight) | Specific impulse (by mass) | Effective exhaust velocity | Specific fuel consumption |
| SI | = X seconds | = 9.8066·X N·s/kg | = 9.8066·X m/s | = (101,972/X) g/(kN·s) |
| Imperial units | = X seconds | = X lbf·s/lb | = 32.16·X ft/s | = (3,600/X) lb/(lbf·h) |
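As a quick sanity check on the table above, the conversions can be scripted. The following Python sketch is not part of the original article; it simply applies standard gravity as the conversion constant between the unit systems.

```python
G0 = 9.80665  # m/s^2, standard gravity, the conversion constant between the unit systems

def isp_seconds_to_exhaust_velocity(isp_s):
    """Specific impulse in seconds -> effective exhaust velocity in m/s."""
    return isp_s * G0

def exhaust_velocity_to_isp_seconds(ve_m_s):
    """Effective exhaust velocity in m/s -> specific impulse in seconds."""
    return ve_m_s / G0

def isp_seconds_to_sfc(isp_s):
    """Specific impulse in seconds -> specific fuel consumption in g/(kN*s)."""
    return 1.0e6 / (G0 * isp_s)  # 1e6/(9.80665*X) is the 101,972/X entry in the table

if __name__ == "__main__":
    print(isp_seconds_to_exhaust_velocity(453))  # ~4442 m/s
    print(isp_seconds_to_sfc(453))               # ~225 g/(kN*s)
```

Applied to the 453-second figure quoted later for the Space Shuttle Main Engine, the sketch reproduces the roughly 4,440 m/s and 225 g/(kN·s) values that appear in the tables below.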
By far the most common unit used for specific impulse today is the second, and this is used both in the SI world as well as where Imperial units are used. Its chief advantages are that its units and numerical value are identical everywhere, and essentially everyone understands it. Nearly all manufacturers quote their engine performance in these units and it is also useful for specifying aircraft engine performance.
The effective exhaust velocity in units of m/s is also in reasonably common usage. For rocket engines it is reasonably intuitive, although for many rocket engines the effective exhaust speed is considerably different from the actual exhaust speed due to, for example, fuel and oxidizer that is dumped overboard after powering turbo-pumps. For air-breathing engines the effective exhaust velocity is not physically meaningful, although it can be used for comparison purposes nevertheless.
The N•s/kg is not uncommonly seen, and is numerically equal to the effective exhaust velocity in m/s (from Newton's second law and the definition of the Newton.)
Another equivalent unit is specific fuel consumption. This has units of g/(kN·s) or lb/(lbf·h) and is inversely proportional to specific impulse. Specific fuel consumption is used extensively for describing the performance of air-breathing jet engines.
Specific impulse in seconds
General definition
For all vehicles, specific impulse (impulse per unit weight-on-Earth of propellant) in seconds can be defined by the following equation:

$$F_{\text{thrust}} = g_0 \, I_{\text{sp}} \, \dot{m}$$

where:
- F_thrust is the thrust obtained from the engine, in newtons (or poundals).
- I_sp is the specific impulse measured in seconds.
- ṁ is the mass flow rate in kg/s (lb/s), which is the negative of the time rate of change of the vehicle's mass, since propellant is being expelled.
- g_0 is the acceleration at the Earth's surface, in m/s² (or ft/s²).
This Isp in seconds value is somewhat physically meaningful—if an engine's thrust could be adjusted to equal the initial weight of its propellant (measured at one standard gravity), then Isp is the duration the propellant would last.
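As a minimal illustration of this definition, the Python sketch below computes Isp in seconds from a thrust and a propellant mass flow rate. The numbers are invented round figures chosen only to show the arithmetic, not data for any particular engine.

```python
G0 = 9.80665  # m/s^2, standard gravity

def specific_impulse_seconds(thrust_n, mdot_kg_s):
    """Specific impulse in seconds, I_sp = F / (g0 * mdot)."""
    return thrust_n / (G0 * mdot_kg_s)

# Hypothetical engine: 2.0 MN of thrust while expelling 450 kg of propellant per second.
print(specific_impulse_seconds(2.0e6, 450.0))  # ~453 s
```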
The advantage of this formulation is that it may be used for rockets, where all the reaction mass is carried on board, as well as aeroplanes, where most of the reaction mass is taken from the atmosphere. In addition, it gives a result that is independent of units used (provided the unit of time used is the second).
In rocketry, where the only reaction mass is the propellant, an equivalent way of calculating the specific impulse in seconds is also frequently used. In this sense, specific impulse is defined as the thrust integrated over time per unit weight-on-Earth of the propellant:

$$I_{\text{sp}} = \frac{v_e}{g_0}$$

where:
- I_sp is the specific impulse measured in seconds,
- v_e is the average exhaust speed along the axis of the engine (in m/s or ft/s),
- g_0 is the acceleration at the Earth's surface (in m/s² or ft/s²).
In rockets, due to atmospheric effects, the specific impulse varies with altitude, reaching a maximum in a vacuum. This is because the exhaust velocity isn't simply a function of the chamber pressure, but depends on the difference between the pressure inside the combustion chamber and the ambient pressure outside. It is therefore important to note whether a quoted specific impulse is a vacuum or sea-level value. Values are usually indicated with or near the units of specific impulse (e.g. 'sl', 'vac').
Specific impulse as a speed (effective exhaust velocity)
Because of the geocentric factor of g0 in the equation for specific impulse, many prefer to define the specific impulse of a rocket (in particular) in terms of thrust per unit mass flow of propellant (instead of per unit weight flow). This is an equally valid (and in some ways somewhat simpler) way of defining the effectiveness of a rocket propellant. For a rocket, the specific impulse defined in this way is simply the effective exhaust velocity relative to the rocket, ve. The two definitions of specific impulse are proportional to one another, and related to each other by:

$$v_e = g_0 \, I_{\text{sp}}$$

where:
- I_sp is the specific impulse in seconds,
- v_e is the specific impulse measured in m/s, which is the same as the effective exhaust velocity measured in m/s (or ft/s if g is in ft/s²),
- g_0 is the acceleration due to gravity at the Earth's surface, 9.81 m/s² (in Imperial units, 32.2 ft/s²).
This equation is also valid for air-breathing jet engines, but is rarely used in practice.
(Note that different symbols are sometimes used; for example, c is also sometimes seen for exhaust velocity. While the symbol I_sp might logically be used for specific impulse in units of N·s/kg, to avoid confusion it is desirable to reserve this for specific impulse measured in seconds.)
It is related to the thrust, or forward force on the rocket, by the equation:

$$F_{\text{thrust}} = \dot{m} \, v_e$$

where:
- ṁ is the propellant mass flow rate, which is the rate of decrease of the vehicle's mass.
A rocket must carry all its fuel with it, so the mass of the unburned fuel must be accelerated along with the rocket itself. Minimizing the mass of fuel required to achieve a given push is crucial to building effective rockets. The Tsiolkovsky rocket equation shows that for a rocket with a given empty mass and a given amount of fuel, the total change in velocity it can accomplish is proportional to the effective exhaust velocity.
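This proportionality can be illustrated with a short sketch of the Tsiolkovsky rocket equation, Δv = ve·ln(m0/m1). The masses and exhaust velocities below are hypothetical values chosen only to show that doubling ve doubles the attainable Δv for a fixed mass ratio.

```python
import math

def delta_v(ve_m_s, m_initial_kg, m_final_kg):
    """Ideal velocity change from the Tsiolkovsky rocket equation."""
    return ve_m_s * math.log(m_initial_kg / m_final_kg)

# Hypothetical vehicle: 100 t at ignition, 20 t at burnout (mass ratio 5).
print(delta_v(3000.0, 100_000.0, 20_000.0))  # ~4828 m/s
print(delta_v(6000.0, 100_000.0, 20_000.0))  # ~9657 m/s -- doubling ve doubles delta-v
```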
A spacecraft without propulsion follows an orbit determined by the gravitational field. Deviations from the corresponding velocity pattern (these are called Δv) are achieved by sending exhaust mass in the direction opposite to that of the desired velocity change.
Actual exhaust speed versus effective exhaust speed
Note that effective exhaust velocity and actual exhaust velocity can be significantly different, for example when a rocket is run within the atmosphere, atmospheric pressure on the outside of the engine causes a retarding force that reduces the specific impulse and the effective exhaust velocity goes down, whereas the actual exhaust velocity is largely unaffected. Also, sometimes rocket engines have a separate nozzle for the turbo-pump turbine gas, and then calculating the effective exhaust velocity requires averaging the two mass flows as well as accounting for any atmospheric pressure.
For air-breathing jet engines, particularly turbofans, the actual exhaust velocity and the effective exhaust velocity are different by orders of magnitude. This is because a good deal of additional momentum is obtained by using air as reaction mass. This allows for a better match between the airspeed and the exhaust speed which saves energy/propellant and enormously increases the effective exhaust velocity while reducing the actual exhaust velocity.
Energy efficiency
For rockets and rocket-like engines such as ion drives, a higher specific impulse implies lower energy efficiency: the power needed to run the engine is simply

$$P = \tfrac{1}{2} \, \dot{m} \, v_e^2$$
where ve is the actual jet velocity.
whereas from momentum considerations the thrust generated is:

$$F = \dot{m} \, v_e$$
Dividing the power by the thrust to obtain the specific power requirement, we get:

$$\frac{P}{F} = \frac{v_e}{2}$$
Hence the power needed is proportional to the exhaust velocity, with higher velocities needing higher power for the same thrust, causing less energy efficiency per unit thrust.
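A small numerical sketch of this relation (assuming the idealized formulas above, with all thrust coming from the expelled propellant) makes the trade-off concrete: an ion-thruster-like exhaust velocity needs ten times the jet power per newton of thrust of a chemical-rocket-like one.

```python
def jet_power(mdot_kg_s, ve_m_s):
    """Kinetic power carried by the exhaust jet, P = 1/2 * mdot * ve^2."""
    return 0.5 * mdot_kg_s * ve_m_s**2

def thrust(mdot_kg_s, ve_m_s):
    """Momentum thrust, F = mdot * ve."""
    return mdot_kg_s * ve_m_s

for ve in (3000.0, 30_000.0):  # chemical-rocket-like vs ion-thruster-like exhaust speeds
    mdot = 1.0                 # kg/s, arbitrary; the ratio P/F does not depend on it
    print(ve, jet_power(mdot, ve) / thrust(mdot, ve))  # prints ve/2: 1500 and 15000 W per newton
```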
However, the total energy for a mission depends on total propellant use, as well as how much energy is needed per unit of propellant. For low exhaust velocity with respect to the mission delta-v, enormous amounts of reaction mass is needed. In fact a very low exhaust velocity is not energy efficient at all for this reason; but it turns out that neither are very high exhaust velocities.
Theoretically, for a given delta-v, in space, among all fixed values for the exhaust speed the value v_e ≈ 0.6275 Δv is the most energy efficient for a specified (fixed) final mass; see energy in spacecraft propulsion.
However, a variable exhaust speed can be more energy efficient still. For example, if a rocket is accelerated from some positive initial speed using an exhaust speed equal to the speed of the rocket no energy is lost as kinetic energy of reaction mass, since it becomes stationary. (Theoretically, by making this initial speed low and using another method of obtaining this small speed, the energy efficiency approaches 100%, but requires a large initial mass.) In this case the rocket keeps the same momentum, so its speed is inversely proportional to its remaining mass. During such a flight the kinetic energy of the rocket is proportional to its speed and, correspondingly, inversely proportional to its remaining mass. The power needed per unit acceleration is constant throughout the flight; the reaction mass to be expelled per unit time to produce a given acceleration is proportional to the square of the rocket's remaining mass.
Air breathing
Air-breathing engines such as turbojets increase the momentum generated from their propellant by using it to power the acceleration of inert air rearwards. It turns out that the amount of energy needed to generate a particular amount of thrust is inversely proportional to the amount of air propelled rearwards; thus increasing the mass of air (as with a turbofan) improves both energy efficiency and specific impulse.
| Engine | Effective exhaust velocity (m/s) | Specific impulse (s) | Energy per kg of exhaust (MJ/kg) |
| Turbofan jet engine (actual exhaust velocity is ~300 m/s) | | | |
| Bipropellant liquid rocket | | | |
| Dual Stage Four Grid Electrostatic Ion Thruster | 210,000 | 21,400 | 22,500 |
- For a more complete list see: Spacecraft propulsion#Table of methods
An example of a specific impulse measured in time is 453 seconds, which is equivalent to an effective exhaust velocity of 4,440 m/s, for the Space Shuttle Main Engines when operating in a vacuum. An air-breathing jet engine typically has a much larger specific impulse than a rocket; for example a turbofan jet engine may have a specific impulse of 6,000 seconds or more at sea level whereas a rocket would be around 200–400 seconds.
An air-breathing engine is thus much more propellant efficient than a rocket engine, because the actual exhaust speed is much lower, the air provides an oxidizer, and air is used as reaction mass. Since the physical exhaust velocity is lower, the kinetic energy the exhaust carries away is lower and thus the jet engine uses far less energy to generate thrust (at subsonic speeds). While the actual exhaust velocity is lower for air-breathing engines, the effective exhaust velocity is very high for jet engines. This is because the effective exhaust velocity calculation essentially assumes that the propellant is providing all the thrust, and hence is not physically meaningful for air-breathing engines; nevertheless, it is useful for comparison with other types of engines.
The highest specific impulse for a chemical propellant ever test-fired in a rocket engine was 542 seconds (5,320 m/s) with a tripropellant of lithium, fluorine, and hydrogen. However, this combination is impractical; see rocket fuel.
Nuclear thermal rocket engines differ from conventional rocket engines in that thrust is created strictly through thermodynamic phenomena, with no chemical reaction. The nuclear rocket typically operates by passing hydrogen gas through a superheated nuclear core. Testing in the 1960s yielded specific impulses of about 850 seconds (8,340 m/s), about twice that of the Space Shuttle engines.
A variety of other non-rocket propulsion methods, such as ion thrusters, give much higher specific impulse but with much lower thrust; for example the Hall effect thruster on the SMART-1 satellite has a specific impulse of 1,640 s (16,100 m/s) but a maximum thrust of only 68 millinewtons. The Variable specific impulse magnetoplasma rocket (VASIMR) engine currently in development will theoretically yield 10,000−300,000 m/s but will require a large electricity source and a great deal of heavy machinery to confine even relatively diffuse plasmas, and so will be unusable for high-thrust applications such as launch from planetary surfaces.
Larger engines
Here are some example numbers for larger jet and rocket engines:
| Engine type | Scenario | SFC in lb/(lbf·h) | SFC in g/(kN·s) | Specific impulse (s) | Effective exhaust velocity (m/s) |
| NK-33 rocket engine | Vacuum | 10.9 | 309 | 331 | 3,240 |
| SSME rocket engine | Space Shuttle vacuum | 7.95 | 225 | 453 | 4,423 |
| J-58 turbojet | SR-71 at Mach 3.2 (Wet) | 1.9 | 53.8 | 1,900 | 18,587 |
| Rolls-Royce/Snecma Olympus 593 | Concorde Mach 2 cruise (Dry) | 1.195 | 33.8 | 3,012 | 29,553 |
| CF6-80C2B1F turbofan | Boeing 747-400 cruise | 0.605 | 17.1 | 5,950 | 58,400 |
| General Electric CF6 turbofan | Sea level | 0.307 | 8.696 | 11,700 | 115,000 |
Model rocketry
Specific impulse is also used to measure performance in model rocket motors. Estes Industries is a large, well-known American seller of model rocket components; following are some of Estes' claimed specific impulse values for several of their rocket motors. The specific impulse for these model rocket motors is much lower than for many other rocket motors because the manufacturer uses black powder propellant and emphasizes safety rather than maximum performance. The burn rate, and hence the chamber pressure and maximum thrust, of model rocket motors is also tightly controlled.
| Engine | Total impulse (N·s) | Fuel weight (N) | Specific impulse (s) |
See also
- Jet engine
- Tsiolkovsky rocket equation
- System-specific impulse
- Specific energy
- Thrust specific fuel consumption - fuel consumption per unit thrust
- Specific thrust - thrust per unit of air for a duct engine
- Heating value
- Energy density
- Delta-v (physics)
- Rocket propellant
- Liquid rocket propellants
- "What is specific impulse?". Qualitative Reasoning Group. Retrieved 22 December 2009.
- Benson, Tom (11 July 2008). "Specific impulse". NASA. Retrieved 22 December 2009.
- Rocket Propulsion Elements, 7th Edition by George P. Sutton, Oscar Biblarz
- Hutchinson, Lee (2013-04-14). "New F-1B rocket engine upgrades Apollo-era design with 1.8M lbs of thrust". ARS technica. Retrieved 2013-04-15. "The measure of a rocket's fuel efficiency is called its specific impulse (abbreviated as "ISP"—or more properly Isp). ... 'Mass specific impulse...describes the thrust-producing efficiency of a chemical reaction and it is most easily thought of as the amount of thrust force produced by each pound (mass) of fuel and oxidizer propellant burned in a unit of time. It is kind of like a measure of miles per gallon (mpg) for rockets.'"
- "Mission Overview". exploreMarsnow. Retrieved 23 December 2009.
- Aerospace Propulsion Systems By Thomas A. Ward
- Note that this limits the speed of the rocket to the maximum exhaust speed.
- Arbit, H. A.; Clapp, S. D.; Dickerson, R. A.; Nagai, C. K. (1968). Combustion characteristics of the fluorine-lithium/hydrogen tripropellant combination. American Institute of Aeronautics and Astronautics, Propulsion Joint Specialist Conference, 4th, Cleveland, Ohio, June 10-14, 1968.
- Astronautix NK33
- Astronautix SSME
- "Data on Large Turbofan Engines". Aircraft Aerodynamics and Design Group. Stanford University. Retrieved 22 December 2009.
- Estes 2011 Catalog www.acsupplyco.com/estes/estes_cat_2011.pdf
In geometry and physics, the centroid or geometric center of a two-dimensional region is, informally, the point at which a cardboard cut-out of the region could be perfectly balanced on the tip of a pencil (assuming uniform density and a uniform gravitational field). Formally, the centroid of a plane figure or two-dimensional shape is the arithmetic mean ("average") position of all the points in the shape. The definition extends to any object in n-dimensional space: its centroid is the mean position of all the points in all of the coordinate directions.
While in geometry the term barycenter is a synonym for "centroid", in physics "barycenter" may also mean the physical center of mass or the center of gravity, depending on the context. The center of mass (and center of gravity in a uniform gravitational field) is the arithmetic mean of all points weighted by the local density or specific weight. If a physical object has uniform density, then its center of mass is the same as the centroid of its shape.
The geometric centroid of a convex object always lies in the object. A non-convex object might have a centroid that is outside the figure itself. The centroid of a ring or a bowl, for example, lies in the object's central void.
If the centroid is defined, it is a fixed point of all isometries in its symmetry group. In particular, the geometric centroid of an object lies in the intersection of all its hyperplanes of symmetry. The centroid of many figures (regular polygon, regular polyhedron, cylinder, rectangle, rhombus, circle, sphere, ellipse, ellipsoid, superellipse, superellipsoid, etc.) can be determined by this principle alone.
For the same reason, the centroid of an object with translational symmetry is undefined (or lies outside the enclosing space), because a translation has no fixed point.
Locating the centroid
Plumb line method
The centroid of a uniform planar lamina, such as (a) below, may be determined, experimentally, by using a plumbline and a pin to find the center of mass of a thin body of uniform density having the same shape. The body is held by the pin inserted at a point near the body's perimeter, in such a way that it can freely rotate around the pin; and the plumb line is dropped from the pin (b). The position of the plumbline is traced on the body. The experiment is repeated with the pin inserted at a different point of the object. The intersection of the two lines is the centroid of the figure (c).
This method can be extended (in theory) to concave shapes where the centroid lies outside the shape, and to solids (of uniform density), but the positions of the plumb lines need to be recorded by means other than drawing.
Balancing method
For convex two-dimensional shapes, the centroid can be found by balancing the shape on a smaller shape, such as the top of a narrow cylinder. The centroid occurs somewhere within the range of contact between the two shapes. In principle, progressively narrower cylinders can be used to find the centroid to arbitrary accuracy. In practice air currents make this unfeasible. However, by marking the overlap range from multiple balances, one can achieve a considerable level of accuracy.
Of a finite set of points
The centroid of a finite set of k points x_1, x_2, ..., x_k in R^n is

$$C = \frac{x_1 + x_2 + \cdots + x_k}{k}$$
This point minimizes the sum of squared Euclidean distances between itself and each point in the set.
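A short numerical sketch of this definition (not from the original article): the centroid is the coordinate-wise mean, and moving away from it can only increase the sum of squared distances.

```python
import numpy as np

points = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [0.0, 2.0]])  # corners of a 4 x 2 rectangle
centroid = points.mean(axis=0)  # coordinate-wise mean
print(centroid)                 # [2. 1.]

def sum_sq_dist(c):
    """Sum of squared Euclidean distances from c to every point in the set."""
    return float(((points - c) ** 2).sum())

# Perturbing the centroid increases the sum of squared distances.
print(sum_sq_dist(centroid) < sum_sq_dist(centroid + np.array([0.1, 0.0])))  # True
```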
By geometric decomposition
The centroid of a plane figure X can be computed by dividing it into a finite number of simpler figures X_1, X_2, ..., X_n, computing the centroid C_i and area A_i of each part, and then computing

$$C = \frac{\sum_i C_i A_i}{\sum_i A_i}$$
Holes in the figure X, overlaps between the parts, or parts that extend outside the figure can all be handled using negative areas A_i. Namely, the measures A_i should be taken with positive and negative signs in such a way that the sum of the signs of A_i for all parts that enclose a given point p is 1 if p belongs to X, and 0 otherwise.
For example, the figure below (a) is easily divided into a square and a triangle, both with positive area; and a circular hole, with negative area (b).
The centroid of each part can be found in any list of centroids of simple shapes (c). Then the centroid of the figure is the weighted average of the three points. The horizontal position of the centroid, measured from the left edge of the figure, is the area-weighted average of the parts' horizontal centroid positions.
The vertical position of the centroid is found in the same way.
The same formula holds for any three-dimensional object, except that each A_i should be the volume of X_i, rather than its area. It also holds for any subset of R^d, for any dimension d, with the areas replaced by the d-dimensional measures of the parts.
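The decomposition rule can be written out in a few lines of Python. The figure used below (a 4×4 square with a circular hole of radius 1 centered at (3, 3)) is an invented example, not the figure referred to in the text.

```python
import math

# (signed area, (centroid x, centroid y)) for each part
parts = [
    (16.0, (2.0, 2.0)),               # 4 x 4 square with corners at (0, 0) and (4, 4)
    (-math.pi * 1.0**2, (3.0, 3.0)),  # circular hole of radius 1: negative area
]

A = sum(a for a, _ in parts)
cx = sum(a * c[0] for a, c in parts) / A
cy = sum(a * c[1] for a, c in parts) / A
print((cx, cy))  # roughly (1.76, 1.76): the hole pulls the centroid away from (3, 3)
```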
By integral formula
The centroid of a subset X of R^n can also be computed by the integral

$$C = \frac{\int x \, g(x) \, dx}{\int g(x) \, dx}$$
where the integrals are taken over the whole space R^n, and g is the characteristic function of the subset, which is 1 inside X and 0 outside it. Note that the denominator is simply the measure of the set X. (However, this formula cannot be applied if the set X has zero measure, or if either integral diverges.)
Another formula for the centroid is

$$C_k = \frac{\int z \, S_k(z) \, dz}{\int S_k(z) \, dz}$$
where Ck is the kth coordinate of C, and Sk(z) is the measure of the intersection of X with the hyperplane defined by the equation xk = z. Again, the denominator is simply the measure of X.
For a plane figure, in particular, the barycenter coordinates are

$$\bar{x} = \frac{\int x \, S_y(x) \, dx}{A}, \qquad \bar{y} = \frac{\int y \, S_x(y) \, dy}{A}$$
where A is the area of the figure X; Sy(x) is the length of the intersection of X with the vertical line at abscissa x; and Sx(y) is the analogous quantity for the swapped axes.
Bounded region
The centroid of a region bounded by the graphs of the continuous functions f and g such that f(x) ≥ g(x) on the interval [a, b], a ≤ x ≤ b, is given by

$$\bar{x} = \frac{1}{A}\int_a^b x\,[f(x) - g(x)]\,dx, \qquad \bar{y} = \frac{1}{A}\int_a^b \frac{f(x) + g(x)}{2}\,[f(x) - g(x)]\,dx$$
where A is the area of the region, given by A = ∫_a^b [f(x) − g(x)] dx.
Consider the semicircular region bounded by f(x) = √(r² − x²) and g(x) = 0 on the interval [−r, r]. Its area is A = πr²/2.
The centroid is located at (0, 4r/(3π)).
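The semicircle result can be checked numerically with the bounded-region formulas, using f(x) = √(r² − x²) and g(x) = 0. The sketch below uses simple trapezoidal integration, and r = 1 is an arbitrary choice.

```python
import numpy as np

r = 1.0
x = np.linspace(-r, r, 200_001)
f = np.sqrt(np.clip(r**2 - x**2, 0.0, None))  # upper semicircle y = sqrt(r^2 - x^2)
dx = x[1] - x[0]

def integrate(y):
    """Composite trapezoidal rule on the fixed grid x."""
    return float(np.sum(0.5 * (y[:-1] + y[1:])) * dx)

A = integrate(f)                    # ~ pi * r**2 / 2 ~ 1.5708
x_bar = integrate(x * f) / A        # ~ 0 by symmetry
y_bar = integrate(0.5 * f * f) / A  # ~ 4 * r / (3 * pi) ~ 0.4244

print(A, x_bar, y_bar)
```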
Of an L-shaped object
This is a method of determining the centroid of an L-shaped object.
- Divide the shape into two rectangles, as shown in fig 2. Find the centroids of these two rectangles by drawing the diagonals. Draw a line joining the centroids. The centroid of the shape must lie on this line AB.
- Divide the shape into two other rectangles, as shown in fig 3. Find the centroids of these two rectangles by drawing the diagonals. Draw a line joining the centroids. The centroid of the L-shape must lie on this line CD.
- As the centroid of the shape must lie along AB and also along CD, it is obvious that it is at the intersection of these two lines, at O. The point O might not lie inside the L-shaped object.
Of triangle and tetrahedron
The centroid of a triangle is the point of intersection of its medians (the lines joining each vertex with the midpoint of the opposite side). The centroid divides each of the medians in the ratio 2:1, which is to say it is located ⅓ of the perpendicular distance between each side and the opposing point (see figures at right). Its Cartesian coordinates are the means of the coordinates of the three vertices. That is, if the three vertices are A = (x_a, y_a), B = (x_b, y_b), and C = (x_c, y_c), then the centroid is

$$G = \left(\frac{x_a + x_b + x_c}{3},\ \frac{y_a + y_b + y_c}{3}\right)$$
The centroid is therefore at 1/3 : 1/3 : 1/3 in barycentric coordinates.
The centroid is also the physical center of mass if the triangle is made from a uniform sheet of material; or if all the mass is concentrated at the three vertices, and evenly divided among them. On the other hand, if the mass is distributed along the triangle's perimeter, with uniform linear density, then the center of mass lies at the Spieker center (the incentre of the medial triangle), which does not (in general) coincide with the geometric centroid of the full triangle.
The area of the triangle is 1.5 times the length of any side times the perpendicular distance from the side to the centroid.
Similar results hold for a tetrahedron: its centroid is the intersection of all line segments that connect each vertex to the centroid of the opposite face. These line segments are divided by the centroid in the ratio 3:1. The result generalizes to any n-dimensional simplex in the obvious way. If the set of vertices of a simplex is {v_0, v_1, ..., v_n}, then, considering the vertices as vectors, the centroid is

$$C = \frac{1}{n+1}\sum_{i=0}^{n} v_i$$
The geometric centroid coincides with the center of mass if the mass is uniformly distributed over the whole simplex, or concentrated at the vertices as n equal masses.
Centroid of polygon
The centroid of a non-self-intersecting closed polygon defined by n vertices (x0,y0), (x1,y1), ..., (xn−1,yn−1) is the point (Cx, Cy), where

$$C_x = \frac{1}{6A}\sum_{i=0}^{n-1}(x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i), \qquad C_y = \frac{1}{6A}\sum_{i=0}^{n-1}(y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$$
and where A is the polygon's signed area,

$$A = \frac{1}{2}\sum_{i=0}^{n-1}(x_i y_{i+1} - x_{i+1} y_i)$$
In these formulas, the vertices are assumed to be numbered in order of their occurrence along the polygon's perimeter, and the vertex ( xn, yn ) is assumed to be the same as ( x0, y0 ). Note that if the points are numbered in clockwise order the area A, computed as above, will have a negative sign; but the centroid coordinates will be correct even in this case.
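The polygon formulas translate directly into code. The following sketch is an illustration rather than a reference implementation; it returns the centroid of a polygon given its vertices listed in order around the perimeter, tested here on a 2×2 square.

```python
def polygon_centroid(vertices):
    """Return (cx, cy) for a non-self-intersecting polygon given as a list of (x, y) vertices."""
    n = len(vertices)
    a = 0.0
    cx = 0.0
    cy = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wrap around: vertex n is vertex 0
        cross = x0 * y1 - x1 * y0       # the shoelace term x_i*y_{i+1} - x_{i+1}*y_i
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                            # signed area
    return cx / (6.0 * a), cy / (6.0 * a)

print(polygon_centroid([(0, 0), (2, 0), (2, 2), (0, 2)]))  # (1.0, 1.0)
```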
Centroid of cone or pyramid
The centroid of a cone or pyramid is located on the line segment that connects the apex to the centroid of the base. For a solid cone or pyramid, the centroid is 1/4 the distance from the base to the apex. For a cone or pyramid that is just a shell (hollow) with no base, the centroid is 1/3 the distance from the base plane to the apex.
See also
- List of centroids
- Fréchet mean
- Pappus's centroid theorem
- K-means algorithm
- Triangle center
- Chebyshev center
- Larson, Roland E.; Hostetler, Robert P.; Edwards, Bruce H. (1998). Calculus of a Single Variable (Sixth ed.). Houghton Mifflin Company. pp. 458–460.
- Johnson, Roger A., Advanced Euclidean Geometry, Dover, 2007 (orig. 1929): p. 173, corollary to #272.
- Calculating the area and centroid of a polygon
- Encyclopedia of Triangle Centers by Clark Kimberling. The centroid is indexed as X(2).
- Characteristic Property of Centroid at cut-the-knot
- Barycentric Coordinates at cut-the-knot
- Interactive animations showing Centroid of a triangle and Centroid construction with compass and straightedge
- Experimentally finding the medians and centroid of a triangle at Dynamic Geometry Sketches, an interactive dynamic geometry sketch using the gravity simulator of Cinderella.
In physics, gravitational waves are ripples in the curvature of spacetime which propagate as a wave, travelling outward from the source. Predicted to exist by Albert Einstein in 1916 on the basis of his theory of general relativity, gravitational waves theoretically transport energy as gravitational radiation. Sources of detectable gravitational waves could possibly include binary star systems composed of white dwarfs, neutron stars, or black holes. The existence of gravitational waves is possibly a consequence of the Lorentz invariance of general relativity since it brings the concept of a limiting speed of propagation of the physical interactions with it. Gravitational waves cannot exist in the Newtonian theory of gravitation, in which physical interactions propagate at infinite speed.
Although gravitational radiation has not been directly detected, there is indirect evidence for its existence. For example, the 1993 Nobel Prize in Physics was awarded for measurements of the Hulse-Taylor binary system which suggests gravitational waves are more than mathematical anomalies. Various gravitational wave detectors exist. However, they have not yet succeeded in detecting such phenomena.
In Einstein's theory of general relativity, gravity is treated as a phenomenon resulting from the curvature of spacetime. This curvature is caused by the presence of mass. Generally, the more mass that is contained within a given volume of space, the greater the curvature of spacetime will be at the boundary of this volume. As objects with mass move around in spacetime, the curvature changes to reflect the changed locations of those objects. In certain circumstances, accelerating objects generate changes in this curvature, which propagate outwards at the speed of light in a wave-like manner. These propagating phenomena are known as gravitational waves.
As a gravitational wave passes a distant observer, that observer will find spacetime distorted by the effects of strain. Distances between free objects increase and decrease rhythmically as the wave passes, at a frequency corresponding to that of the wave. This occurs despite such free objects never being subjected to an unbalanced force. The magnitude of this effect decreases inversely with distance from the source. Inspiralling binary neutron stars are predicted to be a powerful source of gravitational waves as they coalesce, due to the very large acceleration of their masses as they orbit close to one another. However, due to the astronomical distances to these sources, the effects when measured on Earth are predicted to be very small, having strains of less than 1 part in 10²⁰. Scientists are attempting to demonstrate the existence of these waves with ever more sensitive detectors. The current most sensitive measurement is about one part in 5×10²² (as of 2012), provided by the LIGO and VIRGO observatories. The lack of detections so far places an upper limit on the rate of occurrence of such powerful sources. A space-based observatory, the Laser Interferometer Space Antenna, is currently under development by ESA.
Gravitational waves should penetrate regions of space that electromagnetic waves cannot. It is hypothesized that they will be able to provide observers on Earth with information about black holes and other exotic objects in the distant Universe. Such systems cannot be observed with more traditional means such as optical telescopes and radio telescopes. In particular, gravitational waves could be of interest to cosmologists as they offer a possible way of observing the very early universe. This is not possible with conventional astronomy, since before recombination the universe was opaque to electromagnetic radiation. Precise measurements of gravitational waves will also allow scientists to test the general theory of relativity more thoroughly.
In principle, gravitational waves could exist at any frequency. However, very low frequency waves would be impossible to detect and there is no credible source for detectable waves of very high frequency. Stephen W. Hawking and Werner Israel list different frequency bands for gravitational waves that could be plausibly detected, ranging from 10⁻⁷ Hz up to 10¹¹ Hz.
Effects of a passing gravitational wave
The effects of a passing gravitational wave can be visualized by imagining a perfectly flat region of spacetime with a group of motionless test particles lying in a plane (the surface of your screen). As a gravitational wave passes through the particles along a line perpendicular to the plane of the particles (i.e. following your line of vision into the screen), the particles will follow the distortion in spacetime, oscillating in a "cruciform" manner, as shown in the animations. The area enclosed by the test particles does not change and there is no motion along the direction of propagation.
The oscillations depicted here in the animation are exaggerated for the purpose of discussion; in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). However they enable us to visualize the kind of oscillations associated with gravitational waves as produced, for example, by a pair of masses in a circular orbit. In this case the amplitude of the gravitational wave is a constant, but its plane of polarization changes or rotates at twice the orbital rate, and so the time-varying gravitational wave size (or 'periodic spacetime strain') exhibits a variation as shown in the animation. If the orbit is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula.
Like other waves, there are a few useful characteristics describing a gravitational wave:
- Amplitude: Usually denoted h, this is the size of the wave, i.e., the fraction of stretching or squeezing in the animation. The amplitude shown here is roughly h ≈ 0.5 (or 50%). Gravitational waves passing through the Earth are many billions of times weaker than this, with h ≈ 10⁻²⁰ or less. Note that this is not the quantity which would be analogous to what is usually called the amplitude of an electromagnetic wave.
- Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes).
- Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze.
- Speed: This is the speed at which a point on the wave (for example, a point of maximum stretch or squeeze) travels. For gravitational waves with small amplitudes, this is equal to the speed of light, c.
The speed, wavelength, and frequency of a gravitational wave are related by the equation c = λ f, just like the equation for a light wave. For example, the animations shown here oscillate roughly once every two seconds. This would correspond to a frequency of 0.5 Hz, and a wavelength of about 600,000 km, or 47 times the diameter of the Earth.
In the example just discussed, we actually assume something special about the wave. We have assumed that the wave is linearly polarized, with a "plus" polarization, written h_+. Polarization of a gravitational wave is just like polarization of a light wave except that the polarizations of a gravitational wave are at 45 degrees, as opposed to 90 degrees. In particular, if we had a "cross"-polarized gravitational wave, h_×, the effect on the test particles would be basically the same, but rotated by 45 degrees, as shown in the second animation. Just as with light polarization, the polarizations of gravitational waves may also be expressed in terms of circularly polarized waves. Gravitational waves are polarized because of the nature of their sources. The polarization of a wave depends on the angle from the source, as we will see in the next section.
Sources of gravitational waves
In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like an expanding or contracting sphere) or cylindrically symmetric (like a spinning disk or sphere). A simple example of this principle is provided by the spinning dumbbell. If the dumbbell spins like wheels on an axle, it will not radiate gravitational waves; if it tumbles end over end like two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. If we imagine an extreme case in which the two weights of the dumbbell are massive stars like neutron stars or black holes, orbiting each other quickly, then significant amounts of gravitational radiation would be given off.
Some more detailed examples:
- Two objects orbiting each other in a quasi-Keplerian planar orbit (basically, as a planet would orbit the Sun) will radiate.
- A spinning non-axisymmetric planetoid — say with a large bump or dimple on the equator — will radiate.
- A supernova will radiate except in the unlikely event that the explosion is perfectly symmetric.
- An isolated non-spinning solid object moving at a constant speed will not radiate. This can be regarded as a consequence of the principle of conservation of linear momentum.
- A spinning disk will not radiate. This can be regarded as a consequence of the principle of conservation of angular momentum. However, it will show gravitomagnetic effects.
- A spherically pulsating spherical star (non-zero monopole moment or mass, but zero quadrupole moment) will not radiate, in agreement with Birkhoff's theorem.
More technically, the third time derivative of the quadrupole moment (or the l-th time derivative of the l-th multipole moment) of an isolated system's stress-energy tensor must be nonzero in order for it to emit gravitational radiation. This is analogous to the changing dipole moment of charge or current necessary for electromagnetic radiation.
Power radiated by orbiting bodies
Gravitational waves carry energy away from their sources and, in the case of orbiting bodies, this is associated with an inspiral or decrease in orbit. Imagine for example a simple system of two masses, such as the Earth-Sun system, moving slowly compared to the speed of light in circular orbits. Assume that these two masses orbit each other in a circular orbit in the x-y plane. To a good approximation, the masses follow simple Keplerian orbits. However, such an orbit represents a changing quadrupole moment. That is, the system will give off gravitational waves.
Suppose that the two masses are m1 and m2, and they are separated by a distance r. The power given off (radiated) by this system is:

$$P = -\frac{32}{5}\,\frac{G^4}{c^5}\,\frac{(m_1 m_2)^2\,(m_1 + m_2)}{r^5}$$
where G is the gravitational constant, c is the speed of light in vacuum, and where the negative sign means that power is being given off by the system, rather than received. For a system like the Sun and Earth, r is about 1.5×10¹¹ m and m1 and m2 are about 2×10³⁰ and 6×10²⁴ kg respectively. In this case, the power is about 200 watts. This is truly tiny compared to the total electromagnetic radiation given off by the Sun (roughly 3.86×10²⁶ watts).
In theory, the loss of energy through gravitational radiation could eventually drop the Earth into the Sun. However, the total energy of the Earth orbiting the Sun (kinetic energy plus gravitational potential energy) is about 1.14×1036 joules of which only 200 joules per second is lost through gravitational radiation, leading to a decay in the orbit by about 1×10−15 meters per day or roughly the diameter of a proton. At this rate, it would take the Earth approximately 1×1013 times more than the current age of the Universe to spiral onto the Sun. This estimate overlooks the decrease in r over time, but the majority of the time the bodies are far apart and only radiating slowly, so the difference is unimportant in this example. In only a few billion years, the Earth is predicted to be swallowed by the Sun in the red giant stage of its life.
A more dramatic example of radiated gravitational energy is represented by two solar mass neutron stars orbiting at a distance from each other of 1.89×108 m (only 0.63 light-seconds apart). [The Sun is 8 light minutes from the Earth.] Plugging their masses into the above equation shows that the gravitational radiation from them would be 1.38×1028 watts, which is about 100 times more than the Sun's electromagnetic radiation.
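For readers who want to reproduce these figures, the quadrupole power formula above can be evaluated directly. The sketch below recovers the roughly 200 W Earth-Sun value and the ~10²⁸ W neutron-star value quoted in the text; small differences come from the rounded constants used.

```python
G = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8    # m/s, speed of light

def gw_power(m1, m2, r):
    """Gravitational-wave luminosity (W) of two point masses in a circular orbit of separation r."""
    return (32.0 / 5.0) * G**4 / c**5 * (m1 * m2)**2 * (m1 + m2) / r**5

print(gw_power(2e30, 6e24, 1.5e11))  # ~2e2 W: the Earth-Sun figure of about 200 watts
print(gw_power(2e30, 2e30, 1.89e8))  # ~1.4e28 W: the neutron-star pair discussed above
```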
Orbital decay from gravitational radiation
Gravitational radiation robs the orbiting bodies of energy. It first circularizes their orbits and then gradually shrinks their radius. As the energy of the orbit is reduced, the distance between the bodies decreases, and they rotate more rapidly. The overall angular momentum is reduced however. This reduction corresponds to the angular momentum carried off by gravitational radiation. The rate of decrease of distance between the bodies versus time is given by:

$$\frac{dr}{dt} = -\frac{64}{5}\,\frac{G^3}{c^5}\,\frac{m_1 m_2\,(m_1 + m_2)}{r^3}$$
where the variables are the same as in the previous equation.
The orbit decays at a rate proportional to the inverse third power of the radius. When the radius has shrunk to half its initial value, it is shrinking eight times faster than before. By Kepler's Third Law, the new rotation rate at this point will be faster by a factor of 2^(3/2) ≈ 2.83, or nearly three times the previous orbital frequency. As the radius decreases, the power lost to gravitational radiation increases even more. As can be seen from the previous equation, power radiated varies as the inverse fifth power of the radius, or 32 times more in this case.
If we use the previous values for the Sun and the Earth, we find that the Earth's orbit shrinks by 1.1×10−20 meter per second. This is 3.5×10−13 m per year which is about 1/300 the diameter of a hydrogen atom. The effect of gravitational radiation on the size of the Earth's orbit is negligible over the age of the universe. This is not true for closer orbits.
A more practical example is the orbit of a Sun-like star around a heavy black hole. Our Milky Way has a 4 million solar-mass black hole at its center in Sagittarius A. Such supermassive black holes are being found in the center of almost all galaxies. For this example take a 2 million solar-mass black hole with a solar-mass star orbiting it at a radius of 1.89×1010 m (63 light-seconds). The mass of the black hole will be 4×1036 kg and its gravitational radius will be 6×109 m. The orbital period will be 1,000 seconds, or a little under 17 minutes. The solar-mass star will draw closer to the black hole by 7.4 meters per second or 7.4 km per orbit. A collision will not be long in coming.
Assume that a pair of solar-mass neutron stars are in circular orbits at a distance of 1.89×108 m (189,000 km). This is a little less than 1/7 the diameter of the Sun or 0.63 light-seconds. Their orbital period would be 1,000 seconds. Substituting the new mass and radius in the above formula gives a rate of orbit decrease of 3.7×10−6 m/s or 3.7 mm per orbit. This is 116 meters per year and is not negligible over cosmic time scales.
Suppose instead that these two neutron stars were orbiting at a distance of 1.89×106 m (1890 km). Their period would be 1 second and their orbital velocity would be about 1/50 of the speed of light. Their orbit would now shrink by 3.7 meters per orbit. A collision is imminent. A runaway loss of energy from the orbit results in an ever more rapid decrease in the distance between the stars. They will eventually merge to form a black hole and cease to radiate gravitational waves. This is referred to as the inspiral.
The above equation can not be applied directly for calculating the lifetime of the orbit, because the rate of change in radius depends on the radius itself, and is thus non-constant with time. The lifetime can be computed by integration of this equation (see next section).
Orbital lifetime limits from gravitational radiation
Orbital lifetime is one of the most important properties of gravitational radiation sources. It determines the average number of binary stars in the universe that are close enough to be detected. Short lifetime binaries are strong sources of gravitational radiation but are few in number. Long lifetime binaries are more plentiful but they are weak sources of gravitational waves. LIGO is most sensitive in the frequency band where two neutron stars are about to merge. This time frame is only a few seconds. It takes luck for the detector to see this blink in time out of a million year orbital lifetime. It is predicted that such a merger will only be seen once per decade or so.
The lifetime of an orbit is given by:

$$t = \frac{5}{256}\,\frac{c^5}{G^3}\,\frac{r^4}{m_1 m_2\,(m_1 + m_2)}$$
where r is the initial distance between the orbiting bodies. This equation can be derived by integrating the previous equation for the rate of radius decrease. It predicts the time for the radius of the orbit to shrink to zero. As the orbital speed becomes a significant fraction of the speed of light, this equation becomes inaccurate. It is useful for inspirals until the last few milliseconds before the merger of the objects.
Substituting the values for the mass of the Sun and Earth as well as the orbital radius gives a very large lifetime of 3.44×10³⁰ seconds or 1.09×10²³ years (which is approximately 10¹⁵ times larger than the age of the universe). The actual figure would be slightly less than that. The Earth will break apart from tidal forces if it orbits closer than a few radii from the Sun. This would form a ring around the Sun and instantly stop the emission of gravitational waves.
If we use a 2 million solar mass black hole with a solar mass star orbiting it at 1.89×1010 meters, we get a lifetime of 6.50×108 seconds or 20.7 years.
Assume that a pair of solar mass neutron stars with a diameter of 10 kilometers are in circular orbits at a distance of 1.89×108 m (189,000 km). Their lifetime is 1.30×1013 seconds or about 414,000 years. Their orbital period will be 1,000 seconds and it could be observed by LISA if they were not too far away. A far greater number of white dwarf binaries exist with orbital periods in this range. White dwarf binaries have masses on the order of our Sun and diameters on the order of our Earth. They cannot get much closer together than 10,000 km before they will merge and cease to radiate gravitational waves. This results in the creation of either a neutron star or a black hole. Until then, their gravitational radiation will be comparable to that of a neutron star binary. LISA is the only gravitational wave experiment which is likely to succeed in detecting such types of binaries.
If the orbit of a neutron star binary has decayed to 1.89×106m (1890 km), its remaining lifetime is 130,000 seconds or about 36 hours. The orbital frequency will vary from 1 revolution per second at the start and 918 revolutions per second when the orbit has shrunk to 20 km at merger. The gravitational radiation emitted will be at twice the orbital frequency. Just before merger, the inspiral can be observed by LIGO if the binary is close enough. LIGO has only a few minutes to observe this merger out of a total orbital lifetime that may have been billions of years. The chance of success with LIGO as initially constructed is quite low despite the large number of such mergers occurring in the universe, because the sensitivity of the instrument does not 'reach' out to enough systems to see events frequently. No mergers have been seen in the few years that initial LIGO has been in operation, and it is thought that a merger should be seen about once per several tens of years of observing time with initial LIGO. The upgraded Advanced LIGO detector, with a ten times greater sensitivity, 'reaches' out 10 times further -- encompassing a volume 1000 times greater, and seeing 1000 times as many candidate sources. Thus, the expectation is that detections will be made at the rate of tens per year.
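The lifetime figures quoted in the last few paragraphs follow directly from the formula above. A short sketch, using the same rounded masses and separations as the text, reproduces them to within the rounding of the physical constants.

```python
G = 6.674e-11  # m^3 kg^-1 s^-2
c = 2.998e8    # m/s
M_SUN = 2e30   # kg, the rounded solar mass used in the text

def inspiral_lifetime(m1, m2, r):
    """Time (s) for a circular orbit of initial separation r to shrink to zero."""
    return (5.0 / 256.0) * c**5 * r**4 / (G**3 * m1 * m2 * (m1 + m2))

print(inspiral_lifetime(M_SUN, 6e24, 1.5e11))          # ~3.4e30 s, Earth-Sun
print(inspiral_lifetime(2e6 * M_SUN, M_SUN, 1.89e10))  # ~6e8 s (about 20 years), star around the black hole
print(inspiral_lifetime(M_SUN, M_SUN, 1.89e8))         # ~1.3e13 s, neutron-star pair at 189,000 km
print(inspiral_lifetime(M_SUN, M_SUN, 1.89e6))         # ~1.3e5 s (about 36 hours), pair at 1890 km
```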
Wave amplitudes from the Earth–Sun system
We can also think in terms of the amplitude of the wave from a system in circular orbits. Let θ be the angle between the perpendicular to the plane of the orbit and the line of sight of the observer. Suppose that an observer is outside the system at a distance R from its center of mass. If R is much greater than a wavelength, the two polarizations of the wave will be

$$h_{+} = -\frac{1}{R}\,\frac{G^2}{c^4}\,\frac{2 m_1 m_2}{r}\,(1+\cos^2\theta)\,\cos\!\left[2\omega\left(t - \tfrac{R}{c}\right)\right], \qquad h_{\times} = -\frac{1}{R}\,\frac{G^2}{c^4}\,\frac{4 m_1 m_2}{r}\,\cos\theta\,\sin\!\left[2\omega\left(t - \tfrac{R}{c}\right)\right]$$
Here, we use the constant angular velocity of a circular orbit in Newtonian physics:

$$\omega = \sqrt{\frac{G\,(m_1 + m_2)}{r^3}}$$
For example, if the observer is in the x-y plane (the plane of the orbit), then θ = π/2 and cos θ = 0, so the h_× polarization is always zero. We also see that the frequency of the wave given off is twice the rotation frequency. If we put in numbers for the Earth-Sun system, we find that the amplitude falls off with distance roughly as

$$h \approx \frac{1.7\times10^{-10}\ \text{m}}{R}$$
In this case, the minimum distance to find waves is R ≈ 1 light-year, so typical amplitudes will be h ≈ 10⁻²⁶. That is, a ring of particles would stretch or squeeze by just one part in 10²⁶. This is well under the detectability limit of all conceivable detectors.
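Using the polarization amplitude reconstructed above (the standard quadrupole-approximation expression), a few lines of Python reproduce the order of magnitude quoted here for an observer one light-year away viewing the system face-on.

```python
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
LIGHT_YEAR = 9.46e15  # m

def strain_amplitude(m1, m2, r, R, theta=0.0):
    """Peak h_plus amplitude at distance R from a circular binary of separation r."""
    return (1.0 / R) * (G**2 / c**4) * (2.0 * m1 * m2 / r) * (1.0 + math.cos(theta)**2)

print(strain_amplitude(2e30, 6e24, 1.5e11, LIGHT_YEAR))  # ~2e-26 for the Earth-Sun system at 1 light-year
```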
Radiation from other sources
Although the waves from the Earth-Sun system are minuscule, astronomers can point to other sources for which the radiation should be substantial. One important example is the Hulse-Taylor binary, a pair of stars, one of which is a pulsar. The characteristics of their orbit can be deduced from the Doppler shifting of radio signals given off by the pulsar. Each of the stars has a mass about 1.4 times that of the Sun and the size of their orbit is about 1/75 of the Earth-Sun orbit. This means the distance between the two stars is just a few times larger than the diameter of our own Sun. The combination of greater masses and smaller separation means that the energy given off by the Hulse-Taylor binary will be far greater than the energy given off by the Earth-Sun system: roughly 10²² times as much.
The information about the orbit can be used to predict just how much energy (and angular momentum) should be given off in the form of gravitational waves. As the energy is carried off, the stars should draw closer to each other. This effect is called an inspiral, and it can be observed in the pulsar's signals. The measurements on the Hulse-Taylor system have been carried out over more than 30 years. It has been shown that the gravitational radiation predicted by general relativity allows these observations to be matched within 0.2 percent. In 1993, Russell Hulse and Joe Taylor were awarded the Nobel Prize in Physics for this work, which was the first indirect evidence for gravitational waves. Unfortunately, the orbital lifetime of this binary system before merger is about 1.84 billion years. This is a substantial fraction of the age of the universe.
Inspirals are very important sources of gravitational waves. Any time two compact objects (white dwarfs, neutron stars, or black holes) are in close orbits, they send out intense gravitational waves. As they spiral closer to each other, these waves become more intense. At some point they should become so intense that direct detection by their effect on objects on Earth or in space is possible. This direct detection is the goal of several large scale experiments.
The only difficulty is that most systems like the Hulse-Taylor binary are so far away. The amplitude of waves given off by the Hulse-Taylor binary as seen on Earth would be roughly h ≈ 10−26. There are some sources, however, that astrophysicists expect to find with much larger amplitudes of h ≈ 10−20. At least eight other binary pulsars have been discovered.
Astrophysics and gravitational waves
During the past century, astronomy has been revolutionized by the use of new methods for observing the universe. Astronomical observations were originally made using visible light. Galileo Galilei pioneered the use of telescopes to enhance these observations. However, visible light is only a small portion of the electromagnetic spectrum, and not all objects in the distant universe shine strongly in this particular band. More useful information may be found, for example, in radio wavelengths. Using radio telescopes, astronomers have found pulsars, quasars, and other extreme objects which push the limits of our understanding of physics. Observations in the microwave band have opened our eyes to the faint imprints of the Big Bang, a discovery Stephen Hawking called the "greatest discovery of the century, if not all time". Similar advances in observations using gamma rays, x-rays, ultraviolet light, and infrared light have also brought new insights to astronomy. As each of these regions of the spectrum has opened, new discoveries have been made that could not have been made otherwise. Astronomers hope that the same holds true of gravitational waves.
Gravitational waves have two important and unique properties. First, there is no need for any type of matter to be present nearby in order for the waves to be generated by a binary system of uncharged black holes, which would emit no electromagnetic radiation. Second, gravitational waves can pass through any intervening matter without being scattered significantly. Whereas light from distant stars may be blocked out by interstellar dust, for example, gravitational waves will pass through essentially unimpeded. These two features allow gravitational waves to carry information about astronomical phenomena never before observed by humans.
The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10⁻⁷ to 10⁵ Hz). An astrophysical source at the high-frequency end of the gravitational-wave spectrum (above 10⁵ Hz and probably 10¹⁰ Hz) generates relic gravitational waves that are theorized to be faint imprints of the Big Bang, like the cosmic microwave background (see gravitational wave background). At these high frequencies it is potentially possible that the sources may be "man made", that is, gravitational waves generated and detected in the laboratory.
Energy, momentum, and angular momentum carried by gravitational waves
Waves familiar from other areas of physics such as water waves, sound waves, and electromagnetic waves are able to carry energy, momentum, and angular momentum. By carrying these away from a source, waves are able to rob that source of its energy, linear or angular momentum. Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other—the angular momentum is radiated away by gravitational waves.
The waves can also carry off linear momentum, a possibility that has some interesting implications for astrophysics. After two supermassive black holes coalesce, emission of linear momentum can produce a "kick" with amplitude as large as 4000 km/s. This is fast enough to eject the coalesced black hole completely from its host galaxy. Even if the kick is too small to eject the black hole completely, it can remove it temporarily from the nucleus of the galaxy, after which it will oscillate about the center, eventually coming to rest. A kicked black hole can also carry a star cluster with it, forming a hyper-compact stellar system. Or it may carry gas, allowing the recoiling black hole to appear temporarily as a "naked quasar". The quasar SDSS J092712.65+294344.0 is believed to contain a recoiling supermassive black hole.
Detecting gravitational waves
Ground-based interferometers
Though the Hulse-Taylor observations were very important, they give only indirect evidence for gravitational waves. A more conclusive observation would be a direct measurement of the effect of a passing gravitational wave, which could also provide more information about the system which generated it. Any such direct detection is complicated by the extraordinarily small effect the waves would produce on a detector. The amplitude of a spherical wave will fall off as the inverse of the distance from the source (the term in the formulas for above). Thus, even waves from extreme systems like merging binary black holes die out to very small amplitude by the time they reach the Earth. Astrophysicists expect that some gravitational waves passing the Earth may be as large as h ≈ 10−20, but generally no bigger.
A simple device to detect the expected wave motion is called a Weber bar — a large, solid bar of metal isolated from outside vibrations. This type of instrument was the first type of gravitational wave detector. Strains in space due to an incident gravitational wave excite the bar's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. With this instrument, Joseph Weber claimed to have detected daily signals of gravitational waves. His results, however, were contested in 1974 by physicists Richard Garwin and David Douglass. Modern forms of the Weber bar are still operated, cryogenically cooled, with superconducting quantum interference devices to detect vibration. Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves.
MiniGRAIL is a spherical gravitational wave antenna using this principle. It is based at Leiden University, consisting of an exactingly machined 1150 kg sphere cryogenically cooled to 20 mK. The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers.
A more sensitive class of detector uses laser interferometry to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). Ground-based interferometers are now operational. Currently, the most sensitive is LIGO — the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana; the other two (in the same vacuum tubes) at the Hanford site in Richland, Washington. Each consists of two light storage arms which are 2 to 4 kilometers in length. These are at 90 degree angles to each other, with the light passing through 1m diameter vacuum tubes running the entire 4 kilometers. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which an interferometer is most sensitive.
Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10−18 meters. LIGO should be able to detect gravitational waves as small as h ≈ 5*10−20. Upgrades to LIGO and other detectors such as Virgo, GEO 600, and TAMA 300 should increase the sensitivity still further; the next generation of instruments (Advanced LIGO and Advanced Virgo) will be more than ten times more sensitive. Another highly sensitive interferometer (LCGT) is currently in the design phase. A key point is that a tenfold increase in sensitivity (radius of 'reach') increases the volume of space accessible to the instrument by one thousand times. This increases the rate at which detectable signals should be seen from one per tens of years of observation, to tens per year.
Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly; one analogy is to rainfall—the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals at low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these 'stationary' (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other 'non-stationary' noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All these must be taken into account and excluded by analysis before a detection may be considered a true gravitational wave event.
Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to shot noise, as well as artifacts caused by cosmic rays and solar wind.
There are currently two detectors focusing on detection at the higher end of the gravitational wave spectrum (10⁻⁷ to 10⁵ Hz): one at University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Two have been fabricated and they are currently expected to be sensitive to periodic spacetime strains of , given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of , with an expectation to reach a sensitivity of . The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters νg ~ 10¹⁰ Hz (10 GHz) and h ~ 10⁻³⁰–10⁻³¹.
Using pulsar timing arrays
Pulsars are rapidly rotating stars. A pulsar emits beams of radio waves which, like lighthouse beams, sweep through the sky as the pulsar rotates. The signal from a pulsar can be detected by radio telescopes as a series of regularly spaced pulses, essentially like the ticks of a clock. Gravitational waves affect the time it takes the pulses to travel from the pulsar to a telescope on Earth. A pulsar timing array uses millisecond pulsars to seek out perturbations due to gravitational waves in measurements of pulse arrival times at a telescope, in other words, to look for deviations in the clock ticks. In particular, pulsar timing arrays can search for a distinct pattern of correlation and anti-correlation between the signals over an array of different pulsars (resulting in the name “pulsar timing array"). Although pulsar pulses travel through space for hundreds or thousands of years to reach us, pulsar timing arrays are sensitive to perturbations in their travel time of much less than a millionth of a second.
Globally there are three active pulsar timing array projects. The North American Nanohertz Gravitational Wave Observatory uses data collected by the Arecibo Radio Telescope and Green Bank Telescope. The Parkes Pulsar Timing Array at the Parkes radio-telescope has been collecting data since March 2005. The European Pulsar Timing Array uses data from the four largest telescopes in Europe: the Lovell Telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg Telescope and the Nancay Radio Telescope. (Upon completion the Sardinia Radio Telescope will be added to the EPTA also.) These three projects have begun collaborating under the title of the International Pulsar Timing Array project.
In some sense, the easiest signals to detect should be constant sources. Supernovae and neutron star or black hole mergers should have larger amplitudes and be more interesting, but the waves generated will be more complicated. The waves given off by a spinning, aspherical neutron star would be "monochromatic"—like a pure tone in acoustics. It would not change very much in amplitude or frequency.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
Einstein's equations form the fundamental law of general relativity. The curvature of spacetime can be expressed mathematically using the metric tensor — denoted gμν. The metric holds information regarding how distances are measured in the space under consideration. Because the propagation of gravitational waves through space and time changes distances, we will need to use it to find the solution to the wave equation.
With some simple assumptions, Einstein's equations can be rewritten to show explicitly that they are wave equations. To begin with, we adopt some coordinate system, like (t, r, θ, φ). We define the "flat-space metric" ημν to be the quantity which — in this coordinate system — has the components we would expect for the flat space metric. For example, in these spherical coordinates, we have ημν = diag(−1, 1, r², r² sin²θ).
This flat-space metric has no physical significance; it is a purely mathematical device necessary for the analysis. Tensor indices are raised and lowered using this "flat-space metric".
This is the crucial field, which will represent the radiation. It is possible (at least in an asymptotically flat spacetime) to choose the coordinates in such a way that this quantity satisfies the "de Donder" gauge conditions (conditions on the coordinates): ∂μ h̄^μν = 0.
In this gauge, Einstein's equations take the form □ h̄^μν = −16π τ^μν (in units with G = c = 1), where □ represents the flat-space d'Alembertian operator, and τ^μν represents the stress-energy tensor plus quadratic terms involving h̄^μν. This is just a wave equation for the field with a source, despite the fact that the source involves terms quadratic in the field itself. That is, it can be shown that solutions to this equation are waves traveling with velocity 1 in these coordinates.
Linear approximation
The equations above are valid everywhere — near a black hole, for instance. However, because of the complicated source term, the solution is generally too difficult to find analytically. We can often assume that space is nearly flat, so the metric is nearly equal to the flat-space tensor ημν. In this case, we can neglect terms quadratic in h̄^μν, which means that the source τ^μν reduces to the usual stress-energy tensor T^μν. That is, Einstein's equations become □ h̄^μν = −16π T^μν.
If we are interested in the field far from a source, however, we can treat the source as a point source; everywhere else, the stress-energy tensor would be zero, so □ h̄^μν = 0.
Now, this is the usual homogeneous wave equation — one for each component of h̄^μν. Solutions to this equation are well known. For a wave moving away from a point source, the radiated part (meaning the part that dies off as 1/r far from the source) can always be written in the form f(t − r)/r, where f is just some function. It can be shown that — to a linear approximation — it is always possible to make the field traceless. Now, if we further assume that the source is positioned at the origin, the general solution to the wave equation in spherical coordinates is
where we now see the origin of the two polarizations.
Relation to the source
If we know the details of a source — for instance, the parameters of the orbit of a binary — we can relate the source's motion to the gravitational radiation observed far away. With the relation
Though it is possible to expand the Green's function in tensor spherical harmonics, it is easier to simply use the form
where the positive and negative signs correspond to ingoing and outgoing solutions, respectively. Generally, we are interested in the outgoing solutions, so
If the source is confined to a small region very far away, to an excellent approximation we have:
Now, because we will eventually only be interested in the spatial components of this equation (time components can be set to zero with a coordinate transformation), and we are integrating this quantity — presumably over a region of which there is no boundary — we can put this in a different form. Ignoring divergences with the help of Stokes' theorem and an empty boundary, we can see that
Inserting this into the above equation, we arrive at
Finally, because we have chosen to work in coordinates for which , we know that . With a few simple manipulations, we can use this to prove that
With this relation, the expression for the radiated field is
In the linear case, T⁰⁰ = ρ, the density of mass-energy.
To a very good approximation, the density of a simple binary can be described by a pair of delta-functions, which eliminates the integral. Explicitly, if the masses of the two objects are m1 and m2, and the positions are x1(t) and x2(t), then ρ(x, t) = m1 δ³(x − x1(t)) + m2 δ³(x − x2(t)).
We can use this expression to do the integral above:
Using mass-centered coordinates, and assuming a circular binary, this is
where . Plugging in the known values of , we obtain the expressions given above for the radiation from a simple binary.
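As a rough numerical illustration of the kind of quantity such expressions yield, the sketch below evaluates the textbook quadrupole-formula luminosity for a circular binary, P = (32/5)·G⁴·m1²·m2²·(m1+m2)/(c⁵·r⁵). The masses and separation are made-up example values, not parameters taken from this article.

```c
#include <stdio.h>
#include <math.h>

/* Gravitational-wave luminosity of a circular binary from the standard
 * quadrupole formula: P = (32/5) G^4 m1^2 m2^2 (m1+m2) / (c^5 r^5).
 * The masses and separation are illustrative values only.            */
int main(void)
{
    const double G    = 6.674e-11;   /* m^3 kg^-1 s^-2 */
    const double c    = 2.998e8;     /* m/s            */
    const double Msun = 1.989e30;    /* kg             */

    double m1 = 1.4 * Msun;          /* assumed neutron-star masses       */
    double m2 = 1.4 * Msun;
    double r  = 1.0e9;               /* assumed orbital separation, meters */

    double P = (32.0 / 5.0) * pow(G, 4) * m1 * m1 * m2 * m2 * (m1 + m2)
               / (pow(c, 5) * pow(r, 5));

    printf("radiated power P = %g W\n", P);
    return 0;
}
```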
See also
- General relativity
- Linearised Einstein field equations
- Graviton astronomy
- LIGO, VIRGO, GEO 600, and TAMA 300 — Gravitational wave detectors
- LISA the proposed Laser Interferometer Space Antenna
- DECIGO "Deci-hertz Interferometer Gravitational wave Observatory", the planned laser interferometric detector in space
- Big Bang Observer (BBO), proposed successor to LISA
- Sticky bead argument, for a physical way to see that gravitational radiation should carry energy
- pp-wave spacetime, for an important class of exact solutions modelling gravitational radiation
- Hawking radiation, for gravitationally induced electromagnetic radiation from black holes
- Spin-flip, a consequence of gravitational wave emission from binary supermassive black holes
- Gravitational field
- Orbital resonance
- Tidal force
- HM Cancri
- Finley, Dave. "Einstein's gravity theory passes toughest test yet: Bizarre binary star system pushes study of relativity to new limits". Phys.Org. http://phys.org/news/2013-04-einstein-gravity-theory-toughest-bizarre.html#jCp
- http://www.dpf99.library.ucla.edu/session14/barish1412.pdf The Detection of Gravitational Waves using LIGO, B. Barish
- LIGO Scientific Collaboration; Virgo Collaboration (2012). "Search for Gravitational Waves from Low Mass Compact Binary Coalescence in LIGO's Sixth Science Run and Virgo's Science Runs 2 and 3". Physical Review D 85: 082002. arXiv:1111.7314.
- Krauss, LM; Dodelson, S; Meyer, S (2010). "Primordial Gravitational Waves and Cosmology". Science 328 (5981): 989–992. arXiv:1004.2504. Bibcode:2010Sci...328..989K. doi:10.1126/science.1179541. PMID 20489015.
- Hawking, S. W. and Israel, W., General Relativity: An Einstein Centenary Survey, Cambridge University Press, Cambridge, 1979, 98.
- Landau, L. D. and Lifshitz, E. M., The Classical Theory of Fields. Fourth Revised English Edition, Pergamon Press., 1975, 356–357.
- Einstein, A., "Über Gravitationswellen." Sitzungsberichte, Preussische Akademie der Wissenschaften, 154, (1918).
- Gravitational Radiation
- Relativistic Binary Pulsar B1913+16: Thirty Years of Observations and Analysis
- Crashing Black Holes
- Binary and Millisecond Pulsars
- L. P. Grishchuk (1976), "Primordial Gravitons and the Possibility of Their Observation," Sov. Phys. JETP Lett. 23, p. 293.
- Braginsky, V. B., Rudenko and Valentin, N. Section 7: "Generation of gravitational waves in the laboratory," Physics Report (Review section of Physics Letters), 46, No. 5. 165–200, (1978).
- Li, Fangyu, Baker, R. M L, Jr., and Woods, R. C., "Piezoelectric-Crystal-Resonator High-Frequency Gravitational Wave Generation and Synchro-Resonance Detection," in the proceedings of Space Technology and Applications International Forum (STAIF-2006), edited by M.S. El-Genk, American Institute of Physics Conference Proceedings, Melville NY 813: 2006.
- Merritt, D. et al. (May 2004). "Consequences of Gravitational Wave Recoil". The Astrophysical Journal Letters 607 (1): L9–L12. arXiv:astro-ph/0402057. Bibcode:2004ApJ...607L...9M. doi:10.1086/421551
- Gualandris, A.; Merritt, D. et al. et al. (May 2008). "Ejection of Supermassive Black Holes from Galaxy Cores". The Astrophysical Journal 678 (2): 780–797. arXiv:0708.0771. Bibcode:2008ApJ...678..780G. doi:10.1086/586877
- Merritt, D.; Schnittman, J. D.; Komossa, S. (2009). "Hypercompact Stellar Systems Around Recoiling Supermassive Black Holes". The Astrophysical Journal 699 (2): 1690–1710. arXiv:0809.5046. Bibcode:2009ApJ...699.1690M. doi:10.1088/0004-637X/699/2/1690
- Komossa, S.; Zhou, H.; Lu, H. (May 2008). "A Recoiling Supermassive Black Hole in the Quasar SDSS J092712.65+294344.0?". The Astrophysical Journal 678 (2): L81–L84. arXiv:0804.4585. Bibcode:2008ApJ...678L..81K. doi:10.1086/588656
- For a review of early experiments using Weber bars, see Levine, J. (April 2004). "Early Gravity-Wave Detection Experiments, 1960–1975". Physics in Perspective (Birkhäuser Basel) 6 (1): 42–75. Bibcode:2004PhP.....6...42L. doi:10.1007/s00016-003-0179-6.
- Gravitational Radiation Antenna In Leiden
- de Waard, Arlette; Luciano Gottardi, and Giorgio Frossati (Italy). "Spherical Gravitational Wave Detectors: cooling and quality factor of a small CuAl6% sphere". Marcel Grossman meeting on General Relativity (PDF). Rome
- The idea of using laser interferometry for gravitational wave detection was first mentioned by Gerstenstein and Pustovoit 1963 Sov. Phys.–JETP 16 433. Weber mentioned it in an unpublished laboratory notebook. Rainer Weiss first described in detail a practical solution with an analysis of realistic limitations to the technique in R. Weiss (1972). "Electromagetically Coupled Broadband Gravitational Antenna". Quarterly Progress Report, Research Laboratory of Electronics, MIT 105: 54.
- Thorne, Kip (April 1980). "Multipole expansions of gravitational radiation". Reviews of Modern Physics 52 (2): 299. Bibcode:1980RvMP...52..299T. doi:10.1103/RevModPhys.52.299.
- C. W. Misner, K. S. Thorne, and J. A. Wheeler (1973). Gravitation. W. H. Freeman and Co.
- Chakrabarty, Indrajit, "Gravitational Waves: An Introduction". arXiv:physics/9908041 v1, Aug 21, 1999.
- Landau, L. D. and Lifshitz, E. M., The Classical Theory of Fields (Pergamon Press),(1987).
- Will, Clifford M., The Confrontation between General Relativity and Experiment. Living Rev. Relativity 9 (2006) 3.
- Peter Saulson, "Fundamentals of Interferometric Gravitational Wave Detectors", World Scientific, 1994.
- J. Bicak, W.N. Rudienko, "Gravitacionnyje wolny w OTO i probliema ich obnarużenija", Izdatielstwo Moskovskovo Universitieta, 1987.
- A. Kułak, "Electromagnetic Detectors of Gravitational Radiation", PhD Thesis, Cracow 1980 (In Polish).
- P. Tatrocki, "On intuitive description of graviton detector", www.philica.com .
- P. Tatrocki, "Can the LIGO, VIRGO, GEO600, AIGO, TAMA, LISA detectors really detect?", www.philica.com .
- Berry, Michael, Principles of cosmology and gravitation (Adam Hilger, Philadelphia, 1989). ISBN 0-85274-037-9
- Collins, Harry, Gravity's Shadow: the search for gravitational waves, University of Chicago Press, 2004.
- P. J. E. Peebles, Principles of Physical Cosmology (Princeton University Press, Princeton, 1993). ISBN 0-691-01933-9.
- Wheeler, John Archibald and Ciufolini, Ignazio, Gravitation and Inertia (Princeton University Press, Princeton, 1995). ISBN 0-691-03323-4.
- Woolf, Harry, ed., Some Strangeness in the Proportion (Addison–Wesley, Reading, Massachusetts, 1980). ISBN 0-201-09924-1.
Media related to Gravitational waves at Wikimedia Commons
- The LISA Brownbag – Selection of the most significant e-prints related to LISA science
- Astroparticle.org. To know everything about astroparticle physics, including gravitational waves
- Caltech's Physics 237-2002 Gravitational Waves by Kip Thorne Video plus notes: Graduate level but does not assume knowledge of General Relativity, Tensor Analysis, or Differential Geometry; Part 1: Theory (10 lectures), Part 2: Detection (9 lectures)
- www.astronomycast.com January 14, 2008 Episode 71: Gravitational Waves
- Laser Interferometer Gravitational Wave Observatory. LIGO Laboratory, operated by the California Institute of Technology and the Massachusetts Institute of Technology
- The LIGO Scientific Collaboration
- Einstein's Messengers – The LIGO Movie by NSF
- Home page for Einstein@Home project, a distributed computing project processing raw data from LIGO Laboratory, searching for gravity waves
- The National Center for Supercomputing Applications – a numerical relativity group
- Caltech Relativity Tutorial – A basic introduction to gravitational waves, and astrophysical systems giving off gravitational waves
- Resource Letter GrW-1: Gravitational waves – a list of books, journals and web resources compiled by Joan Centrella for research into gravitational waves
- Mathematical and Physical Perspectives on Gravitational Radiation – written by B F Schutz of the Max Planck Institute explaining the significance and background of some key concepts in gravitational radiation
- Binary BH Merger – estimating the radiated power and merger time of a BH binary using dimensional analysis | http://en.wikipedia.org/wiki/Gravitational_waves | 13 |
83 | Gases respond more dramatically to temperature and pressure than do the other three basic types of matter (liquids, solids and plasma). For gases, temperature and pressure are closely related to volume, and this allows us to predict their behavior under certain conditions. These predictions can explain mundane occurrences, such as the fact that an open can of soda will soon lose its fizz, but they also apply to more dramatic, life-and-death situations.
Ordinary air pressure at sea level is equal to 14.7 pounds per square inch, a quantity referred to as an atmosphere (atm). Because a pound is a unit of force and a kilogram a unit of mass, the metric equivalent is more complex in derivation. A newton (N), or 0.2248 pounds, is the metric unit of force, and a pascal (Pa)—1 newton per square meter—the unit of pressure. Hence, an atmosphere, expressed in metric terms, is 1.013 × 10⁵ Pa.
Regardless of the units you use, however, gases respond to changes in pressure and temperature in a remarkably different way than do solids or liquids. For a small water sample, say, 0.2642 gal (1 l), an increase in pressure from 1 to 2 atm will decrease the volume of the water by less than 0.01%. A temperature increase from 32° to 212°F (0 to 100°C) will increase its volume by only 2%. The response of a solid to these changes is even less dramatic; however, the reaction of air (a combination of oxygen, nitrogen, and other gases) to changes in pressure and temperature is radically different.
For air, an equivalent temperature increase would result in a volume increase of 37%, and an equivalent pressure increase will decrease the volume by a whopping 50%. Air and other gases also have a boiling point below room temperature, whereas the boiling point for water is higher than room temperature and that of solids is much higher. The reason for this striking difference in response can be explained by comparing all three forms of matter in terms of their overall structure, and in terms of their molecular behavior. (Plasma, a gas-like state found, for instance, in stars and comets' tails, does not exist on Earth, and therefore it will not be included in the comparisons that follow.)
Solids possess a definite volume and a definite shape, and are relatively noncompressible: for instance, if you apply extreme pressure to a steel plate, it will bend, but not much. Liquids have a definite volume, but no definite shape, and tend to be noncompressible. Gases, on the other hand, possess no definite volume or shape, and are compressible.
At the molecular level, particles of solids tend to be definite in their arrangement and close in proximity—indeed, part of what makes a solid "solid," in the everyday meaning of that term, is the fact that its constituent parts are basically immovable. Liquid molecules, too, are close in proximity, though random in arrangement. Gas molecules, too, are random in arrangement, but tend to be more widely spaced than liquid molecules. Solid particles are slow moving, and have a strong attraction to one another, whereas gas particles are fast-moving, and have little or no attraction. (Liquids are moderate in both regards.)
Given these interesting characteristics of gases, it follows that a unique set of parameters—collectively known as the "gas laws"—are needed to describe and predict their behavior. Most of the gas laws were derived during the eighteenth and nineteenth centuries by scientists whose work is commemorated by the association of their names with the laws they discovered. These men include the English chemists Robert Boyle (1627-1691), John Dalton (1766-1844), and William Henry (1774-1836); the French physicists and chemists J. A. C. Charles (1746-1823) and Joseph Gay-Lussac (1778-1850), and the Italian physicist Amedeo Avogadro (1776-1856).
Boyle's law holds that in isothermal conditions (that is, a situation in which temperature is kept constant), an inverse relationship exists between the volume and pressure of a gas. (An inverse relationship is a situation involving two variables, in which one of the two increases in direct proportion to the decrease in the other.) In this case, the greater the pressure, the less the volume and vice versa. Therefore the product of the volume multiplied by the pressure remains constant in all circumstances.
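A minimal numerical sketch of Boyle's law (the starting values are arbitrary examples):

```c
#include <stdio.h>

/* Boyle's law: at constant temperature, p1 * V1 = p2 * V2.
 * Solve for the new pressure p2 after compressing the gas. */
int main(void)
{
    double p1 = 1.0;    /* initial pressure, atm (example value) */
    double V1 = 10.0;   /* initial volume, liters (example value) */
    double V2 = 5.0;    /* new, smaller volume, liters            */

    double p2 = p1 * V1 / V2;   /* pressure rises as volume shrinks */

    printf("p1*V1 = %.2f atm*L, so p2 = %.2f atm\n", p1 * V1, p2);
    return 0;
}
```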
Charles's law also yields a constant, but in this case the temperature and volume are allowed to vary under isobarometric conditions—that is, a situation in which the pressure remains the same. As gas heats up, its volume increases, and when it cools down, its volume reduces accordingly. Hence, Charles established that the ratio of temperature to volume is constant.
By now a pattern should be emerging: both of the aforementioned laws treat one parameter (temperature in Boyle's, pressure in Charles's) as unvarying, while two other factors are treated as variables. Both in turn yield relationships between the two variables: in Boyle's law, pressure and volume are inversely related, whereas in Charles's law, temperature and volume are directly related.
In Gay-Lussac's law, a third parameter, volume, is treated as a constant, and the result is a constant ratio between the variables of pressure and temperature. According to Gay-Lussac's law, the pressure of a gas is directly related to its absolute temperature.
Absolute temperature refers to the Kelvin scale, established by William Thomson, Lord Kelvin (1824-1907). Drawing on Charles's discovery that gas at 0°C (32°F) regularly contracted by about 1/273 of its volume for every Celsius degree drop in temperature, Thomson derived the value of absolute zero (−273.15°C or −459.67°F). Using the Kelvin scale of absolute temperature, Gay-Lussac found that at lower temperatures, the pressure of a gas is lower, while at higher temperatures its pressure is higher. Thus, the ratio of pressure to temperature is a constant.
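A small sketch of the pressure–temperature relationship, using the Celsius-to-Kelvin conversion described above; the initial pressure and the two temperatures are example values.

```c
#include <stdio.h>

/* Gay-Lussac's law: at constant volume, p / T = constant,
 * with T measured in kelvins (T_K = T_C + 273.15).        */
static double celsius_to_kelvin(double t_c)
{
    return t_c + 273.15;
}

int main(void)
{
    double p1 = 1.0;                       /* atm, example value */
    double T1 = celsius_to_kelvin(20.0);   /* a mild day         */
    double T2 = celsius_to_kelvin(40.0);   /* a hot day          */

    double p2 = p1 * T2 / T1;              /* pressure scales with absolute T */

    printf("pressure rises from %.3f atm to %.3f atm\n", p1, p2);
    return 0;
}
```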
Gay-Lussac also discovered that the ratio in which gases combine to form compounds can be expressed in whole numbers: for instance, water is composed of one part oxygen and two parts hydrogen. In the language of modern science, this would be expressed as a relationship between molecules and atoms: one molecule of water contains one oxygen atom and two hydrogen atoms.
In the early nineteenth century, however, scientists had yet to recognize a meaningful distinction between atoms and molecules. Avogadro was the first to achieve an understanding of the difference. Intrigued by the whole-number relationship discovered by Gay-Lussac, Avogadro reasoned that one liter of any gas must contain the same number of particles as a liter of another gas. He further maintained that gas consists of particles—which he called molecules—that in turn consist of one or more smaller particles.
In order to discuss the behavior of molecules, it was necessary to establish a large quantity as a basic unit, since molecules themselves are very small. For this purpose, Avogadro established the mole, a unit equal to 6.022137 × 10²³ (more than 600 billion trillion) molecules. The term "mole" can be used in the same way we use the word "dozen." Just as "a dozen" can refer to twelve cakes or twelve chickens, so "mole" always describes the same number of molecules.
Just as one liter of water, or one liter of mercury, has a certain mass, a mole of any given substance has its own particular mass, expressed in grams. The mass of one mole of iron, for instance, will always be greater than that of one mole of oxygen. The ratio between them is exactly the same as the ratio of the mass of one iron atom to one oxygen atom. Thus the mole makes it possible to compare the mass of one element or one compound to that of another.
Avogadro's law describes the connection between gas volume and number of moles. According to Avogadro's law, if the volume of gas is increased under isothermal and isobarometric conditions, the number of moles also increases. The ratio between volume and number of moles is therefore a constant.
Once again, it is easy to see how Avogadro's law can be related to the laws discussed earlier, since they each involve two or more of the four parameters: temperature, pressure, volume, and quantity of molecules (that is, number of moles). In fact, all the laws so far described are brought together in what is known as the ideal gas law, sometimes called the combined gas law.
The ideal gas law can be stated as a formula, pV = nRT, where p stands for pressure, V for volume, n for number of moles, and T for temperature. R is known as the universal gas constant, a figure equal to 0.0821 atm · liter/mole · K. (Like most terms in physics, this one is best expressed in metric rather than English units.)
Given the equation pV = nRT and the fact that R is a constant, it is possible to find the value of any one variable—pressure, volume, number of moles, or temperature—as long as one knows the value of the other three. The ideal gas law also makes it possible to discern certain relations: thus if a gas is in a relatively cool state, the product of its pressure and volume is proportionately low; and if heated, its pressure and volume product increases correspondingly. Thus, where p1V1 is the product of its initial pressure and its initial volume, T1 its initial temperature, and p2, V2, and T2 the corresponding final values, p1V1/T1 = p2V2/T2.
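A short sketch of using the ideal gas law to solve for one unknown, with the value of R quoted above and example inputs:

```c
#include <stdio.h>

/* Ideal gas law: pV = nRT.  Given n, T and V, solve for p.
 * R is the universal gas constant in atm*L/(mol*K).        */
int main(void)
{
    const double R = 0.0821;   /* atm * L / (mol * K) */

    double n = 1.0;            /* moles of gas (example value)       */
    double T = 273.15;         /* temperature in kelvins (0 deg C)   */
    double V = 22.4;           /* volume in liters                   */

    double p = n * R * T / V;  /* rearranged ideal gas law */

    printf("p = %.3f atm\n", p);   /* close to 1 atm, as expected at STP */
    return 0;
}
```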
Five postulates can be applied to gases. These more or less restate the terms of the earlier discussion, in which gases were compared to solids and liquids; however, now those comparisons can be seen in light of the gas laws.
First, the size of gas molecules is minuscule in comparison to the distance between them, making gas highly compressible. In other words, there is a relatively high proportion of empty space between gas molecules.
Second, there is virtually no force attracting gas molecules to one another.
Third, though gas molecules move randomly, frequently colliding with one another, their net effect is to create uniform pressure.
Fourth, the elastic nature of the collisions results in no net loss of kinetic energy, the energy that an object possesses by virtue of its motion. If a stone is dropped from a height, it rapidly builds kinetic energy, but upon hitting a nonelastic surface such as pavement, most of that kinetic energy is transferred to the pavement. In the case of two gas molecules colliding, however, they simply bounce off one another, only to collide with other molecules and so on, with no kinetic energy lost.
Fifth, the kinetic energy of all gas molecules is directly proportional to the absolute temperature of the gas.
Two gas laws describe partial pressure. Dalton's law of partial pressure states that the total pressure of a gas is equal to the sum of its partial pressures—that is, the pressure exerted by each component of the gas mixture. As noted earlier, air is composed mostly of nitrogen and oxygen. Along with these are small amounts of carbon dioxide and gases collectively known as the rare or noble gases: argon, helium, krypton, neon, radon, and xenon. Hence, the total pressure of a given quantity of air is equal to the sum of the pressures exerted by each of these gases.
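A sketch of Dalton's law applied to air at 1 atm; the composition figures used are rounded, approximate values for illustration only.

```c
#include <stdio.h>

/* Dalton's law: total pressure is the sum of the partial pressures.
 * Approximate (rounded) volume fractions of dry air are used here.  */
int main(void)
{
    double total = 1.0;            /* total pressure, atm */
    double frac_n2  = 0.781;       /* nitrogen  (approximate)          */
    double frac_o2  = 0.209;       /* oxygen    (approximate)          */
    double frac_ar  = 0.009;       /* argon     (approximate)          */
    double frac_co2 = 0.001;       /* CO2 plus trace gases (approximate) */

    double p_n2  = frac_n2  * total;
    double p_o2  = frac_o2  * total;
    double p_ar  = frac_ar  * total;
    double p_co2 = frac_co2 * total;

    printf("partial pressures: N2=%.3f O2=%.3f Ar=%.3f CO2+=%.3f atm\n",
           p_n2, p_o2, p_ar, p_co2);
    printf("sum = %.3f atm\n", p_n2 + p_o2 + p_ar + p_co2);
    return 0;
}
```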
Henry's law states that the amount of gas dissolved in a liquid is directly proportional to the partial pressure of the gas above the surface of the solution. This applies only to gases such as oxygen and hydrogen that do not react chemically with liquids. On the other hand, hydrochloric acid will ionize when introduced to water: one or more of its electrons will be removed, and its atoms will convert to ions, which are either positive or negative in charge.
Inside a can or bottle of carbonated soda is carbon dioxide gas (CO2), most of which is dissolved in the drink itself. But some of it is in the space (sometimes referred to as "head space") that makes up the difference between the volume of the soft drink and the volume of the container.
At the bottling plant, the soda manufacturer adds high-pressure carbon dioxide to the head space in order to ensure that more CO2 will be absorbed into the soda itself. This is in accordance with Henry's law: the amount of gas (in this case CO2) dissolved in the liquid (soda) is directly proportional to the partial pressure of the gas above the surface of the solution—that is, the CO2 in the head space. The higher the pressure of the CO2 in the head space, the greater the amount of CO2 in the drink itself; and the greater the CO2 in the drink, the greater the "fizz" of the soda.
Once the container is opened, the pressure in the head space drops dramatically. Once again, Henry's law indicates that this drop in pressure will be reflected by a corresponding drop in the amount of CO2 dissolved in the soda. Over a period of time, the soda will release that gas, and will eventually go "flat."
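Because Henry's law is linear, a drop in head-space pressure implies a proportional drop in dissolved CO2. The sketch below uses a made-up proportionality constant purely to show the relationship, not a real solubility constant for CO2 in soda.

```c
#include <stdio.h>

/* Henry's law: dissolved gas concentration = k * partial pressure.
 * The constant k below is a made-up illustrative value.            */
int main(void)
{
    double k        = 0.03;     /* assumed mol/(L*atm)                          */
    double p_sealed = 3.0;      /* CO2 pressure in the head space, sealed can   */
    double p_open   = 0.0004;   /* roughly the CO2 partial pressure in open air */

    double c_sealed = k * p_sealed;
    double c_open   = k * p_open;

    printf("dissolved CO2, sealed: %.4f mol/L\n", c_sealed);
    printf("dissolved CO2, open:   %.6f mol/L (the drink goes flat)\n", c_open);
    return 0;
}
```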
A fire extinguisher consists of a long cylinder with an operating lever at the top. Inside the cylinder is a tube of carbon dioxide surrounded by a quantity of water, which creates pressure around the CO2 tube. A siphon tube runs vertically along the length of the extinguisher, with one opening near the bottom of the water. The other end opens in a chamber containing a spring mechanism attached to a release valve in the CO2 tube.
The water and the CO2 do not fill the entire cylinder: as with the soda can, there is "head space," an area filled with air. When the operating lever is depressed, it activates the spring mechanism, which pierces the release valve at the top of the CO2 tube. When the valve opens, the CO2 spills out in the "head space," exerting pressure on the water. This high-pressure mixture of water and carbon dioxide goes rushing out of the siphon tube, which was opened when the release valve was depressed. All of this happens, of course, in a fraction of a second—plenty of time to put out the fire.
Aerosol cans are similar in structure to fire extinguishers, though with one important difference. As with the fire extinguisher, an aerosol can includes a nozzle that depresses a spring mechanism, which in turn allows fluid to escape through a tube. But instead of a gas cartridge surrounded by water, most of the can's interior is made up of the product (for instance, deodorant), mixed with a liquid propellant.
The "head space" of the aerosol can is filled with highly pressurized propellant in gas form, and in accordance with Henry's law, a corresponding proportion of this propellant is dissolved in the product itself. When the nozzle is depressed, the pressure of the propellant forces the product out through the nozzle.
A propellant, as its name implies, propels the product itself through the spray nozzle when the latter is depressed. In the past, chlorofluorocarbons (CFCs)—manufactured compounds containing carbon, chlorine, and fluorine atoms—were the most widely used form of propellant. Concerns over the harmful effects of CFCs on the environment, however, have led to the development of alternative propellants, most notably hydrochlorofluorocarbons (HCFCs), CFC-like compounds that also contain hydrogen atoms.
A number of interesting things, some of them unfortunate and some potentially lethal, occur when gases experience a change in temperature. In these instances, it is possible to see the gas laws—particularly Boyle's and Charles's—at work.
There are a number of examples of the disastrous effects that result from an increase in the temperature of a product containing combustible gases, as with natural gas and petroleum-based products. In addition, the pressure on the gases in aerosol cans makes the cans highly explosive—so much so that discarded cans at a city dump may explode on a hot summer day. Yet there are other instances when heating a gas can produce positive effects.
A hot-air balloon, for instance, floats because the air inside it is not as dense as the air outside. By itself, this fact does not depend on any of the gas laws, but rather reflects the concept of buoyancy. However, the way in which the density of the air in the balloon is reduced does indeed reflect the gas laws.
According to Charles's law, heating a gas will increase its volume. Also, as noted in the first and second propositions regarding the behavior of gases, gas molecules are highly nonattractive to one another, and therefore, there is a great deal of space between them. The increase in volume makes that space even greater, leading to a significant difference in density between the air in the balloon and the air outside. As a result, the balloon floats, or becomes buoyant.
Although heating a gas can be beneficial, cooling a gas is not always a wise idea. If someone were to put a bag of potato chips into a freezer, thinking this would preserve their flavor, he would be in for a disappointment. Much of what maintains the flavor of the chips is the pressurization of the bag, which ensures a consistent internal environment in which preservative chemicals, added during the manufacture of the chips, can keep them fresh. Placing the bag in the freezer causes a reduction in pressure, as per Gay-Lussac's law, and the bag ends up a limp version of its earlier self.
Propane tanks and tires offer an example of the pitfalls that may occur by either allowing a gas to heat up or cool down by too much. Because most propane tanks are made according to strict regulations, they are generally safe, but it is not entirely inconceivable that an extremely hot summer day could cause a defective tank to burst. Certainly the laws of physics are there: an increase in temperature leads to an increase in pressure, in accordance with Gay-Lussac's law, and could lead to an explosion.
Because of the connection between heat and pressure, propane trucks on the highways during the summer are subjected to weight tests to ensure that they are not carrying too much of the gas. On the other hand, a drastic reduction in temperature could result in a loss in gas pressure. If a propane tank from Florida were transported by truck during the winter to northern Canada, the pressure would be dramatically reduced by the time it reached its destination.
In operating a car, we experience two examples of gas laws in operation. One of these, common to everyone, is that which makes the car run: the combustion of gases in the engine. The other is, fortunately, a less frequent phenomenon—but it can and does save lives. This is the operation of an air bag, which, though it is partly related to laws of motion, depends also on the behaviors explained in Charles's law.
With regard to the engine, when the driver pushes down on the accelerator, this activates a throttle valve that sprays droplets of gasoline mixed with air into the engine. (Older vehicles used a carburetor to mix the gasoline and air, but most modern cars use fuel injection, which sprays the air-gas combination without requiring an intermediate step.) The mixture goes into the cylinder, where the piston's upward stroke compresses it.
While the mixture is still compressed (high pressure, high density), an electric spark plug produces a flash that ignites it. The heat from this controlled explosion increases the volume of air, which forces the piston down into the cylinder. This opens an outlet valve, causing the piston to rise and release exhaust gases.
As the piston moves back down again, an inlet valve opens, bringing another burst of gasoline-air mixture into the chamber. The piston, whose downward stroke closed the inlet valve, now shoots back up, compressing the gas and air to repeat the cycle. The reactions of the gasoline and air are what move the piston, which turns a crankshaft that causes the wheels to rotate.
So much for moving—what about stopping? Most modern cars are equipped with an airbag, which reacts to sudden impact by inflating. This protects the driver and front-seat passenger, who, even if they are wearing seatbelts, may otherwise be thrown against the steering wheel or dashboard.
But an airbag is much more complicated than it seems. In order for it to save lives, it must deploy within 40 milliseconds (0.04 seconds). Not only that, but it has to begin deflating before the body hits it. An airbag does not inflate if a car simply goes over a bump; it only operates in situations when the vehicle experiences extreme deceleration. When this occurs, there is a rapid transfer of kinetic energy to rest energy, as with the earlier illustration of a stone hitting concrete. And indeed, if you were to smash against a fully inflated airbag, it would feel like hitting concrete—with all the expected results.
The airbag's sensor contains a steel ball attached to a permanent magnet or a stiff spring. The spring holds it in place through minor mishaps in which an airbag would not be warranted—for instance, if a car were simply to be "tapped" by another in a parking lot. But in a case of sudden deceleration, the magnet or spring releases the ball, sending it down a smooth bore. It flips a switch, turning on an electrical circuit. This in turn ignites a pellet of sodium azide, which fills the bag with nitrogen gas.
The events described in the above illustration take place within 40 milliseconds—less time than it takes for your body to come flying forward; and then the airbag has to begin deflating before the body reaches it. At this point, the highly pressurized nitrogen gas molecules begin escaping through vents. Thus as your body hits the bag, the deflation of the latter is moving it in the same direction that your body is going—only much, much more slowly. Two seconds after impact, which is an eternity in terms of the processes involved, the pressure inside the bag has returned to 1 atm.
Beiser, Arthur. Physics, 5th ed. Reading, MA: Addison-Wesley, 1991.
"Chemistry Units: Gas Laws." (Web site). <http://bio.bio.rpi.edu/MS99/ausemaW/chem/gases.hmtl> (February 21, 2001).
Laws of Gases. New York: Arno Press, 1981.
Macaulay, David. The New Way Things Work. Boston: Houghton Mifflin, 1998.
Mebane, Robert C. and Thomas R. Rybolt. Air and Other Gases. Illustrations by Anni Matsick. New York: Twenty-First Century Books, 1995.
"Tutorials—6." <http://www.chemistrycoach.com/tutorials-6.html> (February 21, 2001).
Temperature in relation to absolute zero (−273.15°C or −459.67°F). Its unit is the Kelvin (K), named after William Thomson, Lord Kelvin (1824-1907), who created the scale. The Kelvin and Celsius scales are directly related; hence, Celsius temperatures can be converted to Kelvins (for which neither the word nor the symbol for "degree" is used) by adding 273.15.
A statement, derived by the Italian physicist Amedeo Avogadro (1776-1856), which holds that as the volume of gas increases under isothermal and isobarometric conditions, the number of molecules (expressed in terms of mole number) increases as well. Thus the ratio of volume to mole number is a constant.
A statement, derived by English chemist Robert Boyle (1627-1691), which holds that for gases in isothermal conditions, an inverse relationship exists between the volume and pressure of a gas. This means that the greater the pressure, the less the volume and vice versa, and therefore the product of pressure multiplied by volume yields a constant figure.
A statement, derived by French physicist and chemist J. A. C. Charles (1746-1823), which holds that for gases in isobarometric conditions, the ratio between the volume and temperature of a gas is constant. This means that the greater the temperature, the greater the volume and vice versa.
A statement, derived by the English chemist John Dalton (1766-1844), which holds that the total pressure of a gas is equal to the sum of its partial pressures—that is, the pressure exerted by each component of the gas mixture.
A statement, derived by the French physicist and chemist Joseph Gay-Lussac (1778-1850), which holds that the pressure of a gas is directly related to its absolute temperature. Hence the ratio of pressure to absolute temperature is a constant.
A statement, derived by the English chemist William Henry (1774-1836), which holds that the amount of gas dissolved in a liquid is directly proportional to the partial pressure of the gas above the solution. This holds true only for gases, such as hydrogen and oxygen, that are capable of dissolving in water without undergoing ionization.
A proposition, also known as the combined gas law, that draws on all the gas laws. The ideal gas law can be expressed as the formula pV = nRT, where p stands for pressure, V for volume, n for number of moles, and T for temperature. R is known as the universal gas constant, a figure equal to 0.0821 atm · liter/mole · K.
A situation involving two variables, in which one of the two increases in direct proportion to the decrease in the other.
A reaction in which an atom or group of atoms loses one or more electrons. The atoms are then converted to ions, which are either wholly positive or negative in charge.
Referring to a situation in which temperature is kept constant.
Referring to a situation in which pressure is kept constant.
A unit equal to 6.022137 × 10²³ molecules. | http://www.scienceclarified.com/everyday/Real-Life-Chemistry-Vol-4/Gas-Laws.html | 13 |
88 | Introduction to the complex components
The study of complex numbers and their characteristics has a long history. It all started with questions about how to understand and interpret the solution of the simple quadratic equation z² = −1.
It was clear that the square of a real number can never be negative. But it was not clear how to get −1 from something squared.
This problem was intensively discussed in the 16th, 17th, and 18th centuries. As a result, mathematicians proposed a special symbol—the imaginary unit, which is represented by ⅈ and defined by ⅈ² = −1.
L. Euler introduced the word "complex" (1755) and was the first to use the letter ⅈ (1777) for denoting √−1. Later, C. F. Gauss (1831) introduced the name "imaginary unit" for ⅈ.
Accordingly, ⅈ² = −1 and (−ⅈ)² = −1, and the above quadratic equation has two solutions, as is expected for a quadratic polynomial: z = ⅈ and z = −ⅈ.
The imaginary unit ⅈ was interpreted in a geometrical sense as the point with coordinates (0, 1) in the Cartesian (Euclidean) x,y‐plane, with the vertical y-axis upward and the origin (0, 0). This geometric interpretation established the following representation of the complex number z through two real numbers x and y: z = x + ⅈ y = r (cos(φ) + ⅈ sin(φ)),
where r is the distance between the points z and 0, and φ is the angle between the line connecting the points z and 0 and the positive x-axis direction (the so-called polar representation).
The last formula leads to the basic relations: r = √(x² + y²), x = r cos(φ), y = r sin(φ), and φ = tan⁻¹(y/x) (with the quadrant fixed by the signs of x and y),
which describe the main characteristics of the complex number z—the so-called modulus (absolute value) |z| = r, the real part Re(z) = x, the imaginary part Im(z) = y, and the argument arg(z) = φ.
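A quick numeric check of these relations; the sample point z = 3 + 4ⅈ is just an example.

```c
#include <stdio.h>
#include <math.h>

/* Recover the polar characteristics of z = x + i*y from its Cartesian
 * parts: r = sqrt(x^2 + y^2), phi = atan2(y, x).                      */
int main(void)
{
    double x = 3.0, y = 4.0;      /* example point z = 3 + 4i */

    double r   = hypot(x, y);     /* modulus |z|              */
    double phi = atan2(y, x);     /* argument in (-pi, pi]    */

    printf("|z| = %.4f, arg(z) = %.4f rad\n", r, phi);
    printf("r*cos(phi) = %.4f, r*sin(phi) = %.4f\n", r * cos(phi), r * sin(phi));
    return 0;
}
```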
A new era in the theory of complex numbers and functions of complex arguments (analytic functions) arose from the investigations of L. Euler (1727, 1728). In a letter to Goldbach (1731) L. Euler introduced the notation ⅇ for the base of the natural logarithm, ⅇ = 2.71828182…, and he proved that ⅇ is irrational. Later on L. Euler (1740–1748) found a series expansion for the exponential function, which led to the famous and very basic formula connecting the exponential and trigonometric functions: ⅇ^(ⅈ φ) = cos(φ) + ⅈ sin(φ).
This is known as the Euler formula (although it was already derived by R. Cotes in 1714).
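A quick numerical confirmation of the Euler formula using the C99 complex type (the angle is arbitrary):

```c
#include <stdio.h>
#include <math.h>
#include <complex.h>

/* Check that exp(i*phi) equals cos(phi) + i*sin(phi) for a sample angle. */
int main(void)
{
    double phi = 0.75;                          /* arbitrary sample angle */

    double complex lhs = cexp(I * phi);         /* e^(i*phi)              */
    double complex rhs = cos(phi) + I * sin(phi);

    printf("exp(i*phi)          = %.6f + %.6fi\n", creal(lhs), cimag(lhs));
    printf("cos(phi)+i*sin(phi) = %.6f + %.6fi\n", creal(rhs), cimag(rhs));
    return 0;
}
```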
The Euler formula allows presentation of the complex number z, using polar coordinates, in the more compact form: z = r ⅇ^(ⅈ φ).
It also allows the logarithm of a complex number to be expressed through the formula: log(z) = log(r) + ⅈ φ.
Taking into account that the cosine and sine have period 2π, it follows that ⅇ^(ⅈ φ) has period 2π in φ: ⅇ^(ⅈ (φ + 2π)) = ⅇ^(ⅈ φ).
Generically, the logarithm function is the multivalued function: log(z) = log(|z|) + ⅈ (arg(z) + 2 π k), where k is an arbitrary integer.
For specifying just one value for the logarithm and one value of the argument φ for a given complex number z, the restriction −π < φ ≤ π for the argument φ is generally used.
During the 18th and 19th centuries many mathematicians worked on building the theory of the functions of complex variables, which was called the theory of analytic functions. Today this is a widely used theory, not only for the above‐mentioned four complex components (absolute value, argument, real and imaginary parts), but for complementary characteristics of a complex number such as the conjugate complex number z̄ and the signum (sign) sgn(z). J. R. Argand (1806, 1814) introduced the word "module" for the absolute value, and A. L. Cauchy (1821) was the first to use the word "conjugate" for complex numbers in the modern sense. Later K. Weierstrass (1841) introduced the notation |z| for the absolute value.
It was shown that the set of complex numbers and the set of real numbers have basic properties in common—they both are fields because they satisfy so-called field axioms. Complex and real numbers exhibit commutativity under addition and multiplication, described by the formulas z1 + z2 = z2 + z1 and z1 z2 = z2 z1.
Complex and real numbers also have associativity under addition and multiplication, described by the formulas (z1 + z2) + z3 = z1 + (z2 + z3) and (z1 z2) z3 = z1 (z2 z3),
and distributivity, described by the formulas z1 (z2 + z3) = z1 z2 + z1 z3 and (z1 + z2) z3 = z1 z3 + z2 z3.
(The set of rational numbers also satisfies all of the previous field axioms and is also a field. This set is countable, which means that each rational number can be numerated and placed in a definite position with a corresponding integer number. But the set of rational numbers does not include so-called irrational numbers like √2 or π. The set of irrational numbers is much larger and cannot be numerated. The sets of all real and complex numbers form uncountable sets.)
The great success and achievements of the complex number theory stimulated attempts to introduce not only the imaginary unit in the Cartesian (Euclidean) plane , but a similar special third unit in Cartesian (Euclidean) three-dimensional space , which can be used for building a similar theory of (hyper)complex numbers :
Unfortunately, such an attempt fails to fulfill the field axioms. Further generalizations to build the so‐called quaternions and octonions are needed to obtain mathematically interesting and rich objects.
Definitions of complex components
The complex components include six basic characteristics describing complex numbers—absolute value (modulus) |z|, argument (phase) arg(z), real part Re(z), imaginary part Im(z), complex conjugate z̄, and sign function (signum) sgn(z). It is impossible to define the real and imaginary parts of a complex number through other functions or complex characteristics. They are too basic, so their symbols can be described by simple sentences, for example, "Re(z) gives the real part of the number z," and "Im(z) gives the imaginary part of the number z."
All other complex components are defined by the following formulas:
Geometrically, the absolute value (or modulus) of a complex number z is the Euclidean distance from z to the origin, which can also be described by the formula: |z| = √(Re(z)² + Im(z)²).
Geometrically, the argument of a complex number z is the phase angle (in radians) that the line from 0 to z makes with the positive real axis. So, the complex number z can be presented by the formula: z = |z| ⅇ^(ⅈ arg(z)).
Geometrically, the real part of a complex number z is the projection of the complex point z on the real axis. So, the real part of the complex number can be presented by the formula: Re(z) = |z| cos(arg(z)) = (z + z̄)/2.
Geometrically, the imaginary part of a complex number z is the projection of the complex point z on the imaginary axis. So, the imaginary part of the complex number can be presented by the formula: Im(z) = |z| sin(arg(z)) = (z − z̄)/(2 ⅈ).
Geometrically, the complex conjugate of a complex number z is the complex point z̄, which is symmetrical to z with respect to the real axis. So, the conjugate value of the complex number can be presented by the formula: z̄ = Re(z) − ⅈ Im(z).
Geometrically, the sign function (signum) of a complex number z is the complex point that lies on the intersection of the unit circle and the line from 0 to z (if z ≠ 0). So, the sign of the complex number can be presented by the formula: sgn(z) = z/|z|.
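The C99 <complex.h> header provides most of these components directly; the sketch below evaluates them for one example point, building the conjugate and the sign from the primitives.

```c
#include <stdio.h>
#include <complex.h>

/* Evaluate the six complex components for an example point z. */
int main(void)
{
    double complex z = 3.0 - 4.0 * I;       /* example value */

    double         r    = cabs(z);          /* absolute value |z|        */
    double         phi  = carg(z);          /* argument in (-pi, pi]     */
    double         re   = creal(z);         /* real part                 */
    double         im   = cimag(z);         /* imaginary part            */
    double complex zbar = conj(z);           /* complex conjugate         */
    double complex sgn  = (r != 0.0) ? z / r : 0.0;  /* sign z/|z|, 0 at z = 0 */

    printf("|z|=%.3f arg=%.3f Re=%.3f Im=%.3f\n", r, phi, re, im);
    printf("conj(z) = %.3f%+.3fi, sgn(z) = %.3f%+.3fi\n",
           creal(zbar), cimag(zbar), creal(sgn), cimag(sgn));
    return 0;
}
```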
A quick look at the complex components
Here is a quick look at the graphics for the complex components over the complex z‐plane. The empty graphic indicates that the function value is not real.
Connections within the group of complex components and with other function groups
Representations through more general functions
All six complex component functions |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) cannot be easily represented by more generalized functions because they are not analytic functions of their arguments. But sometimes such representations can be found through Meijer G functions, for example:
Representations through other functions
All six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) satisfy numerous internal relations in which one component is expressed through other components by basic arithmetic operations or (compositions of) elementary functions. The most important of these relations are represented in the following table:
Other internal relations between complex components of the type , where and are different complex components that also exist. Some of them are shown here:
Here are some more formulas of the last type:
(here is the Heaviside theta function, also called the unit step function).
The first table can be rewritten using the notations :
The best-known properties and formulas for complex components
Real values for real arguments
For real values of the argument z, the values of all six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) are real.
Simple values at zero
The six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) have the following values for the argument z = 0:
arg(0) is not a uniquely defined number. Depending on the direction from which z approaches 0, the limit of arg(z) can take any value in the interval (−π, π].
Specific values for specialized variable
The six complex components , , , , , and have the following values for some concrete numeric arguments:
Restricted arguments have the following formulas for the six complex components , , , , , and :
The values of complex components , , , , , and at any infinity can be described through the following:
All six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) are not analytic functions. None of them fulfills the Cauchy–Riemann conditions, and as such the value of the derivative depends on the direction. The functions , , , and are real‐analytic functions of the variable (except, maybe, ). The real and the imaginary parts of and are real‐analytic functions of the variable .
Sets of discontinuity
The four complex components |z|, Re(z), Im(z), and z̄ are continuous functions everywhere in the complex plane.
The function sgn(z) has a discontinuity at the point z = 0.
The function arg(z) is a single‐valued, continuous function on the z‐plane cut along the interval (−∞, 0), where it is continuous from above. Its behavior can be described by the following formulas:
All six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) do not have any periodicity.
Parity and symmetry
All six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) have mirror symmetry:
The absolute value |z| is an even function. The four complex components Re(z), Im(z), z̄, and sgn(z) are odd functions. The argument arg(z) is an odd function for almost all z:
The six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) have the following homogeneity properties:
Some complex components have scale symmetry:
The functions and with real have the following series expansions near point :
The function with real has the following contour integral representation:
The functions and with real have the following limit representations:
The last two representations are sometimes called generalized Padé approximations.
The values of all complex components , , , , , and at the points , , –ⅈ z, , , , , and are given by the following identities:
The values of all complex components , , , , , and at the points , , and are described by the following table:
Some complex components can be easily evaluated in more general cases of the points including symbolic sums and products of , , for example:
The previous tables and formulas can be modified or simplified for particular cases when some variables become real or satisfy special restrictions, for example:
Taking into account that complex components have numerous representations through other complex components and elementary functions such as the logarithm, exponential function, or the inverse tangent function, all of the previous formulas can be transformed into different equivalent forms. Here are some of the resulting formulas for the power function :
Similar identities can be derived for the exponent functions, such as:
Some arithmetical operations involving complex components or elementary functions of complex components are:
The next two tables describe all the complex components applied to all complex components , , , , , and at the points and :
The derivatives of five complex components , , , , and at the real point can be interpreted in a real‐analytic or distributional sense and are given by the following formulas:
where δ(x) is the Dirac delta function.
It is impossible to make a classical, direction-independent interpretation of these derivatives for complex values of variable because the complex components do not fulfill the Cauchy-Riemann conditions.
The indefinite integrals of some complex components at the real point can be represented by the following formulas:
The definite integrals of some complex components in the complex plane can also be represented through complex components, for example:
Some definite integrals including absolute values can be easily evaluated, for example (in the Hadamard sense of integration, the next identity is correct for all complex values of ):
Fourier integral transforms of the absolute value and signum functions and can be evaluated through generalized functions:
Laplace integral transforms of these functions can be evaluated in a classical sense and have the following values:
The absolute value function for real satisfies the following simple first-order differential equation understandable in a distributional sense:
In a similar manner:
All six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) satisfy numerous inequalities. The best known are the so-called triangle inequalities for absolute values: |z1 + z2| ≤ |z1| + |z2| and |z1 + z2| ≥ ||z1| − |z2||.
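A numeric spot-check of the first triangle inequality for two arbitrary sample points:

```c
#include <stdio.h>
#include <complex.h>

/* Spot-check the triangle inequality for two arbitrary sample points. */
int main(void)
{
    double complex z1 = 1.0 + 2.0 * I;
    double complex z2 = -3.0 + 0.5 * I;

    double lhs = cabs(z1 + z2);
    double rhs = cabs(z1) + cabs(z2);

    printf("|z1+z2| = %.4f <= |z1|+|z2| = %.4f : %s\n",
           lhs, rhs, (lhs <= rhs) ? "holds" : "violated");
    return 0;
}
```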
Some other inequalities can be described by the following formulas:
The six complex components |z|, arg(z), Re(z), Im(z), z̄, and sgn(z) have sets of zeros described by the following formulas:
Applications of complex components
All six complex components are used throughout mathematics, the exact sciences, and engineering. | http://functions.wolfram.com/ComplexComponents/Arg/introductions/ComplexComplements/ShowAll.html | 13 |
95 | Friday, 24 May 2013
Written by Andreas Roth
Choosing the right data type
Choosing the right data type is not as easy as you may think, especially when you are planning to write a program which should run without modification on several systems, platforms and compilers. There are several basic data types like integer, character, real and maybe even strings. In the following I will explain the issue using C++ as an example. But the key point also applies to any other programming language.
Data types in C/C++
In C/C++ there are the data types char, short, int and long to represent integers, and each of these data types can be either signed or unsigned. This makes 8 different integer types in total. For real numbers there are two (or three) different types: float, double and long double. To make the situation more complicated, the different data types do not always have the same value range and representation on different platforms and/or compilers. The question is when to use which type to accomplish the task at hand.
The value range of a data type depends on its width and on whether it is signed or unsigned. A simple signed char can hold values from -128 to +127. For some problems this is enough, but you get into trouble if you want to store the value 200 in a char. Check the value range of the numbers in the problem at hand and choose the data type which fits best, but also ask yourself: is it possible that even larger or smaller numbers can arise? Once you have determined the maximum value range, the next question is whether to use a signed or an unsigned version of the data type.
Signed vs. Unsigned
Very often the compiler raises a warning about a comparison of a signed and an unsigned value, or about a signed-to-unsigned assignment. You may ignore those warnings if you are very certain that there is no problem, but in some cases you may have introduced a serious bug.
To demonstrate the problem of signed and unsigned, check out the following example. The task is to generate several sine values and store them in an array. So you write a simple loop which puts the calculated sine values into an array:
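The listing itself is missing from this copy of the post. The following sketch reconstructs it; the amplitude (32000), the period (50 samples) and the array length are inferred from the output shown below and may differ from the author's exact code.

    #include <stdio.h>
    #include <math.h>

    #define NUM_SAMPLES 100

    int main(void)
    {
        /* The bug discussed in the article: the array is unsigned,
           but sine values can be negative. */
        unsigned short sine[NUM_SAMPLES];
        int n;

        for (n = 0; n < NUM_SAMPLES; n++) {
            /* Compute a signed 16-bit sine sample ... */
            short sample = (short)(32000.0 * sin(2.0 * M_PI * n / 50.0));
            /* ... and store it in the unsigned array: negative samples
               wrap around modulo 65536, producing the large values below. */
            sine[n] = sample;
        }

        for (n = 0; n < NUM_SAMPLES; n++)
            printf("sine[%3d]=%hu ", n, sine[n]);
        printf("\n");

        return 0;
    }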
Try to compile this example (e.g. gcc -o sine sine.c -lm -pedantic) and see for yourself that the result is not what you might expect. The program outputs the following lines:
sine[ 0]=0 sine[ 1]=4010 sine[ 2]=7958 sine[ 3]=11780 sine[ 4]=15416 sine[ 5]=18809 sine[ 6]=21905 sine[ 7]=24656 sine[ 8]=27018 sine[ 9]=28954 sine[ 10]=30433 sine[ 11]=31433 sine[ 12]=31936 sine[ 13]=31936 sine[ 14]=31433 sine[ 15]=30433 sine[ 16]=28954 sine[ 17]=27018 sine[ 18]=24656 sine[ 19]=21905 sine[ 20]=18808 sine[ 21]=15415 sine[ 22]=11779 sine[ 23]=7957 sine[ 24]=4010 sine[ 25]=0 sine[ 26]=61526 sine[ 27]=57578 sine[ 28]=53756 sine[ 29]=50120 sine[ 30]=46727 sine[ 31]=43631 sine[ 32]=40880 sine[ 33]=38518 sine[ 34]=36582 ...
As you can see, the result is as expected up to n=25, but what happens when n gets larger? That is the result of a mixture of signed and unsigned: the array of sine values is defined as unsigned, but the result of sin() is signed. Note that in this example the compiler does not even complain about the problem. The fix is very easy (of course); change the type of the sine array from unsigned to signed and the results are as expected:
... sine[ 22]=11779 sine[ 23]=7957 sine[ 24]=4010 sine[ 25]=0 sine[ 26]=-4010 sine[ 27]=-7958 ...
One important point of choosing the right data type is ensuring that your code remains portable. If you use specific data types that are only available on one compiler or system, you have to put in much effort to port the code to another compiler or system. Portability can be accomplished easily if you use the data types defined by the standard. These data types must be present in any compiler that claims to conform to the standard. For example, the C99 standard defines a header file called stdint.h which provides the type int32_t for a signed 32-bit integer and uint16_t for an unsigned 16-bit integer.
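As a minimal illustration (not part of the original post), the fixed-width types can be used like this:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  balance = -125000;   /* always exactly 32 bits, signed   */
        uint16_t port    = 8080;      /* always exactly 16 bits, unsigned */

        /* PRId32 and PRIu16 expand to the correct printf length
           modifiers for these types on every conforming platform. */
        printf("balance=%" PRId32 " port=%" PRIu16 "\n", balance, port);
        return 0;
    }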
Sometimes you do not care much whether 32 or 64 bits are used to represent a simple number. The best example of this are counter variables in loops. In such cases you can simply use an integer type without specifying the exact width; for example, to count from 0 to 30 you can simply use an int or unsigned. But keep the value range of the data type and the signed-versus-unsigned issue in mind.
Special data types for special situations
Most available libraries introduce their own data types for special use cases. These data types are based on the native data types, but they have one advantage: they improve the readability of your program.
A variable n of type unsigned does not tell the reader that it is going to be used to store a process identifier. So if you write
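(The snippet that belongs here is missing from this copy; it was presumably a declaration along the lines of the following, where pid_t comes from <sys/types.h>.)

    pid_t n;    /* instead of: unsigned n; */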
instead, it is much clearer for which purpose the variable is meant. At this point you may say, "But I can also accomplish this by choosing the right name for the variable!" You are right, but choosing the right data type can further increase readability. And there is another reason to use these special data types instead of the native ones. Let us assume that on your OS the maximum number of processes is 2^16, so you choose an unsigned short to represent the process identifier. After several years and several thousand lines of code the maximum number is increased to 2^32. Now you use an unsigned int for the PIDs, and you have to change every occurrence of unsigned short to unsigned int wherever it has been used to hold a process identifier. If you had just used the type pid_t instead, the change would be quite simple: only the definition of the type pid_t needs to change, and that's all.
There are several well-known types which should be used in certain situations. A very good example is the type size_t in C and C++. It is meant to be used to measure the size or length of an object or buffer. Many functions of the C/C++ standard library use size_t when the size of a buffer needs to be specified; for example, strlen returns the length of the given string in characters as a size_t. But many people use an unsigned or an int to represent the size of a buffer. Using size_t in such cases makes it easier to understand the function and its parameters. So choose size_t whenever you need the length or size of an object, buffer or string.
Choosing a data type may also have some influence on the performance of your program. 64-bit arithmetic on a 32-bit machine must be implemented by the compiler (or in the compiler's runtime libraries), since most 32-bit machines do not have 64-bit arithmetic in their regular instruction set. For example, an addition of two 64-bit values must be carried out using several instructions to get the result, whereas an addition of two 32-bit values can be done with a single instruction. For some applications this difference matters, especially if such operations are performed very frequently.
Normally the endianness of the values does not matter if your program does not interact with programs on other platforms. But if you intend to exchange information over a network, you have to make sure that every participant uses the same representation of your data. On little-endian machines (like AMD or Intel) integers are stored with the least significant byte at the lowest memory address. Big-endian machines (like Motorola and some PowerPCs), on the other hand, put the most significant byte at the lowest memory address.
Most data transferred over a network is sent in big-endian byte order, which is therefore also called network byte order. This ensures that machines can communicate with each other even if they use very different hardware.
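As a small illustration (not part of the original article), the POSIX byte-order conversion functions take care of this:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>
    #include <arpa/inet.h>   /* htonl() and ntohl() */

    int main(void)
    {
        uint32_t host_value = 0x12345678;

        /* Convert to network (big-endian) byte order before sending ... */
        uint32_t wire_value = htonl(host_value);

        /* ... and back to host byte order after receiving. */
        uint32_t received = ntohl(wire_value);

        printf("host=0x%08" PRIx32 " wire=0x%08" PRIx32 " back=0x%08" PRIx32 "\n",
               host_value, wire_value, received);
        return 0;
    }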
As you have seen, choosing the right data type is sometimes not easy at all. You have to consider which value range is required and whether you need negative values or not. We have seen that you can reduce the effort of porting your code from one system to another by using portable data types. Two further points mentioned in this article are performance considerations and endianness. After reading this article you should be able to choose the right data type for your problem.
|Copyright © by AR Soft 2005-2013| | http://www.arsoft-online.de/index.php?option=com_content&view=article&id=19:choosing-the-right-data-type&catid=18:programming&Itemid=42 | 13 |
56 | Differentiation is a subfield of calculus, and it has many applications in the real world. It is a very important part of mathematics and is used in many scientific fields. Differentiation can be defined as the process of finding the derivatives of functions, and it can be used as a tool to calculate or study the rate of change of one quantity with respect to a change in another. The most common example is the calculation of velocity and acceleration. Velocity is given by v = dx / dt, where 'x' is the distance covered by a moving body in time 't'.
Similarly acceleration can be given by can be given by a = dv / dt as acceleration is rate of change of velocity with respect to time. Here ' a ' is the acceleration ' v ' is the velocity and ' t ' is time.
Now we will see some other applications of differentiation-
1) Normals and tangents - Differentiation can be used to find the tangents and normals of a curve, for example when we are studying the different forces acting on a body.

Normal - The line perpendicular to the tangent of a curve at the point of contact is known as the normal.

The slope of the tangent is given by slope = dy / dx.
2) Curvilinear motion - Just as we can calculate the velocity and acceleration of a body moving in a straight line, we can use differentiation for curvilinear motion, in which an object moves along a curved path. Here we express x and y as functions of time; this is known as parametric form. The horizontal component of velocity is given by vx = dx / dt and the vertical component by vy = dy / dt.

The magnitude of the velocity is v = √(vx² + vy²).

The direction θ of the motion can be calculated from tan θ = vy / vx.
3) Related rates - When two quantities vary with time and are related, each can be expressed in terms of the other. We then differentiate both sides of the relation with respect to time, d / dt.
4) Sketching a curve - We can sketch a curve using differentiation by finding its maxima and minima. First we find the first derivative dy / dx, or y', and set it equal to 0, that is y' = 0; the solutions are the critical points. Then we calculate the second derivative d²y / dx², or y''. If y'' > 0 at a critical point, the curve has a local minimum there; if y'' < 0, it has a local maximum.
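For instance (an illustrative example, not taken from the source): for y = x³ − 3x we get y' = 3x² − 3, which is zero at x = ±1; since y'' = 6x, we have y''(1) = 6 > 0, so there is a local minimum at x = 1, and y''(−1) = −6 < 0, so there is a local maximum at x = −1.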
Slope, length and area for polar curves is one of the most interesting, and somewhat complex, topics of calculus. In this section we look at areas enclosed by polar curves…

A secant line is defined as a straight line that passes through any two points lying on a curve. Let us assume a function given as y = f(x), where 'y' depends on the independent variable 'x'. Thus a function specifies…
Slope in derivatives is a simple and very useful concept in calculus. Here we learn how to find the slope of different types of curves with the help of the derivative, go through several ways of finding the slope, and solve some problems on evaluating slopes from derivatives.

The tangent of the angle a line makes with the positive direction of the x-axis, measured in the anticlockwise sense, is called the slope or gradient of the line. The slope of a line is generally denoted by m, where

m = tan t, where t is the angle the line makes with the positive direction of the x-axis in the anticlockwise sense…
Normal differentiation is a method of obtaining the rate at which a dependent variable or output, say 'y', changes with respect to a change in an independent variable or input. This rate of change is called the derivative of y with respect to x. The physical meaning of differentiation is that if a graph is plotted between a dependent var…
In geometry, the technique of describing the basic shape of a curve in a plane is called curve sketching. Curve sketching in calculus is used for solving mathematical problems about shapes in geometry and for solving typical problems such as finding the area or the maximum and minimum values of a given equation or curve. For sketching a curve, a sequence of standard steps is followed. | http://www.tutorcircle.com/application-of-differentiation-t17Jp.html | 13
51 | How much room?
In this unit we explore the amount of room we have in our classrooms. We use this to decide what the "ideal" classroom size would be.
construct a square metre and use it to measure areas
estimate and measure to the nearest square metre
When students can measure areas effectively using non-standard units, they are ready to move to the use of standard units. The motivation for moving to this stage often follows from experiences where the students have used different non-standard units for the same area and have realised that consistency in the units used would allow for easier and more accurate communication of area measures.
Students’ measurement experiences must enable them to:
- develop an understanding of the size of a square metre and a square centimetre;
- estimate and measure using square metres and square centimetres.
The usual sequence used in primary school is to introduce the square centimetre and then the square metre.
The square centimetre is introduced first, because it is small enough to measure common objects. The size of the square centimetre can be established by constructing it, for example by cutting 1-centimetre pieces of paper. Most primary classrooms also have a supply of 1-cm cubes that can be used to measure the area of objects. An appreciation of the size of the unit can be built up through lots of experience in measuring everyday objects. The students should be encouraged to develop their own reference for a centimetre, for example, a fingernail or a small button.
As the students become familiar with the size of the square centimetre they should be given many opportunities to estimate before using precise measurement. They can also be given the task of using centimetre-squared paper to create different shapes of the same area.
A square metre can be established using a similar sequence of experiences in constructing the unit and then using it to measure appropriate objects. An important learning point is that one square metre of area can take many shapes whereas a one-metre square must be a square with an area of one square metre.
Today we look at a metre square and use it to estimate and then measure the area of our classroom.
- Draw a metre square on the floor using chalk.
How many students do you think would be able to sit in this space?
How many of these would we need to fit all the students in our class?
- Tell the students that the shape is a metre square and that its area is one square metre. Ask the students to estimate the area of the classroom in square metres. Write the estimates on the board.
Why do you think that many?
How did you work it out?
- Ask the students to plan in small groups how they could work out the number of metre squares in the classroom. Tell the students that they need to record their ideas to share with the rest of the class.
- Share ideas for working out the area. Ideas could include:
- Covering the floor area with metres squares drawn in chalk.
- Drawing metre squares across the width of the room and the length and then multiplying the number.
- As a class calculate the area of the classroom in metre squares using one of the approaches suggested. At this stage measure using whole metres only.
- Discuss how to make the measurement of the room more accurate.
What do we do about the bits left over that we haven’t included in our measurement? (join together to make metre squares or measure using metres and centimetres, for example 6 metres and 45 centimetres)
How do we find the area when the length and width are measured in metres and centimetres and we want to know the area in square metres? (A worked example follows this list.)
- In order to calculate the area the measurements must be expressed in a common unit. This means that the students need to be able to express metres and centimetres as metres.
For example, 6 metres and 45 centimetres = 6.45 metres
- Recalculate the area of the room using metres and centimetre measurements.
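As a worked example (the measurements here are illustrative, not taken from the unit): a room measuring 6.45 metres by 7.20 metres has an area of 6.45 × 7.20 ≈ 46.4 square metres.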
Over the next 2-3 days we measure the area of other rooms in the school using square metres. We also record how many students are in each classroom. We use this information to draw a plan of the school.
- Tell the class that the principal wants to know if the classrooms are the right size for the numbers of students in them. To make this activity more real you could get the principal to write a letter to the class or come and ask them to undertake this activity.
- With the class list the rooms in the school that need to be measured. You may extend this to the library, hall, playing areas etc.
- Discuss ways of calculating areas when the rooms are not rectangular.
- Allocate the rooms to be measured to small groups of students. If your school has a small number of rooms then groups can measure the same room and compare measurements. Tell them to draw a plan of the room and record the measurements they make. Remind them to ask how many students are in the room.
- Record the information on a class chart or spreadsheet with columns for the room, its area (e.g. 44.3 square metres) and the number of students.
- Discuss the class chart:
Which classroom has the largest area?
Which room has the smallest area?
Do you think that the classes are in the right rooms? Why? Why not?
Over 1-2 days we use the information gathered to draw a scale map of the school with areas recorded.
- Discuss how to use the information gathered to draw a scale map of the school with areas recorded. Depending on the students’ prior experience with scale maps you could look at some building plans and the scales used. For the school map a scale of 1 cm = 1 metre is reasonable.
- Get each of the small groups of students to draw a scale drawing of the classroom they measured.
- Compile a scale map of the school.
- As a class write statements to accompany the map.
- Share the map and statements with the principal as either an oral or written presentation. | http://www.nzmaths.co.nz/resource/how-much-room | 13 |
50 | Linear Regression in R
In this activity we will explore the relationship between a pair of variables. We will first learn to create a scatter plot for the given data, then we will learn how to craft a "Line of Best Fit" for our plot.
The data set in the table that follows is taken from The Data and Story Library. Researchers measured the heights of 161 children in Kalama, a village in Egypt. The heights were averaged and recorded each month, with the study lasting several years. The data is presented in the table that follows.
|Mean Height versus Age|
|Age in Months||Average Height in Centimeters|
|18||76.1|
|19||77.0|
|20||78.1|
|21||78.2|
|22||78.8|
|23||79.7|
|24||79.9|
|25||81.1|
|26||81.2|
|27||81.8|
|28||82.8|
|29||83.5|
To build our scatter plot, we must first enter the data in R. This is a fairly straightforward task. Because the ages are incremented in months, we can use the command age=18:29, which uses R start:finish syntax to begin a vector at the number 18, then increment by 1 until the number 29 is reached.
We can view the result by typing age at R's prompt.
> age
 [1] 18 19 20 21 22 23 24 25 26 27 28 29
Entering the average heights is a bit more tedious, but straightforward.
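The assignment itself is missing from this copy of the page; it was presumably entered along these lines (values taken from the table above):

height = c(76.1, 77.0, 78.1, 78.2, 78.8, 79.7, 79.9, 81.1, 81.2, 81.8, 82.8, 83.5)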
Again, we can view the result by entering height at R's prompt.
> height
 [1] 76.1 77.0 78.1 78.2 78.8 79.7 79.9 81.1 81.2 81.8 82.8 83.5
We can check that age and height have the same number of elements with R's length command.
> length(age)
[1] 12
> length(height)
[1] 12
It is now a simple matter to produce a scatterplot of height versus age.
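The plotting command does not appear in this copy; in its simplest form it would be something like:

plot(age, height)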
The result of this command is shown in Figure 1.
Figure 1. A scatterplot of height versus age data.
The Line of Best Fit
Note that the data in Figure 1 is approximately linear. As the age increases, the average height increases at an approximately constant rate. One could not fit a single line through each and every data point, but one could imagine a line that is fairly close to each data point, with some of the data points appearing above the line, others below for balance. In this next activity, we will calculate and plot the Line of Best Fit, or the Least Squares Regression Line.
We will use R's lm command to compute a "linear model" that fits the data in Figure 1. The command lm is a very sophisticated command with a host of options (type ?lm to view a full description), but in its simplest form it is quite easy to use. The syntax height~age is called a model equation and is a very sophisticated R construct. We are using its most simple form here. The symbol separating "height" and "age" in the syntax height~age is a "tilde." It is located on the key to the immediate left of the #1 key on your keyboard. You must use the Shift key to access the "tilde."
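The command that produces res is not shown in this copy; it was presumably:

res = lm(height~age)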
Let's examine what is returned in the variable res.
> res

Call:
lm(formula = height ~ age)

Coefficients:
(Intercept)          age
     64.928        0.635
Note the "Coefficients" part of the contents of res. These coefficients are the intercept and slope of the line of best fit. Essentially, we are being told that the equation of the line of best fit is:
height = 0.635 age + 64.928.
Note that this result has the form y = m x + b, where m is the slope and b is the intercept of the line.
It is a simple matter to superimpose the "Line of Best Fit" provided by the contents of the variable res. The command abline will use the data in res to draw the "Line of Best Fit."
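The call itself is missing here; it would simply be:

abline(res)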
The result of this command is the "Line of Best Fit" shown in Figure 2.
Figure 2. The abline command superimposes the "Line of Best Fit" on our previous scatterplot.
The command abline is a versatile tool. You can learn more about this command by entering ?abline and reading the resulting help file.
Now that we have the equation of the line of best fit, we can use the equation to make predictions. Suppose, for example, that we wished to estimate the average height of a child at age 27.5 months. One technique would be to enter this value into the equation of the line of best fit.
height = 0.635 age + 64.928 = 0.635(27.5) + 64.928 = 82.3905.
We can use R as a simple calculator to perform this calculation.
> 0.635*27.5+64.928
[1] 82.3905
Thus, the average height at age 27.5 months is 82.3905 centimeters.
We hope you enjoyed this introduction to the principles of Linear Regression in the R system. We encourage you to explore further. Use the commands ?plot, ?lm, and ?abline to learn more about producing scatterplots and performing linear regression. | http://msenux.redwoods.edu/math/R/regression.php | 13 |
62 | Surface area is a two-dimensional measure of a three-dimensional geometric figure. It measures the outside surface, or the combined areas of each face. It can be found by decomposing the figure into various flat pieces, and then we can easily figure out the area of each. Surface area is always given in square units.
In the picture above, the first green figure is a cube with side measures of 2 units. There are six faces, each with area 2 X 2 = 4, so the surface area is 6 X 4 = 24 units squared.
The second figure is a purple "brick" or rectangular prism, with measures 1 X 1 X 4. We will need to find the area of each face. The bottom, top, front and back faces are the same, with area 1 X 4 = 4 square units each. Four of these add up to 16 square units. Then the end pieces are squares with area 1 X 1 = 1 square unit each, and both add up to 2 square units. Adding up the areas of all the faces gives us 16 + 2 = 18 square units.
The blue cylinder is made up of two circles and a rectangle (think of the label on a soup can). We can find the area of each circle of radius 2 by computing (2)²(pi) ≈ 12.56 square units. The rectangle "label" is three units high, and the length of the label as it goes around the entire "can" is given by the circumference of the circular base, 2(pi)(2) ≈ 12.56 units. Thus the rectangle's area is 3(12.56) ≈ 37.68 square units. Here's a puzzler for you: why are the base area and the circumference the same number?
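Putting the pieces together (a step the original stops short of): the total surface area of the cylinder is the two circles plus the label, 2(12.56) + 37.68 ≈ 62.8 square units, which matches the closed formula SA = 2πr² + 2πrh = 2π(2)² + 2π(2)(3) ≈ 62.83 square units.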
The yellow square pyramid can be decomposed into a square base and four congruent triangular faces. The area of the base is 2 X 2 = 4 square units. The area of one triangular face can be found once we identify the slant height of that face. Since the height usually given is the altitude of the pyramid (from the apex to the center of the square base), here 5 units, we will need to use the Pythagorean Theorem to find the slant height. Using the altitude as one leg, and half the length of the square base as the other leg, we find the hypotenuse (face height) as follows: 5² + 1² = 26. Taking the square root, we find the face height is about 5.1 units. Then each triangular face has an area of 1/2(2)(5.1) = 5.1 square units. Since there are four triangular faces, we add their areas to the base area to get 4(5.1) + 4 = 24.4 square units.
The pink pentagonal prism consists of two pentagons bases and five square sides. The square sides each have area 1 X 1 = 1 square unit, so five of them have an area of 5 square units. The pentagons can be divided into five congruent triangles inside each pentagon. The height of each triangle is .69 units, so their area is (1/2)(1)(.69) = .345 square units. For five triangles, the area would be 1.725 which we double for both bases to 3.45 and add to the five squares for a total surface area of 8.45 square units for the prism. | http://math.youngzones.org/surf_area.html | 13 |
66 | In physics, acceleration is defined as the rate of change of velocity—that is, the change of velocity with time. An object is said to undergo acceleration if it is changing its speed or direction or both. A device used for measuring acceleration is called an accelerometer.
An object traveling in a straight line undergoes acceleration when its speed changes. An object traveling in a uniform circular motion at a constant speed is also said to undergo acceleration because its direction is changing.
The term "acceleration" generally refers to the change in instantaneous velocity. Given that velocity is a vector quantity, acceleration is also a vector quantity. This means that it is defined by properties of magnitude (size or measurability) and direction.
In the strict mathematical sense, acceleration can have a positive or negative value. A negative value for acceleration is commonly called deceleration.
The dimension of acceleration is length/time². In SI units, acceleration is measured in meters per second squared (m·s⁻²).
Then, for the definition of instantaneous acceleration:

a = dv/dt = d²x/dt²,

and equivalently v = ∫ a dt, i.e. velocity can be thought of as the integral of acceleration with respect to time. (Note: this can be a definite or an indefinite integration.)
- a is the acceleration vector (as acceleration is a vector, it must be described with both a direction and a magnitude)
- v is the velocity function
- x is the position function (also known as displacement or change in position)
- t is time
- d is Leibniz's notation for differentiation
When velocity is plotted against time on a velocity vs. time graph, the acceleration is given by the slope, or the derivative of the graph.
If used with SI standard units (metres per second for velocity; seconds for time) this equation gives the units of m/(s·s), or m/s² (read as "metres per second per second," or "metres per second squared").
An average acceleration, or acceleration over time, ā can be defined as:
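The defining formula is not reproduced in this copy; with the symbols listed below it reads ā = (v − u) / t, i.e. the change in velocity divided by the elapsed time.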
- u is the initial velocity (m/s)
- v is the final velocity (m/s)
- t is the time interval (s) elapsed between the two velocity measurements (also written as "Δt")
Transverse acceleration (perpendicular to velocity), as with any acceleration which is not parallel to the direction of motion, causes change in direction. If it is constant in magnitude and changing in direction with the velocity, we get a circular motion. For this centripetal acceleration we have:
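The formula itself is not reproduced here; the standard expression for centripetal acceleration on a circle of radius r is a = v²/r = ω²r, where v is the speed and ω the angular velocity.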
One common unit of acceleration is g, one g (more specifically, gₙ or g₀) being the standard uniform acceleration of free fall, 9.80665 m/s², caused by the gravitational field of Earth at sea level at about 45.5° latitude.
Jerk is the rate of change of an object's acceleration over time.
As a result of its invariance under the Galilean transformations, acceleration is an absolute quantity in classical mechanics.
Relation to relativity
After defining his theory of special relativity, Albert Einstein realized that forces felt by objects undergoing constant proper acceleration are indistinguishable from those in a gravitational field, and thus defined general relativity that also explained how gravity's effects could be limited by the speed of light.
If you accelerate away from your friend, you could say (given your frame of reference) that it is your friend who is accelerating away from you, although only you feel any force. This is also the basis for the popular twin paradox, which asks why only one twin ages less after moving away from his sibling at near light-speed and then returning, since the travelling twin can say that it was the other twin who was moving.
General relativity solved the "why does only one object feel accelerated?" problem which had plagued philosophers and scientists since Newton's time (and caused Newton to endorse absolute space). In special relativity, only inertial frames of reference (non-accelerated frames) can be used and are equivalent; general relativity considers all frames, even accelerated ones, to be equivalent. With changing velocity, accelerated objects exist in warped space (as do those that reside in a gravitational field). Therefore, frames of reference must include a description of their local spacetime curvature to qualify as complete.
An accelerometer inherently measures its own motion (locomotion). It thus differs from a device based on remote sensing. Accelerometers can be used to measure vibration on cars, machines, buildings, process control systems and safety installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and speed with or without the influence of gravity.
One application for accelerometers is to measure gravity, wherein an accelerometer is specifically configured for use in gravimetry. Such a device is called a gravimeter. Accelerometers are being incorporated into more and more personal electronic devices such as mobile phones, media players, and handheld gaming devices. In particular, more and more smartphones are incorporating accelerometers for step counters, user interface control, and switching between portrait and landscape modes.
Accelerometers are used along with gyroscopes in inertial guidance systems, as well as in many other scientific and engineering systems. One of the most common uses for micro electro-mechanical system (MEMS) accelerometers is in airbag deployment systems for modern automobiles. In this case, the accelerometers are used to detect the rapid negative acceleration of the vehicle to determine when a collision has occurred and the severity of the collision.
Accelerometers are perhaps the simplest MEMS device possible, sometimes consisting of little more than a suspended cantilever beam or proof mass (also known as seismic mass) with some type of deflection sensing and circuitry. MEMS Accelerometers are available in a wide variety of ranges up to thousands of gn's. Single axis, dual axis, and three axis models are available.
The widespread use of accelerometers in the automotive industry has pushed their cost down dramatically.
The Wii Remote for the Nintendo Wii console contains accelerometers for measuring movement and tilt to complement its pointer functionality.
Within the last several years, Nike, Polar and other companies have produced and marketed sports watches for runners that include footpods, containing accelerometers to help determine the speed and distance for the runner wearing the unit.
More recently, Apple Computer and Nike have combined the footpod, with Apple's iPod nano to provide real-time audio feedback to the runner on his/her pace and distance. It is known as the Nike + iPod Sports kit.
A small number of modern notebook computers feature accelerometers to automatically align the screen depending on the direction the device is held. This feature is only relevant in Tablet PCs and smartphones, including the iPhone.
Some laptops' hard drives utilize an accelerometer to detect when falling occurs. When low-g condition is detected, indicating a free-fall and an expected shock, the write current is turned off so that data on other tracks is not corrupted. When the free-fall and shock ends, the data can be rewritten to the desired track, thus negating the effects of the shock.
Camcorders use accelerometers for image stabilization.
Still cameras use accelerometers for anti-blur capturing. The camera holds off snapping the CCD "shutter" when the camera is moving. When the camera is still (if only for a millisecond, as could be the case for vibration), the CCD is "snapped."
Some digital cameras contain accelerometers to determine the orientation of the photo being taken and some also for rotating the current picture when viewing.
The Segway and balancing robots use accelerometers for balance.
- Coordinate vs. physical acceleration
- Speed and Velocity
| http://www.newworldencyclopedia.org/entry/Acceleration | 13
83 | Science Fair Project Encyclopedia
A space elevator, also known as a space bridge or an orbital elevator, is in a class of spacecraft propulsion technology concepts that is aimed at improving access to space. A space elevator connects a planet's surface with space via a cable. It is also called a geosynchronous orbital tether or a beanstalk (in reference to the fairy tale Jack and the Beanstalk). It is one kind of skyhook.
A space elevator on Earth could permit sending objects and astronauts to space much more often, and at costs only a fraction of those associated with current means. Constructing one would, however, be a vast project, and the elevator would have to be built of a material that could endure tremendous stress while also being light-weight, cost-effective, and manufacturable. A considerable number of other novel engineering problems would also have to be solved to make a space elevator practical. Today's technology does not meet these requirements. A potentially fatal problem, catastrophic cascading fiber breakage, has apparently not been addressed in the literature (see below, "Meteoroids and micrometeorites"). However, optimists say that we could develop the necessary technology by 2008 and finish building the first space elevator by 2018.
Physics and structure
There are a variety of space elevator designs. Almost every design includes a base station, a cable, climbers, and a counterweight.
The base station designs typically fall into two categories: mobile and stationary. Mobile stations are typically large oceangoing vessels. Stationary platforms are generally located in high-altitude locations.
Mobile platforms have the advantage of being able to maneuver to avoid high winds and storms. While stationary platforms don't have this, they typically have access to cheaper and more reliable power sources, and require a shorter cable. While the decrease in cable length may seem minimal (typically no more than a few kilometers), that can significantly reduce the width of the cable at the center (especially on materials with low tensile strength), and reduce the minimal length of cable reaching beyond geostationary orbit significantly.
The cable must be made of a material with an extremely high tensile strength/density ratio (the limit to which a material can be stretched without irreversibly deforming divided by its density). A space elevator can be made relatively economically if a cable with a density similar to graphite, with a tensile strength of ~65–120 GPa can be produced in bulk at a reasonable price.
By comparison, most steel has a tensile strength of under 1 GPa, and the strongest steels no more than 5 GPa, but steel is heavy. The much lighter material Kevlar has a tensile strength of 2.6–4.1 GPa, while quartz fiber can reach upwards of 20 GPa; the tensile strength of diamond filaments would theoretically be minimally higher.
Carbon nanotubes have exceeded all other materials and appear to have a theoretical tensile strength and density that is well within the desired range for space elevator structures, but the technology to manufacture bulk quantities and fabricate them into a cable has not yet been developed. While theoretically carbon nanotubes can have tensile strengths beyond 120 GPa, in practice the highest tensile strength ever observed in a single-walled tube is 63 GPa, and such tubes averaged breaking between 30 and 50 GPa. Even the strongest fiber made of nanotubes is likely to have notably less strength than its components. Further research on purity and different types of nanotubes will hopefully improve this number.
Most designs call for single-walled carbon nanotubes. While multi-walled nanotubes may attain higher tensile strengths, they have notably higher mass and are consequently poor choices for building the cable. One potential possibility is to take advantage of the high-pressure interlinking properties of carbon nanotubes of a single variety. While this would cause the tubes to lose some tensile strength by trading sp2 bonds (graphite, nanotubes) for sp3 bonds (diamond), it would enable them to be held together in a single fiber by more than the usual, weak Van der Waals force (VdW), and allow manufacturing of a fiber of any length.
The technology to spin regular VdW-bonded yarn from carbon nanotubes is just in its infancy: the first success to spin a long yarn as opposed to pieces of only a few centimeters has been reported only very recently; but the strength/weight ratio was worse than Kevlar due to inconsistent type construction and short tubes being held together by VdW. (March 2004).
Note that at present (March 2004), carbon nanotubes have an approximate price higher than gold at $100/gram, and 20 million grams would be necessary to form even a seed elevator. This price is decreasing rapidly, and large-scale production would reduce it further, but the price of suitable carbon nanotube cable is anyone's guess at this time.
The cable material is an area of fierce worldwide research, the applications of successful material go much further than space elevators; this is good for space elevators because it is likely to push down the price of the cable material further. Other suggested application areas include suspension bridges, new composite materials, better rockets, lighter aircraft etc. etc.
Due to its enormous length a space elevator cable must be carefully designed to carry its own weight as well as the smaller weight of climbers. In an ideal cable the stress would be constant throughout the whole length, which means at each point tapering the cable in proportion to the total weight of the cable below.
Using a model that takes into account the Earth's gravitational and centrifugal forces (and neglecting the smaller Sun and Lunar effects), it is possible to show that the cross-sectional area of the cable as a function of height looks like this:
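The equation itself is missing from this copy of the article. With the constants defined below, the standard constant-stress taper profile (obtained by integrating dA/A = (ρ/s)(g₀r₀²/r² − ω²r) dr, and consistent with the published space-elevator literature) is

A(r) = A₀ exp[ (ρ/s) ( g₀r₀(1 − r₀/r) − (ω²/2)(r² − r₀²) ) ]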
Where A(r) is the cross-sectional area as a function of distance r from the earth's center.
The constants in the equation are:
- A0 is the cross-sectional area of the cable on the earth's surface.
- ρ is the density of the material the cable is made out of.
- s is the tensile strength of the material.
- ω is the rotational frequency of the earth about its axis, 7.292 × 10⁻⁵ radians per second.
- r0 is the distance between the earth's center and the base of the cable. It is approximately the earth's equatorial radius, 6378 km.
- g0 is the acceleration due to gravity at the cable's base, 9.780 m/s².
This equation gives a shape where the cable thickness initially increases rapidly in an exponential fashion, but slows at an altitude a few times the earth's radius, and then gradually becomes parallel when it finally reaches maximum thickness at geosynchronous orbit. The cable thickness then decreases again out from geosynchronous orbit.
Thus the taper of the cable from base to GEO (r = 42,164 km),
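The expression has been lost in this copy; evaluating the profile above at the geostationary radius r_g = 42,164 km gives the taper ratio A(r_g)/A₀ = exp[ (ρ/s) ( g₀r₀(1 − r₀/r_g) − (ω²/2)(r_g² − r₀²) ) ], which is the quantity evaluated for steel in the next paragraph.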
Using the density and tensile strength of steel, and assuming a diameter of 1 cm at ground level yields a diameter of several hundred kilometers (!) at geostationary orbit height, showing that steel, and indeed most materials used in present day engineering, are unsuitable for building a space elevator.
The equation shows us that there are four ways of achieving a more reasonable thickness at geostationary orbit:
- Using a lower density material. Not much scope for improvement as the range of densities of most solids that come into question is rather narrow, somewhere between 1000 and 5000 kg/m³
- Using a higher strength material. This is the area where most of the research is focussed. Carbon nanotubes are tens of times stronger than the strongest types of steel, hugely reducing the cable's cross-sectional area at geostationary orbit.
- Increasing the height of a tip of the base station, where the base of cable is attached. The exponential relationship means a small increase in base height results in a large decrease in thickness at geostationary level. Towers of up to 100 km high have been proposed. Not only would a tower of such height reduce the cable mass, it would also avoid exposure of the cable to atmospheric processes.
- Making the cable as thin as possible at its base. It still has to be thick enough to carry a payload however, so the minimum thickness at base level also depends on tensile strength. A cable made of carbon nanotube would typically be just a millimeter wide at the base.
A space elevator cannot be an elevator in the typical sense (with moving cables) due to the need for the cable to be significantly wider at the center than the tips at all times. While designs employing smaller, segmented moving cables along the length of the main cable have been proposed, most cable designs call for the "elevator" to climb up the cable.
Climbers cover a wide range of designs. On elevator designs whose cables are planar ribbons, some have proposed to use pairs of rollers to hold the cable with friction. Other climber designs involve moving arms containing pads of hooks, rollers with retracting hooks, magnetic levitation (unlikely due to the bulky track required on the cable), and numerous other possibilities.
Power is a significant obstacle for climbers. Energy storage densities, barring significant advances in compact nuclear power, are unlikely to ever be able to store the energy for an entire climb in a single climber without making it weigh too much. Some solutions have involved laser or microwave power beaming. Others have gained part of their energy through regenerative braking of down-climbers passing energy to up-climbers as they pass, magnetospheric braking of the cable to dampen oscillations, tropospheric heat differentials in the cable, ionospheric discharge through the cable, and other concepts. The primary power methods (laser and microwave power beaming) have significant problems with both efficiency and heat dissipation on both sides, although with optimistic numbers for future technologies, they are feasible.
Climbers must be paced at optimal timings so as to minimize cable stress, oscillations, and maximize throughput. The weakest point of the cable is near its planetary connection; new climbers can typically be launched so long as there are not multiple climbers in this area at once. An only-up elevator can handle a higher throughput, but has the disadvantage of not allowing energy recapture through regenerative down-climbers. Additionally, as one cannot "leap out of orbit", an only-up elevator would require another method to let payloads/people get rid of their orbital energy, such as conventional rockets. Finally, only-up climbers that don't return to earth must be disposable; if used, they should be modular so that their components can be used for other purposes in geosynchronous orbit. In any case, smaller climbers have the advantage over larger climbers of giving better options for how to pace trips up the cable, but may impose technological limitations.
There have been two dominant methods proposed for dealing with the counterweight need: a heavy object, such as a captured asteroid, positioned past geosynchronous orbit; and extending the cable itself well past geosynchronous orbit. The latter idea has gained more support in recent years due to the simplicity of the task and the ability of a payload that travels to the end of the counterweight-cable to be flung off as far as Saturn (and farther using gravitational assists from planets).
Launching into outer space
As a payload is lifted up a space elevator, it gains not only altitude but angular momentum as well. This angular momentum is taken from Earth's own rotation. As the payload climbs it "drags" on the cable, causing it to tilt very slightly to the west (lagging behind slightly on the Earth's rotation). The horizontal component of the tension in the cable applies a tangental pull on the payload, accelerating it eastward. Conversely, the cable pulls westward on Earth's surface, insignificantly slowing it. The opposite process occurs for payloads descending the elevator, tilting the cable eastwards and very slightly increasing Earth's rotation speed. In both cases the centrifugal force acting on the cable's counterweight causes it to return to a vertical orientation, transferring momentum between Earth and payload in the process.
We can determine the velocities that might be attained at the end of Pearson's 144,000 km tower (or cable). At the end of the tower, the tangential velocity is 10.93 kilometers per second which is more than enough to escape Earth's gravitational field and send probes as far out as Saturn. If an object were allowed to slide freely along the upper part of the tower a velocity high enough to escape the solar system entirely would be attained. This is accomplished by trading off overall angular momentum of the tower (and the Earth) for velocity of the launched object, in much the same way one snaps a towel or throws a lacrosse ball.
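As a rough check (not in the original): the tip of a cable co-rotating with Earth moves at v = ωr, and taking r as the distance from Earth's centre to the tip, roughly 1.5 × 10⁵ km, gives v ≈ 7.292 × 10⁻⁵ s⁻¹ × 1.5 × 10⁸ m ≈ 11 km/s, consistent with the quoted 10.93 km/s.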
For higher velocities, the cargo can be electromagnetically accelerated, or the cable could be extended, although that would require additional strength in the cable.
A space elevator could also be constructed on some of the other planets, asteroids and moons.
A Martian tether could be much shorter than one on Earth. Mars' gravity is 38% of Earth's, while it rotates around its axis in about the same time as Earth. Because of this, Martian areostationary orbit is much closer to the surface, and hence the elevator would be much shorter. Exotic materials might not be required to construct such an elevator.
A lunar space elevator would need to be very long—more than twice the length of an Earth elevator, but due to the low gravity of the moon, can be made of existing engineering materials.
Rapidly spinning asteroids or moons could use cables to eject materials in order to move the materials to convenient points, such as Earth orbits; or conversely, to eject materials in order to send the bulk of the mass of the asteroid or moon to Earth orbit or a Lagrangian point. This was suggested by Russell Johnston in the 1980s. Freeman Dyson has suggested using such smaller systems as power generators at points distant from the Sun where solar power is uneconomical.
The construction of a space elevator would be a vast project, requiring advances in engineering and physical technology. NASA has identified "Five Key Technologies for Future Space Elevator Development":
- Material for cable (e.g. carbon nanotube and nanotechnology) and tower
- Tether deployment and control
- Tall tower construction
- Electromagnetic propulsion (e.g. magnetic levitation)
- Space infrastructure and the development of space industry and economy
Two different ways to deploy a space elevator have been proposed.
One early plan involved lifting the entire mass of the elevator into geosynchronous orbit, and simultaneously lowering one cable downwards towards the Earth's surface while another cable is deployed upwards directly away from the Earth's surface. Tidal forces (gravity and centrifugal force) would naturally pull the cables directly towards and directly away from the Earth and keeps the elevator balanced around geosynchronous orbit.
However, this approach requires lifting hundreds or even thousands of tons on conventional rockets. This would be very expensive.
Brad Edwards' proposal
Brad Edwards, Director of Research for the Institute for Scientific Research (ISR), based in Fairmont, West Virginia, is a leading authority on the space elevator concept. He proposes that a single hairlike 20 short ton (18 metric ton) 'seed' cable be deployed in the traditional way, giving a very lightweight elevator with very little lifting capacity.
Then, progressively heavier cables would be pulled up from the ground along it, repeatedly strengthening it until the elevator reaches the required mass and strength. This is much the same technique used to build suspension bridges.
Although 20 short tons for a seed cable may sound a lot, it would actually be incredibly lightweight—the proposed average mass is about 0.2 kilogram per kilometer. (The pair of copper telephone wires running to your house weigh about 4 kg/km). Twenty tons is slightly less than a Russian geosynchronous communication satellite.
Failure modes and safety issues
As with any structure there are a number of ways in which things could go wrong. A space elevator would present a considerable navigational hazard, both to aircraft and spacecraft. Aircraft could be dealt with by means of simple air-traffic control restrictions, but spacecraft are a more difficult problem.
If nothing were done, essentially all satellites with perigees below the top of the elevator will eventually collide. Twice per day, each orbital plane intersects the elevator, as the rotation of the Earth swings the cable around the equator. Usually the satellite and the cable will not line up. However, eventually, except for synchronized orbits, the elevator and satellite will be in the same place at the same time and there will be a disaster.
Most active satellites are capable of some degree of orbital maneuvering and could avoid these predictable collisions, but inactive satellites and other orbiting debris would need to be either preemptively removed from orbit by "garbage collectors" or would need to be closely watched and nudged whenever their orbit approaches the elevator. The impulses required would be small, and need be applied only very infrequently; a laser broom system may be sufficient to this task. In addition, Brad Edward's design actually allows the elevator to move out of the way, because the fixing point is at sea and mobile. Further, transverse oscillations of the cable could be controlled so as to ensure that the cable avoids satellites on known paths -- the required amplitudes are modest, relative to the cable length.
Meteoroids and micrometeorites
Meteoroids present a more difficult problem, since they would not be predictable and much less time would be available to detect and track them approaching Earth. It is likely that a space elevator would still suffer impacts of some kind, no matter how carefully it is guarded. However, most space elevator designs call for the use of multiple parallel cables separated from each other by struts, with sufficient margin of safety that severing just one or two strands still allows the surviving strands to hold the elevator's entire weight while repairs are performed. If the strands are properly arranged, no single impact would be able to sever enough of them to overwhelm the surviving strands.
Far worse than meteoroids are micrometeorites; tiny high-speed particles found in high concentrations at certain altitudes. Avoiding micrometeorites is essentially impossible, and they will ensure that strands of the elevator are continuously being cut. Most methods designed to deal with this involve a design similar to a hoytether or to a network of strands in a cylindrical or planar arrangement with two or more helical strands. Creating the cable as a mesh instead of a ribbon helps prevent collateral damage from each micrometeorite impact.
It is not enough, however, that other fibers be able to take over the load of a failed strand — the system must also survive the immediate, dynamical effects of fiber failure, which generates projectiles aimed at the cable itself. For example, if the cable has a working stress of 50 GPa and a Young's modulus of 1000 GPa, its strain will be 0.05 and its stored elastic energy will be 1/2 × 0.05 × 50 GPa = 1.25×109 joules per cubic meter. Breaking a fiber will result in a pair of de-tensioning waves moving apart at the speed of sound in the fiber, with the fiber segments behind each wave moving at over 1,000 m/s (more than the muzzle velocity of an M16 rifle). Unless these fast-moving projectiles can be stopped safely, they will break yet other fibers, initiating a failure cascade capable of severing the cable. The challenge of preventing fiber breakage from initiating a catastrophic failure cascade seems to be unaddressed in the current (January, 2005) literature on terrestrial space elevators. Problems of this sort would be easier to solve in lower-tension applications (e.g., lunar elevators).
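As a rough check on the quoted recoil speed (assuming a fiber density of about 1,300 kg/m³, a value not stated in the article): the snap-back velocity of a released fiber end is roughly the strain times the speed of sound in the fiber, v ≈ ε√(E/ρ) ≈ 0.05 × √(10¹² Pa / 1300 kg·m⁻³) ≈ 1,400 m/s, in line with the figure of over 1,000 m/s.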
Corrosion is a major risk to any thinly built tether (which most designs call for). In the upper atmosphere, atomic oxygen steadily eats away at most materials. A tether will consequently need to either be made from a corrosion-resistant material or have a corrosion-resistant coating, adding to weight. Gold and platinum have been shown to be practically immune to atomic oxygen; several far more common materials such as aluminum are damaged very slowly and could be repaired as needed.
In the atmosphere, the risk factors of wind and lightning come into play. The basic mitigation is location. As long as the tether's anchor remains within two degrees of the equator, it will remain in the quiet zone between the Earth's Hadley cells, where there is relatively little violent weather. Remaining storms could be avoided by moving a floating anchor platform. The lightning risk can be minimized by using a nonconductive fiber with a water-resistant coating to help prevent a conductive buildup from forming. The wind risk can be minimized by use of a fiber with a small cross-sectional area that can rotate with the wind to reduce resistance.
Sabotage is a relatively unquantifiable problem. Elevators are probably less susceptible than suspension bridges carrying mass vehicular traffic, of which there are many worldwide, none of which have been destroyed. Nonetheless there are few more spectacular possible targets: no terrorist act in history has approached the potential destruction caused by the carefully-targeted sabotage of a space elevator. Concern over sabotage may have an effect on location, since what would be required would be not only an equatorial site but also one outside the range of unstable territories.
A final risk of structural failure comes from the possibility of vibrational harmonics within the cable. Like the shorter and more familiar strings of stringed instruments, the cable of a space elevator has a natural resonance frequency. If the cable is excited at this frequency, for example by the travel of elevators up and down it, the vibrational energy could build up to dangerous levels and exceed the cable's tensile strength. This can be avoided by the use of intelligent damping systems within the cable, and by scheduling travel up and down the cable keeping its resonant frequency in mind. It may be possible to do damping against Earth's magnetosphere, which would additionally generate electricity that could be passed to the climbers. Oscillations can be either linear or rotational.
In the event of failure
If despite all these precautions the elevator is severed anyway, the resulting scenario depends on where exactly the break occurred.
Cut near the anchor point
If the elevator is cut at its anchor point on Earth's surface, the outward force exerted by the counterweight would cause the entire elevator to rise upward into a stable orbit. This is because a space elevator must be kept in tension, with greater centrifugal force pulling outward than gravitational force pulling inward, or any additional payload added at the elevator's bottom end would pull the entire structure down.
The ultimate altitude of the severed lower end of the cable would depend on the details of the elevator's mass distribution. In theory, the loose end might be secured and fastened down again. This would be an extremely tricky operation, however, requiring careful adjustment of the cable's center of gravity to bring the cable back down to the surface again at just the right location. It may prove to be easier to build a new system in such a situation.
Cut at about 25,000 km
If the break occurred at higher altitude, up to about 25,000 km, the lower portion of the elevator would descend to Earth and drape itself along the equator while the now unbalanced upper portion would rise to a higher orbit. Some authors have suggested that such a failure would be catastrophic, with the thousands of kilometers of falling cable creating a swath of meteoric destruction along Earth's surface, but such damage is not likely considering the relatively low density the cable as a whole would have. The risk can be further reduced by triggering some sort of destruct mechanism in the falling cable, breaking it into smaller pieces. In most cable designs, the upper portion of the cable that fell to earth would burn up in the atmosphere. Because proposed initial cables (the only ones likely to be broken) are very light and flat, the bottom portion would likely settle to Earth with less force than a sheet of paper due to air resistance on the way down.
If the break occurred at the counterweight side of the elevator, the lower portion, now including the "central station" of the elevator, would fall in its entirety unless prevented by an early self-destruct of the cable shortly below the break. Depending on its size, however, it would largely burn up on reentry anyway.
Any elevator pods on the falling section would also reenter Earth's atmosphere, but the pods will likely already have been designed to withstand such an event as an emergency measure anyway. It is almost inevitable that some objects (elevator pods, structural members, repair crews, and so on) will accidentally fall off the elevator at some point. Their subsequent fate would depend upon their initial altitude. Except at geosynchronous altitude, an object attached to a space elevator is not in a stable orbit, so once released its trajectory will not remain alongside the elevator. The object will instead enter an elliptical orbit, the characteristics of which depend on where the object was on the elevator when it was released.
If the initial height of the object falling off of the elevator is less than about 23,000 km, its orbit will have an apogee at the altitude where it was released from the elevator and a perigee within Earth's atmosphere — it will intersect the atmosphere within a few hours, and not complete an entire orbit. Above this critical altitude, the perigee is above the atmosphere and the object will be able to complete a full orbit to return to the altitude it started from. By then the elevator would be somewhere else, but a spacecraft could be dispatched to retrieve the object or otherwise remove it. The lower the altitude at which the object falls off, the greater the eccentricity of its orbit.
If the object falls off at the geostationary altitude itself, it will remain nearly motionless relative to the elevator just as in conventional orbital flight. At higher altitudes the object would again wind up in an elliptical orbit, this time with a perigee at the altitude the object was released from and an apogee somewhere higher than that. The eccentricity of the orbit would increase with the altitude from which the object is released.
Above 47,000 km, however, an object that falls off of the elevator would have a velocity greater than the local escape velocity of Earth. The object would head out into interplanetary space, and if there were any people present on board it may prove impossible to rescue them.
All of these altitudes are given for an Earth-based space elevator; a space elevator serving a different planet or moon would have different critical altitudes where each of these scenarios would occur.
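The critical altitudes quoted above can be checked with a simple two-body sketch: an object released from the elevator starts with the elevator's rotational velocity and then follows a Keplerian orbit. The Python sketch below is not from the article; it uses standard Earth constants, and the 100 km "top of the atmosphere" cutoff and the sample release altitudes are arbitrary choices for illustration.

```python
import math

# Fate of an object released from an Earth space elevator: at release it moves
# with the elevator (Earth's rotation rate), then follows a Keplerian orbit.

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
OMEGA = 7.2921159e-5       # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.378e6          # equatorial radius, m
ATMOSPHERE = 100e3         # nominal top of the atmosphere, m (assumption)

def fate(release_altitude_m):
    r = R_EARTH + release_altitude_m
    v = OMEGA * r                          # tangential speed at release
    energy = v**2 / 2 - MU / r             # specific orbital energy
    if energy >= 0:
        return "escapes Earth"
    a = -MU / (2 * energy)                 # semi-major axis of the new orbit
    h = r * v                              # specific angular momentum
    e = math.sqrt(max(0.0, 1 + 2 * energy * h**2 / MU**2))
    perigee_altitude = a * (1 - e) - R_EARTH
    if perigee_altitude < ATMOSPHERE:
        return "reenters within a few hours"
    return f"stays in orbit (perigee about {perigee_altitude / 1e3:.0f} km)"

for alt_km in (10_000, 22_000, 25_000, 35_786, 47_000, 50_000):
    print(f"released at {alt_km:>6} km altitude: {fate(alt_km * 1e3)}")
```

Running this reproduces the pattern described: low releases reenter, releases above roughly 23,000–24,000 km stay in elliptical orbits, geostationary altitude is (nearly) circular, and above roughly 47,000 km the object escapes.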
Van Allen Belts
The space elevator runs through the Van Allen Belts. This is not a problem for most freight, but the amount of time a climber spends in this region would cause radiation sickness to any unshielded human or other living things.
Some people speculate that passengers and other living things will continue to travel by high-speed rocket while the space elevator hauls bulk cargo. Research into lightweight shielding and techniques for clearing out the belts is underway. An elevator could carry passenger cars with heavy lead or other shielding; however, for the thin cable of an initial elevator that would reduce its overall capacity. This becomes less of a problem later, once the cable has been thickened.
However, the shielding itself can in some cases consist of useful payload: for example food, water, supplies, fuel or construction/maintenance materials, and no additional shielding costs are then incurred on the way up.
More conventional and faster reentry techniques such as aerobraking might be employed on the way down to minimize radiation exposure. Deorbit burns use relatively little fuel, and so can be cheap.
Economics
Main article: space elevator economics
With a space elevator, materials could be sent into orbit at a fraction of the current cost. Modern rocketry gives prices that are on the order of thousands of U.S. dollars per kilogram for transfer to low earth orbit, and roughly 20 thousand dollars per kilogram for transfer to geosynchronous orbit. For a space elevator, the price could be on the order of a few hundreds of dollars per kilogram.
Space elevators have high capital cost but low operating expenses, so they make the most economic sense in situations where they would be used over a long period of time to handle very large amounts of payload. The current launch market may not be large enough to make a compelling case for a space elevator, but a dramatic drop in the price of launching material to orbit would likely result in new types of space activities becoming economically feasible. In this regard they share similarities with other transportation infrastructure projects such as highways or railroads.
Development costs might be roughly equivalent, in modern dollars, to the cost of developing the shuttle system. A question subject to speculation is whether a space elevator would return the investment, or if it would be more beneficial to instead spend the money on developing rocketry further.
One potential problem with a space elevator would be the issue of ownership and control. Such an elevator would require significant investment (estimates start at about $5 billion for a very primitive tether), and it could take at least a decade to recoup such expenses. At present, only governments are able to spend that sort of money in the space industry.
Assuming a multi-national governmental effort was able to produce a working space elevator, many delicate political issues would remain to be solved. Which countries would use the elevator and how often? Who would be responsible for its defense from terrorists or enemy states? A space elevator would allow for easy deployment of satellites into orbit, and it is becoming ever more obvious that space is a significant military resource. A space elevator could potentially cause numerous rifts between states over the military applications of the elevator. Furthermore, establishment of a space elevator would require knowledge of the positions and paths of all existing satellites in Earth orbit and their removal if they cannot adequately avoid the elevator.
The U.S. military may covertly oppose a space elevator. By granting inexpensive access to space, a space elevator permits less-wealthy opponents of the U.S. to gain military access to space—or to challenge U.S. control of space. An important U.S. military doctrine is to maintain space and air superiority during a conflict. In the current political climate, concerns over terrorism and homeland security could be possible grounds for more overt opposition to such a project by the U.S. government.
An initial elevator could be used in relatively short order to lift the materials to build more such elevators, but whether this is done and in what fashion the resulting additional elevators are utilized depends on whether the owners of the first elevator are willing to give up any monopoly they may have gained on space access. However, once the technologies are in place, any country with the appropriate resources would most likely be able to create their own elevator.
As space elevators (regardless of the design) are inherently fragile but militarily valuable structures, they would likely be targeted immediately in any major conflict with a state that controls one. Consequently, most militaries would elect to continue development of conventional rockets (or other similar launch technologies) to provide effective backup methods to access space.
The cost of the space elevator is not excessive compared to other projects and it is conceivable that several countries or an international consortium could pursue the space elevator. Indeed, there are companies and agencies in a number of countries that have expressed interest in the concept. Generally, megaprojects need to be either joint public-private partnership ventures or government ventures and they also need multiple partners. It is also possible that a private entity (risks notwithstanding) could provide the financing — several large investment firms have stated interest in construction of the space elevator as a private endeavor. However, from a political standpoint there is a case to be made that the space elevator should be an international effort like the International Space Station with the inevitable rules for use and access.
The political motivation for a collaborative effort comes from the potentially destabilizing nature of the space elevator. The space elevator clearly has military applications, but more critically it would give a strong economic advantage to the controlling entity. Information flowing through satellites, future energy from space, planets full of real estate and associated minerals, and basic military advantage could all potentially be controlled by the entity that controls access to space through the space elevator. An international collaboration could result in multiple ribbons at various locations around the globe, since subsequent ribbons would be significantly cheaper, thus allowing general access to space and consequently eliminating any instabilities a single system might cause. The epilogue of Arthur C. Clarke's The Fountains of Paradise shows an Earth with several space elevators leading to a giant, "circumterran" space station. The analogy with a wheel is evident: the space station itself is the wheel rim, Earth is the axle, and the six equidistant space elevators are the spokes.
While few ordinary citizens may profit from space elevator applications as directly as space agencies, commercial companies and the scientific community, it is highly likely that the general public will ultimately benefit from it through cheap solar power and a greener environment, enhanced satellite navigation and communication services, reduced risk from nuclear waste, and even through improved health, education and social services made possible by the savings governments make in accessing space.
History
The concept of the space elevator first appeared in 1895, when the Russian scientist Konstantin Tsiolkovsky, inspired by the Eiffel Tower in Paris, considered a tower that reached all the way into space. He imagined placing a "celestial castle" at the end of a spindle-shaped cable, with the "castle" orbiting Earth in a geosynchronous orbit (i.e. the castle would remain over the same spot on Earth's surface). The tower would be built from the ground up to an altitude of 35,800 kilometers (geostationary orbit). Comments from Nikola Tesla suggest that he may have also conceived of such a tower. Tsiolkovsky's notes were sent behind the Iron Curtain after his death.
Tsiolkovsky's tower would be able to launch objects into orbit without a rocket. Since the elevator would attain orbital velocity as it rode up the cable, an object released at the tower's top would also have the orbital velocity necessary to remain in geosynchronous orbit.
Building from the ground up, however, proved an impossible task; there was no material in existence with enough compressive strength to support its own weight under such conditions. It took until 1957 for another Russian scientist, Yuri N. Artsutanov, to conceive of a more feasible scheme for building a space tower. Artsutanov suggested using a geosynchronous satellite as the base from which to construct the tower. By using a counterweight, a cable would be lowered from geosynchronous orbit to the surface of Earth while the counterweight was extended from the satellite away from Earth, keeping the center of gravity of the cable motionless relative to Earth. Artsutanov published his idea in the Sunday supplement of Komsomolskaya Pravda in 1960. He also proposed tapering the cable thickness so that the tension in the cable was constant—this gives a thin cable at ground level, thickening up towards GEO.
Making a cable over 35,000 kilometers long is a difficult task. In 1966, four American engineers decided to determine what type of material would be required to build a space elevator, assuming it would be a straight cable with no variations in its cross section. They found that the strength required would be twice that of any existing material including graphite, quartz, and diamond.
In 1975 an American scientist, Jerome Pearson, designed a tapered cross section that would be better suited to building the tower. The completed cable would be thickest at the geosynchronous orbit, where the tension was greatest, and would be narrowest at the tips to reduce the amount of weight that the middle would have to bear. He suggested using a counterweight that would be slowly extended out to 144,000 kilometers (almost half the distance to the Moon) as the lower section of the tower was built. Without a large counterweight, the upper portion of the tower would have to be longer than the lower due to the way gravitational and centrifugal forces change with distance from Earth. His analysis included disturbances such as the gravitation of the Moon, wind, and the movement of payloads up and down the cable. The weight of the material needed to build the tower would have required thousands of Space Shuttle trips, although part of the material could be transported up the tower when a minimum-strength strand reached the ground or be manufactured in space from asteroidal or lunar ore.
Arthur C. Clarke introduced the concept of a space elevator to a broader audience in his 1978 novel The Fountains of Paradise, in which engineers construct a space elevator on top of a mountain peak (Adam's Peak) in the equatorial island of Taprobane (the Discoveries-era name for Sri Lanka).
David Smitherman of NASA/Marshall's Advanced Projects Office has compiled plans for such an elevator that could turn science fiction into reality. His publication, "Space Elevators: An Advanced Earth-Space Infrastructure for the New Millennium", is based on findings from a space infrastructure conference held at the Marshall Space Flight Center in 1999.
Another American scientist, Bradley Edwards, suggests creating a 100,000 km long paper-thin ribbon, which would stand a greater chance of surviving impacts by meteoroids. The work of Edwards has expanded to cover the deployment scenario, climber design, power delivery system, orbital debris avoidance, anchor system, surviving atomic oxygen, avoiding lightning and hurricanes by locating the anchor in the western equatorial Pacific, construction costs, construction schedule, and environmental hazards. Plans are currently being made to complete engineering developments and material development and to begin construction of the first elevator. Funding to date has been through a grant from the NASA Institute for Advanced Concepts. Future funding is sought through NASA, the United States Department of Defense, and private and public sources. The largest holdup to Edwards' proposed design is the technological limit of the tether material. His calculations call for a fiber composed of epoxy-bonded carbon nanotubes with a minimum tensile strength of 130 GPa; however, tests in 2000 of individual single-walled carbon nanotubes (SWCNTs), which should be notably stronger than an epoxy-bonded rope, measured the strongest at 63 GPa.
Space elevator proponents are planning competitions for space elevator technologies, similar to the Ansari X Prize. Elevator:2010 will organize annual competitions for climbers, ribbons and power-beaming systems. The Robolympics Space Elevator Ribbon Climbing event organizes climber-robot building competitions.
Space elevators in fiction
Note: Some depictions were made before the space elevator concept became known.
- 2300 AD, role-playing game by Game Designers' Workshop
- 3001: The Final Odyssey, novel by Arthur C. Clarke
- Assassin Gambit, novel by William Forstchen
- Friday, novel by Robert A. Heinlein
- Gunnm, manga by Yukito Kishiro
- Halo 2's New Mombasa (in equatorial East Africa, AD 2552) contains a space elevator in its skyline
- Hothouse, novel by Brian Aldiss
- Jack and the Beanstalk, fairy tale
- Jovian Chronicles, role playing game by Dream Pod 9 which includes a Martian space elevator
- Jumping Off the Planet, novel by David Gerrold
- Kurau: Phantom Memory, anime series
- "Great Escape", the 5th episode of Starship Operators, a 2004 Japanese anime series, briefly depicts an orbital elevator ride on a fictitious planet
- Mystery Science Theater 3000, television series "The Umbilicus"
- Sid Meier's Alpha Centauri, strategy computer game
- Civilization: Call to Power, strategy computer game
- The Adventure Company's The Moment of Silence, adventure computer game that lets you ride a space elevator based in New York City
- Star Trek: Voyager season 3 episode 19 "Rise"
- Strata, one of Terry Pratchett's two solely science fiction novels
- The End of the Empire, novel by Alexis A. Gilliland
- The Fountains of Paradise, novel by Arthur C. Clarke
- The Gothic Empire, book of Nemesis the Warlock comic strip by Pat Mills
- The Mars trilogy of novels by Kim Stanley Robinson
- The Web Between the Worlds, novel by Charles Sheffield
- Zavtra Nastupit Vechnost, novel by Russian sci-fi writer Alexander Gromov
- Rainbow Mars, novel by Larry Niven with a beanstalk on Mars and Earth
Related topics
- Space elevator economics discusses capital and maintenance costs of a space elevator.
- Lunar space elevator for the (far) more easily built moon variant
- A space elevator is a type of skyhook.
- Skyhooks are a type of tether propulsion.
- Tether propulsion is a type of spacecraft propulsion.
- The space elevator passes through the Van Allen radiation belt.
- The space elevator is a geosynchronous satellite anchored at the equator, which means it is in geostationary orbit.
- Another type of space elevator that does not rely on materials with high tensile strength for support is the space fountain, a tower supported by interacting with a high-velocity stream of magnetic particles accelerated up and down through it by mass drivers. Since a space fountain is not in orbit, a space fountain can be of any height and placed at any latitude. Unlike space elevators, space fountains require a continuous supply of power to remain aloft.
References and external links
- Edwards BC, Westling EA. The Space Elevator: A Revolutionary Earth-to-Space Transportation System. San Francisco, USA: Spageo Inc.; 2002. ISBN 0972604502.
- Space Elevators - An Advanced Earth-Space Infrastructure for the New Millennium [PDF]. A conference publication based on findings from the Advanced Space Infrastructure Workshop on Geostationary Orbiting Tether "Space Elevator" Concepts, held in 1999 at the NASA Marshall Space Flight Center, Huntsville, Alabama. Compiled by D.V. Smitherman, Jr., published August 2000.
- "The Political Economy of Very Large Space Projects" HTML PDF, John Hickman, Ph. D. Journal of Evolution and Technology Vol. 4 - November 1999.
- The Space Elevator NIAC report by Dr. Bradley C. Edwards
- Ziemelis K. "Going up". In New Scientist 2001-05-05, no.2289, p.24-27. Republished in SpaceRef. Title page: "The great space elevator: the dream machine that will turn us all into astronauts."
- The Space Elevator Comes Closer to Reality. An overview by Leonard David of space.com, published 27 March 2002.
- Krishnaswamy, Sridhar. Stress Analysis — The Orbital Tower (PDF)
- LiftPort Group - The Space Elevator Companies founded by Michael Laine
- The National Space Society Chapter for Space Elevator (NSECC)
- Elevator:2010 Space elevator prize competitions
- Space elevator, Institute for Scientific Research
- The Space Elevator: 3rd Annual International Conference June 28-30, 2004 in Washington, D.C.
- 3rd Annual International Conference Presentations
- 4th Annual International Conference Presentations
- LiftWatch.org - Space Elevator News
- View space elevator animation Windows Media Video (WMV) file - Institute for Scientific Research
- Download space elevator animation Windows Media Video (WMV) file - Institute for Scientific Research
- Brief video (realmedia format) of the space elevator concept
- Liftport Forums - Space Elevator Discussion Forum
- The Space Elevator Reference
- Space Elevator Yahoo Group A discussion list for space elevator related topics
- A major Russian site about space elevators, by Y. Artsutanov and D. Tarabanov
- To the Moon in a Space Elevator? (February 4, 2003 Wired News)
- Liftoff (teenage education): Space Towers
- Audacious & Outrageous: Space Elevators
- Various thoughts on space elevators posted by Blaise Gassend
- There have been accusations of corruption in NIAC's award process.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License (source: http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Space_elevator).
Calculating Mechanical Advantage
Mechanical advantage is the result of using simple machines to reduce the force required to move a load, or to increase the distance a load is moved. It can be thought of as "gain" where a smaller force is converted to a larger force by increasing the distance it is acted over, or a smaller distance is converted to a larger distance by increasing the force that is applied over the shorter distance.
Moving a load from one place to another requires a fixed amount of work. Work is defined as Force times distance moved.
W = F x D
In this equation we see that as distance increases, force decreases. Typically when machines are used to reduce the force required to move a load, it is accomplished by increasing the distance. For example, to elevate a load from one elevation to another with the minimum distance traveled you must lift it vertically. This straight vertical lift requires the maximum force as the distance is minimized. Using an inclined plane to change the elevation increases the distance the load moves to reach the desired elevation, thus reducing the force required. This reduction of force is referred to as mechanical advantage. We can think of it like this where A = the mechanical advantage:
W = (1/A)F x AD
We have basically multiplied the right side of the equation by A/A which equals one: Force has decreased and distance has increased, but they are scaled to each other so work is preserved.
If we use no mechanical advantage the force is not reduced (A=1), so we call this a mechanical advantage of 1, or "unity gain". (It is important to note that "no mechanical advantage" is not where A=0; it is where A=1, unity gain, where force in equals force out.) If we double the distance, thereby reducing the force by half (A=2), we call this a mechanical advantage of 2.
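As a quick illustration of this bookkeeping, the following Python sketch (with made-up numbers and an illustrative function name) scales the force down and the distance up by the same factor A and shows that the work W = F x D is unchanged:

```python
# Made-up numbers illustrating the W = F x D bookkeeping: a mechanical
# advantage A divides the force and multiplies the distance by the same
# factor, so the work done on the load is unchanged.

def effort_required(load_force, load_distance, advantage):
    """Return (effort force, effort distance) for mechanical advantage A."""
    effort_force = load_force / advantage         # force is reduced by A ...
    effort_distance = load_distance * advantage   # ... distance grows by A
    return effort_force, effort_distance

load_force, load_distance = 100.0, 0.5            # N and m (made up)
work_on_load = load_force * load_distance         # 50 J either way

for A in (1, 2, 4):
    f, d = effort_required(load_force, load_distance, A)
    print(f"A = {A}: push with {f:5.1f} N over {d:.2f} m -> work = {f * d:.1f} J")
```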
Here are specific procedures to calculate your mechanical advantage:
Lever
A lever can be used to reduce the force needed to move a load a particular distance, or to move a load a larger distance for a particular force. This can be measured in several ways, all of which are ultimately a function of the ratio of lever arm segment lengths (the equations for this depend on the type of lever you're using). Typically the mechanical advantage is calculated as a ratio of forces, where A = mechanical advantage:
A = (force of load) / (force of effort)
It can also be measured as a ratio of displacements:
A = (displacement of load) / (displacement of effort)
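For a concrete (hypothetical) case, the sketch below works through a simple first-class lever with made-up arm lengths, ignoring friction and the weight of the lever itself; it shows the force ratio and the displacement ratio giving the same A:

```python
# Hypothetical first-class lever (made-up values), friction and lever weight
# ignored.  A = effort arm / load arm, which matches both the force ratio and
# the displacement ratio given above.

load_force = 120.0      # N, weight of the load
effort_arm = 0.60       # m, pivot-to-effort distance
load_arm = 0.15         # m, pivot-to-load distance

A = effort_arm / load_arm                     # = 4
effort_force = load_force / A                 # 30 N at the handle

load_displacement = 0.02                      # m; if the load rises 2 cm ...
effort_displacement = load_displacement * A   # ... the handle moves 8 cm

print(f"A = {A:.1f}")
print(f"effort force        = {effort_force:.1f} N")
print(f"effort displacement = {effort_displacement:.2f} m")
```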
Inclined Plane
With the inclined plane you are doing two things. You're increasing the distance the load travels to do the same work (ignoring friction). You're also dividing the force on the object into vertical and horizontal components: the force applied to the object moving diagonally upwards is a force vector with both a horizontal and a vertical component. We will use the following formula, where A = mechanical advantage:
A = (length of inclined surface) / (height of incline)
Note that a straight vertical lift makes this fraction equal to 1, which denotes a 1-to-1 relationship between the force on the object and the vertical load; so with "no mechanical advantage" the result is 1. Keep in mind that the length of the inclined surface is the hypotenuse of the triangle created (the length of the actual surface the load travels on); it is not the length of the base of the triangle.
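Here is a small Python sketch of the inclined-plane calculation with made-up dimensions; it uses the hypotenuse, not the base, for the length of the inclined surface, and it ignores friction:

```python
import math

# Inclined plane with made-up dimensions, friction ignored.  The ramp length
# is the hypotenuse (the surface the load travels on), not the base.

height = 1.0      # m, vertical rise
base = 3.0        # m, horizontal run
ramp_length = math.hypot(base, height)    # ~3.16 m

A = ramp_length / height                  # ~3.16
weight = 200.0                            # N, load being raised (made up)
push_force = weight / A                   # ~63 N along the ramp

print(f"A = {A:.2f}, force needed along the ramp = {push_force:.1f} N")
```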
Pulley
In order for a pulley system to reduce the force required to lift a load (or, in a horizontal example, perhaps to compress or extend a spring), the length of pull on the string must be increased. With pulleys this can be accomplished by multiple passes through multiple-sheaved pulleys, which increases the length of pull required to move the load a fixed distance. Therefore, the simplest way to determine mechanical advantage is as a ratio of the length of pull to the displacement of the load:
A = (length of pull) / (displacement of load)
If your load needs to move 10 cm and you set up your pulley system so that it pulls 30 cm of string to move the load 10 cm, then you have a mechanical advantage of 3. If you simply use a single pulley where the length of pull is equal to the displacement of the load, you have a mechanical advantage of one, which is considered no advantage.
This can also be measured as a ratio of the force of pull on the string and the force exerted on the object, but in all likelihood this is going to be impossible to measure in your machine on event day, so be sure we can determine your mechanical advantage using the pull length/displacement formula.
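The following Python sketch (made-up numbers, sheave friction ignored, illustrative function name) applies the pull-length/displacement formula and then estimates the ideal pull force from it:

```python
# Pulley bookkeeping using the pull-length / load-displacement ratio above.
# Values are made up and friction in the sheaves is ignored.

def pulley_advantage(pull_length, load_displacement):
    return pull_length / load_displacement

# 30 cm of string pulled to raise the load 10 cm -> A = 3
A = pulley_advantage(pull_length=0.30, load_displacement=0.10)

load_weight = 90.0                    # N (made up)
ideal_pull_force = load_weight / A    # 30 N in the ideal (frictionless) case

print(f"A = {A:.1f}, ideal pull force = {ideal_pull_force:.1f} N")
```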
For the 2007 competition the rules essentially require a mechanical advantage greater than 1 for the lighter can to lift the heavier can. To calculate the minimum mechanical advantage for this particular case, use the labeled values on the cans for the weight, then simply make a ratio of the load to effort:
m = (load) / (effort) = (weight of lifted can) / (weight of lifting can)
Good luck and Have Fun!
A polynomial equation that can be expressed in the form ax² + bx + c = 0 is called a quadratic equation. Here a, b, c are real numbers, and we must remember that a ≠ 0, since if a = 0 the equation turns into a linear equation instead of a quadratic one. If alpha (α) and beta (β) are the two roots of the quadratic equation, then the sum of the roots is α + β = −(coefficient of x)/(coefficient of x²) = −b/a.
Also, the product of the roots is written as α · β = c/a.
Now, if the roots of the equation are known, we can form the quadratic equation using the following formula:
x² − (sum of roots) · x + (product of roots) = 0
We can find the solution of a quadratic equation by factorization, by completing the square to make a perfect square, and also by the quadratic formula.
Once we learn to use the quadratic formula and to find the value of the discriminant, the nature of the roots can also be determined.
Now, if α and β are the roots of the quadratic equation, it means that putting the value of α or β into the given equation satisfies it.
D = b² − 4ac is the discriminant, the quantity that helps to analyze the type of roots of the equation. If D = 0, the roots are real and equal; if D > 0, the roots are real and unequal; if D < 0, the roots are imaginary (complex).
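The short Python sketch below ties these formulas together for an example equation with arbitrary coefficients: it classifies the roots with the discriminant and checks that the roots satisfy α + β = −b/a and α · β = c/a.

```python
import cmath

# Example solver (arbitrary coefficients) showing the discriminant test and
# the sum/product relations for the roots.

def solve_quadratic(a, b, c):
    assert a != 0, "a = 0 gives a linear equation, not a quadratic one"
    D = b * b - 4 * a * c
    root1 = (-b + cmath.sqrt(D)) / (2 * a)
    root2 = (-b - cmath.sqrt(D)) / (2 * a)
    if D > 0:
        kind = "real and unequal"
    elif D == 0:
        kind = "real and equal"
    else:
        kind = "imaginary (complex)"
    return root1, root2, kind

r1, r2, kind = solve_quadratic(1, -5, 6)       # x^2 - 5x + 6 = 0
print(kind, r1, r2)                            # real and unequal: 3 and 2
print("sum of roots     =", r1 + r2, "(expected -b/a = 5)")
print("product of roots =", r1 * r2, "(expected  c/a = 6)")
```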
An equation whose highest degree equals 2 is known as a quadratic equation. In other words, an equation whose highest power of the variable is a square is said to be a quadratic equation. A quadratic equation can be written as ax² + bx + c = 0, and the quadratic formula is given by:
⇨ x = (−b ± √(b² − 4ac)) / (2a); its alternate form is given by:
⇨ x = 2c / (−b ∓ √(b² − 4ac)).
The square root property can be explained as follows: taking a square root is the mathematical reverse of squaring. Note also that squaring either a positive or a negative number gives a positive number. The square root property states that if a² = b then a = √b or a = −√b, which can also be written as a = ±√b. Here we will possibly have two values.
We are familiar with the formulas for the square of the sum or difference of two numbers, which can be given as:
(a + b)² = a² + b² + 2ab and
(a − b)² = a² + b² − 2ab.
When you have a quadratic expression of the form ax² + bx + c that cannot easily be factorized, you can use a technique called completing the square. To complete the square means rewriting the expression so that it contains a perfect square.
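As a small illustration (an arbitrary example, not tied to any particular textbook problem), the Python sketch below rewrites ax² + bx + c as a(x + h)² + k and checks the identity numerically:

```python
# Rewrites ax^2 + bx + c as a*(x + h)^2 + k and checks the identity at a
# sample point.  The coefficients are an arbitrary example.

def complete_the_square(a, b, c):
    h = b / (2 * a)
    k = c - b * b / (4 * a)
    return h, k            # ax^2 + bx + c == a*(x + h)**2 + k

a, b, c = 1, 6, 2                              # x^2 + 6x + 2
h, k = complete_the_square(a, b, c)
print(f"x^2 + 6x + 2 = (x + {h})^2 + ({k})")   # (x + 3.0)^2 + (-7.0)

x = 1.7                                        # numeric spot check
assert abs((a * x * x + b * x + c) - (a * (x + h) ** 2 + k)) < 1e-9
```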
A quadratic equation is an equation that can be written in the form ax² + bx + c = 0. In real life we come across many practical problems, for example problems involving areas, that lead to such equations.
We can define quadratic equations as polynomial equations in which the highest power of the variable is 2; that is why such an equation is also called a second-degree equation. The general form of a quadratic equation is ax² + bx + c = 0, where x is a variable and a, b, c are constants (also called the quadratic coefficients), and the highest power is two. In this equation there is a condition that a ≠ 0.
Topics covered: The main difference between arc-length and either area or volume; the limit definition of arc-length; approximating errors and their magnitude when we use infinite sums.
Instructor/speaker: Prof. Herbert Gross
This section contains documents that are inaccessible to screen reader software. A "#" symbol is used to denote such documents.
Part III & IV Study Guide (PDF - 23MB)#
Supplementary Notes (PDF - 46MB)#
Blackboard Photos (PDF - 8MB)#
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. Having already studied area and volume and its relationship to calculus, today, we turn our attention to the study of length. And this may seem a bit strange because, intuitively, I think it's fair to assume that you would imagine that length would be simpler than area which, in turn, would be simpler than volume and, hence, that perhaps we should have started with length in the first place. The interesting thing is in terms of our structure, which we so far have called two-dimensional area, three-dimensional area, and which, today, we shall call one-dimensional area, a rather peculiar thing that causes a great deal of difficulty, intellectually speaking, occurs in the study of arc length that does not occur in either the study of area or volume. And I think that we'll start our investigation today leading up to what this really means.
So, as I say, I call today's lesson 'One-dimensional Area', which is arc length. And let's show that there is a parallel, at least in part, between the structure of arc length and the structure of area and volume. You may recall that for area, our initial axiom was that the building block of area was a rectangle. And for volumes, the building block we saw was a cylinder. For arc length—I think it's fairly obvious to guess what we're going to say—the basic building block is a straight line segment.
And so without further ado, that becomes our first rule, our first axiom, axiom number one. We assume that we can measure the length of any straight line segment. That's our building block.
The second axiom that we assume is that the length of the whole equals the sum of the lengths of the parts. In other words, if an arc is broken down into constituent bases, the total arc length is equal to the sum of the arc lengths of the constituent parts. And at this stage, we can say, so far, so good. This still looks like it's going to be the same as area or volume.
But now remember what one of the axioms for both area and volume were, namely, what? That if region 'R' was contained in region 'S', the area or the volume of 'R' was no greater than that of the area or volume of 'S'. However, for arc length, this is not true. It need not be true. I shouldn't say it's not true.
It need not be true that if region 'R' is contained in 'S' that the perimeter of region 'R' is less than or equal to the perimeter of 'S'. In fact, this little diagram that I've drawn over here, I hope will show you what I'm driving at. Notice that it's rather clear that the region 'R' here, which is shaded, is contained inside the region 'S', which is my rectangle. And yet, if you look at the perimeter here, all these finger-shaped things in here, I think it's easy to see that the perimeter of 'R' exceeds the perimeter of 'S'.
And if it's not that easy to see, heck, just make a few more loops inside here and keep wiggling this thing around until you're convinced that you have created this particular situation. All I want you to see here is that it's plausible to you that we cannot talk about lengths by squeezing them, as we did areas and volumes, between regions that we already knew contained the given region and were contained in the given region.
Now let me just pause here for one moment to make sure that we keep one thing straight. We're talking now about an analytical approach to length. In other words, an approach that will allow us to bring to bear all of the power of calculus to the study. I don't want you to forget for a moment that intuitively, we certainly do know what arc length is, just as we intuitively had a feeling for what area and volume were.
Just to freshen our memories on this, remember the intuitive approach. That if you have an arc from 'A' to 'B', the typical way of measuring the arc length is to take, for example, a piece of string, lay it off along the curve from 'A' to 'B'. After you've done this, pick the string up. And then straighten the string out, whatever that means, and measure its length with a ruler.
And we won't worry about how you know whether you're stretching the string too taut or what have you. We'll leave out these philosophic questions. All we'll say is we would like a more objective method that will allow us to use mathematical analysis. And so what we're going to try to do next is to find an analytic way that will allow us to use calculus, but at the same time will give us a definition which agrees with our intuition.
And the first question is how shall we begin. And as so often is the case in mathematics, we begin our new quest by going back to an old way that worked for a previous case. And hopefully, we'll find a way of extending the old situation to cover the new.
Now what does this mean in this particular instance? Well, let me just call it this. I'll call it analytical approach, trial number one. What I'm going to do is try to imitate exactly what we did in the area case. For example, if I take the region 'R', which I'll draw this way here, if this is the region 'R', namely bounded above by the curve 'y' equals 'f of x', below by the x-axis, on the left, by the line 'x' equals 'a', and on the right, by the line 'x' equals 'b', how did we find the area of the region 'R'? Well, what we did is we inscribed and we circumscribed rectangles. And we took the limit of the circumscribed rectangles, et cetera, and put the squeeze on as 'n' went to infinity.
Now the idea is we might get the idea that maybe we should do the same thing for arc length. In other words, let me call one of these little pieces of arc length 'delta w'. In other words, I'm just isolating part of the diagram here. Here's 'delta w'. Here's 'delta x'. Here's 'delta y'.
The idea is in the same way that I approximated a piece of area by an inscribed and a circumscribed rectangle, why can't I say something like, well, let me let 'delta w' be approximately equal to 'delta x'? And just to make sure that our memories are refreshed over here, notice that 'delta x' is just the length of each piece if the segment from 'a' to 'b', namely of length 'b - a', is divided into 'n' equal parts. See, the idea is why can't we mimic the same approach.
And let me point out what is so crucial here in terms of what I mentioned above, namely, notice that the reason that we can say that the area of the region 'R' is just the limit of 'U sub n' as 'n' approaches infinity, where 'U sub n' is the area of the circumscribed rectangles. The only reason we can say that is because we squeezed 'A sub r' between 'L sub n', the inscribed rectangles, and 'U sub n', the circumscribed rectangles. And the limits of these lower bounds and upper bounds were equal. 'A sub 'r was squeezed between these two. Hence, it had to equal the common limit.
That was the structure that we used. On the other hand, we can't use that when we're dealing with arc length. And I'll mention that in a few moments again. But let me just point out what I'm driving at this way.
Suppose we mimic this as we did before. And we say, OK, let the element of arc length, 'delta w', be approximately equal to 'delta x'. And now what I will do is define script 'L' from 'a' to 'b'. I don't want to call it arc length because it may not be.
But as a first approximation, let me define this symbol to be the limit of the sum of all these 'delta x's when we divide this region into 'n' parts as 'n' goes to infinity. Now look, I have the right to make up this particular definition. Now if I compute this limit, what happens? Recall that we mentioned that 'delta x' was 'b - a' over 'n'.
Consequently, if I have 'n' of these pieces, the total sum would be what? 'n' times 'b - a' over 'n'. And 'n' times 'b - a' over 'n' is just 'b - a'. In other words, script 'L' from 'a' to 'b' is defined. And it's 'b - a', not 'w'.
In other words, coming back to our diagram, notice what happened. What we wanted was a recipe that would give us this length here. What we found was a recipe that gave us the length from 'a' to 'b'. Now intuitively, we know that the length from 'a' to 'b' is not the arc length that we're looking for.
In other words, what we defined to be script 'L' existed as a limit, but it gave us an answer which did not coincide with our intuition. Since we intuitively know what the right answer is, we must discard this approach in the sense that it doesn't give us an answer that we have any faith in.
And by the way, notice where we went wrong over here if you want to look at it from that point of view. Notice that when we approximated 'delta w' by 'delta x', it's clear from this diagram that 'delta x' was certainly less than 'delta w'. But notice that we didn't have an upper bound here.
Or we can make speculations like, maybe 'delta x + delta y' would be more than 'delta w', and things like this. We'll talk about that more later. But for now, all I want us to see is the degree of sophistication that enters into the arc length problem that didn't bother us in either the area or the volume problems, namely, we are missing now the all-important squeeze element. Well, no sense crying over spilt milk. We go on, and we try the next type of approach.
In other words, what we sense now is why don't we do this. Instead of approximating 'delta w' by 'delta x', why don't we approximate 'delta w' by the cord that joins the two endpoints of the arc. In other words, I think that we began to suspect intuitively that, somehow or other, for a small change in 'delta x', 'delta s' should be a better approximation to 'delta w' than 'delta x' was.
Of course, the wide open question is granted that it's better, is it good enough. Well, we'll worry about that in a little more detail later. All we're saying is let 'delta w' be approximately equal to 'delta s'.
In other words, we'll approximate 'delta w' by 'delta s'. And we'll now define 'L' from 'a' to 'b', 'L' from 'a' to 'b' to be the limit not of the sum of 'delta x's now, but the sum of the 'delta s's, as 'k' goes from 1 to 'n', taken in the limit as 'n' goes to infinity.
And for those of us who are more familiar with 'delta x's and 'delta y's, and the symbol delta s bothers us, simply observe that by the Pythagorean theorem, 'delta s' is related to 'delta x' and 'delta y' by ''delta s' squared' equals ''delta x' squared' plus ''delta y' squared'. So we can rewrite this in this particular form.
In other words, I will define capital 'L' hopefully to stand for length later on. But we'll worry about that later too. But 'L' from 'a' to 'b' to be this particular limit.
And now I claim that there are three natural questions with which we must come to grips. The first question is does this limit even exist. Does this limit exist? And the answer is that, except for far-fetched curves, it does. You really have to get a curve that wiggles uncontrollably to break the possibility of this limit existing. Unfortunately, there are pathological cases, one of which is described in the text assignment for this lesson, of a curve that doesn't have a finite limit when you try to compute the arc length this way. Just a little idiosyncrasy.
However, for any curve that comes up in real life, that doesn't oscillate too violently with infinite variations, et cetera, et cetera, which we won't, again, talk about right now, the idea is that this limit does exist. As far as this course is concerned, we shall assume the answer to question one is yes. In fact, the way we'll do it without being dictatorial is we'll say, look, if this limit doesn't exist, we just won't study that curve. In fact, we will call a curve rectifiable if this limit exists. And so we'll assume that we deal only with rectifiable curves, in other words, that this limit does exist.
Question number two. OK, the limit exists. So how do we compute it? And that, in general, is not a very easy thing to answer. What's even worse though is that after you've answered this, you have to come to grips with a question that we were able to dodge when we studied both area and volume, namely, the question is once this limit does exist and you compute it, how do you know that it agrees with our intuitive definition of arc length.
In other words, if you recall what we did just a few minutes ago, we defined script 'L' from 'a' to 'b' to be a certain limit. We showed that that limit existed. The problem was is that limit, even though it existed, did not give us an answer that agreed intuitively with what we believed arc length was supposed to mean. In other words, you see, we've assumed the answer to the first question is yes.
Now we have two questions to answer. How do you compute this limit, which is a hard question in it's own right? Secondly, once you do compute this limit, how do you know that it's going to agree with the intuitive answer that you get for arc length? And this shall be what we have to answer in the remainder of our lesson today.
Let's take these in order. And let's try to answer question number two first. The idea is we've defined capital 'L' from 'a' to 'b' to be this particular limit, and we'd like to know if this limit exists. Not only that, but we have a great command of calculus at our disposal now. All of the previous lessons can be brought to bear here to help us put this into the perspective of what calculus is all about.
For example, when I see an expression like this, I like to think in terms of a derivative. A derivative reminds me of 'delta y' divided by 'delta x', et cetera. So what I do here is I factor out a ''delta x' squared'. In other words, I divide through by ''delta x' squared' inside the radical sign, which is really the same equivalently as dividing by 'delta x'. And I multiply by 'delta x' outside.
In other words, factoring out with ''delta x' squared', the square root of ''delta x' squared' plus ''delta y' squared' can be written as the square root of '1 + ''delta y' over 'delta x'' squared' times 'delta x'. Now the idea is that 'delta y' over 'delta x' is the slope of that cord that joins the two endpoints of 'delta w'. It's not a derivative as we know it. It's the slope of a straight line cord, not the slope of a curve.
Now the whole idea is this. We know from the mean value theorem that if our curve is smooth, there is a point in the interval at which the derivative at that point is equal to the slope of the cord. In other words, if 'f' is differentiable on [a, b], we may invoke the Mean Value Theorem—here abbreviated as MVT, the Mean Value Theorem—to conclude that there is some point 'c sub k' in our 'delta x' interval for which ''delta y' over 'delta x'' is 'f prime of 'c sub k''.
And in order to help you facilitate what we're talking about in your minds, look at the following diagram. This is all we're saying. What we're saying is here's our 'delta x', here's our 'delta y'. We'll call this point 'x sub 'k - 1'', this point 'x sub k'. This is our k-th partition. 'Delta y' divided by 'delta x' is just the slope of this line. See, that's just the slope of this line.
And what the Mean Value Theorem says is if this curve is smooth, some place on this arc, there is a point where the line tangent to the curve is parallel to this cord. And that's what I'm calling the point 'c sub k'. 'c sub k' is the point at which the slope of the curve is equal to the slope of the cord.
In other words, if 'f' is continuous, I can conclude that 'L' from 'a' to 'b' is the limit as 'n' approaches infinity, summation 'k' goes from 1 to 'n', square root of '1 + ''f prime 'c sub k'' squared' times 'delta x'. And notice that this now starts to look like my definite integral according to the definition that we were talking about in our earlier lectures in this block. In fact, how can we invoke the first fundamental theorem of integral calculus? Remember, if this expression here—it's not an integral yet—happens to be a continuous function, then we're in pretty good shape.
In other words, if I can assume that 'f prime' is continuous—let's go over here and continue on here. See, what I'm saying is if I can assume that 'f prime' is continuous, well, look, the square of a continuous function is continuous. The sum of two continuous functions is continuous. And the square root of a continuous function is continuous.
In other words, and this is a key point, if the derivative is continuous, I can conclude that the 'L' from 'a' to 'b' can be replaced by the definite integral from 'a' to 'b' square root of '1 + ''dy/dx' squared'' times 'dx', which I quickly point out may be hard to evaluate. In other words, one thing I could try to do over here is to find the function g whose derivative with respect to 'x' is the square root of '1 + ''dy/dx' squared'' and evaluate that between 'a' and 'b'. I can put approximations on here, whatever I want.
In fact, let's summarize it down here. If 'f' is differentiable on the closed interval from 'a' to 'b' and if 'f prime' is the derivative—you see, 'f prime'—is also continuous on the closed interval from 'a' to 'b', then not only does capital 'L' from 'a' to 'b' exist, but it's given computationally by this particular integral. And that answers question number two, that the limit exists, and this is what it's equal to.
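As a quick numerical check of this summary (not part of the lecture itself), the Python sketch below compares the sum of chord lengths with the arc-length integral for an arbitrary smooth example, y = x² on [0, 2]; the chord sums approach the value of the integral as n grows:

```python
import math

f = lambda x: x * x          # arbitrary example curve
fprime = lambda x: 2 * x
a, b = 0.0, 2.0

def chord_sum(n):
    """Sum of sqrt(dx^2 + dy^2) over n equal subdivisions of [a, b]."""
    total = 0.0
    for k in range(n):
        x0 = a + (b - a) * k / n
        x1 = a + (b - a) * (k + 1) / n
        total += math.hypot(x1 - x0, f(x1) - f(x0))
    return total

def arc_length_integral(n):
    """Midpoint-rule estimate of the integral of sqrt(1 + f'(x)^2) over [a, b]."""
    dx = (b - a) / n
    return sum(math.sqrt(1 + fprime(a + (k + 0.5) * dx) ** 2) * dx
               for k in range(n))

reference = arc_length_integral(100_000)   # ~4.6468 for y = x^2 on [0, 2]
for n in (4, 16, 64, 256):
    print(f"n = {n:>3}: chord sum = {chord_sum(n):.6f}  (integral ~ {reference:.6f})")
```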
The problem that we're faced with—and I've written this out. I think it looks harder than what it says. But I've taken the trouble to write this whole thing out, so that if you have trouble following what I'm saying, that you can see this thing blocked out for you. The idea is this.
What we have done is we have approximated 'delta w' by 'delta s'. Then what we said is 'w' is the sum of all these 'delta w's. And since each 'delta w' is approximately 'delta s', then what we can be sure of is that 'w' is approximated by this sum over here.
Now here's what we did. We didn't work with 'w' at all after this. We turned our attention to this. This is what we did in our case here. And we showed that this limit existed. We showed that the limit, as 'k' went from 1 to 'n' and then went to infinity of these pieces here, was 'L' of 'ab'. And that existed. What we did not show is that this limit was w itself.
Intuitively, you might say, if I put the squeeze on this, doesn't this get rid of all the error for me? We haven't shown that we've gotten rid of all the error. In essence, how do we know if all the error has been squeezed out? This is precisely what question three is all about.
Again, going back to what we did earlier, remember, when we approximated 'delta w' by 'delta x', then we said, OK, add up all these 'delta x's, and take the limit as 'n' goes to infinity. We found that that limit was 'b - a', which was not the length of the curve.
In other words, somehow or other, even though the limit existed, we did not squeeze out all the error. And this is why the study of arc length is so difficult. Because we don't have a sandwiching effect. It is very difficult for us to figure out when we've squeezed out all the error.
So at any rate, let me generalize question number three. Remember what question number three is? How do we know that if the limit exists, it's equal to 'w'? All I'm saying is don't even worry about arc length. Just suppose that 'w' is any function defined on a closed interval from 'a' to 'b' and that we've approximated 'delta w' by something of the form 'g of 'c sub k'' times 'delta x', where 'g' is what I call some intuitive function defined on [a, b].
For example, in our earlier example, we started with 'delta w' being arc length. And we approximated 'delta w' by 'delta x' in which case 'g' would've been the function which is identically 1. In the area situation, remember we approximated 'delta A' by something times 'delta x'. Well, what times 'delta x'? Well, it was the height of a rectangle.
In other words, we look at the thing we're trying to find, we use our intuition—and this is difficult because intuition varies from person to person—and we say, what would make a good approximation here. What would be an approximation? We say, OK, let's approximate 'delta w' by 'g of 'c sub k'' times 'delta x', where 'c' is some point in the interval, et cetera.
Then we add up all of these 'delta w's as 'k' goes from 1 to 'n'. We say, OK, that's approximately this thing over here. Now what we have shown is that if 'g' is continuous on [a, b] then as 'n' goes to infinity, this particular limit exists and is denoted by the integral from 'a' to 'b', ''g of x' dx'. This is what we've shown so far.
What the big question is is, granted that this limit exists, does it equal 'w'? In other words, is 'w' equal to the integral from 'a' to b', ''g of x' dx'? That's what the remainder of today's lesson is about as far as arc length is concerned. And I'm going to solve this problem in general first and then make some applications about this to arc length itself.
And by the way, what we're going to see next is you may remember that very, very early in our course, we came to grips with something called infinitesimals. We came to grips with this delta y tan infinitesimals of higher order. And now we're going to see how just as this came up in differential calculus, these same problems of approximation come up in integral calculus. The only difference, as we've mentioned before, is instead of having to come to grips with the indeterminate form 0/0, we're going to have to come to grips with the indeterminate form infinity times 0. Let me show you what I mean by that.
The idea is this. Let's suppose that our case 'delta w'—we've broken up 'w' now into increments—and let's suppose that we're approximating 'delta w', as we said before, by 'g of 'c sub k'' times 'delta x'. Well, what do we mean by we're approximating this? What we mean is there's some error in here.
Let's call the error 'alpha sub k' times 'delta x'. In other words, this is just a correction factor. This is what we have to add on to this to make this equality whole. Once I add on the error, I'm no longer working with an inequality. I'm working with an equality. And that allows me to use some theorems.
What I can say now is by definition, w is the sum of all these 'delta w's. But 'delta w' being a sum, we can use theorems about the sigma notation. In other words, what is the sum of all these 'delta w's? It's the sum of all of these pieces plus the sum of all of these pieces, which I've written over here.
And now you see, if I transpose, I get that 'w' minus this sum is equal to the 'sum k' goes from 1 to 'n', 'alpha k' times 'delta x'. Now the next thing I do is take the limit as 'n' goes to infinity. By definition, since 'g' is a continuous function, this limit here is just the definite integral from 'a' to 'b', ''g of x' dx'. On the other hand, this limit here is what we have to investigate.
In other words, we would like to know whether 'w' is equal to the definite integral or not. If we look at this particular equation, what we have now shown is whatever the relationship is between these two terms, it's typified by the fact that this difference is this particular limit. In other words, if this limit happens to be 0, then the integral will equal what we're setting out to show it's equal to, namely, this function itself. On the other hand, what we're saying is we do not know that this limit is 0.
By the way, notice what's happening over here. As 'n' goes to infinity, 'delta x' is going to 0. In other words, each individual term in the sum is going to 0, but the number of pieces is becoming infinite. There's your infinity times 0 form here.
And let me show you a case where the pieces are growing too fast in number to be offset by the fact that their size is going to 0. For the sake of argument, let me suppose that 'alpha sub k' happens to be some non-0 constant for all 'k'. If I come back to this expression here, if 'alpha sub k' is equal to a constant, I'll replace 'alpha sub k' by that constant, which is 'c'. I now have what?
That the limit that I'm looking for is the 'sum k' goes from 1 to 'n', 'c' times 'delta x', taking the limit as 'n' goes to infinity. 'c' is a constant, so I can take it outside the integral sign. Since 'c' is a constant and it's outside the integral sign, let's look at what 'delta x' is. 'Delta x' is 'b - a' divided by 'n', same as we were talking about earlier in the lecture. I have 'n' of these pieces. The 'n' in the denominator cancels the 'n' in the numerator when I add these up. And notice that this particular sum here, no matter what 'n' is, is just 'b - a'.
In other words, in the case that 'alpha sub k' is a constant, notice that this limit is 'c' times 'b - a'. 'c' is not 0. 'b' is not equal to 'a'. We have an interval here. Therefore, this will not be 0. And notice that if this is not 0, these two things here are not equal.
And by the way, the aside that I would like to make here is that even though this error is not negligible, notice the fact that if 'alpha sub k' is a constant that as 'delta x' goes to 0, this whole term will go to 0. But it doesn't go to 0 fast enough. In other words, eventually, we're taking this sum as 'n' goes to infinity. And here's a case where, what? The pieces went to 0, but not fast enough to become negligible.
Well, let me give you something in contrast to this. Situation number two is suppose instead 'alpha k' is a constant times 'delta x'. 'B' times 'delta x', where 'B' is a constant. In that case, notice that summation 'k' goes from 1 to 'n', 'alpha k' times 'delta x' is just summation 'k' goes from 1 to 'n', 'B' times ''delta x' squared'.
Now keep in mind again that 'delta x' is still 'b - a' over 'n'. So ''delta x' squared', of course, is ''b - a' squared' over 'n squared'. Notice that what's inside the summation sign here does not depend on 'k'. It's a constant. I can take it outside the summation sign.
How many terms of this size do I have? Well, 'k' goes from 1 to 'n', so I have 'n' of those pieces. Therefore, this sum is given by this. This is an 'n squared' term. One of the 'n's in the denominator cancels with my 'n' in the numerator. And in this particular case, I find that the sum, as 'k' goes from 1 to 'n', 'alpha sub k' times 'delta x', is 'B', which is a constant, times ''b - a' squared', which is also a constant, divided by 'n'.
Now look, if I now allow 'n' to go to infinity, my numerator is a constant. My denominator is 'n'. As 'n' goes to infinity, my denominator increases without bound. My numerator remains constant. So the limit is 0.
In other words, in the case where 'alpha sub k' is a constant times 'delta x', this limit is 0, the error is squeezed out, and, in this particular case, 'w' is given by the integral from 'a' to 'b', ''g of x' dx' exactly in this particular situation.
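As a rough numeric sketch of the two cases just described, one can simply sum the error terms on [a, b] = [0, 1] for some arbitrary non-zero constants c and B (the values below are illustrative only and are not taken from the lecture):

```python
# Compare the two error behaviours on [a, b] = [0, 1].
# Case 1: alpha_k = c (a fixed constant)  -> the error sum stays at c*(b - a).
# Case 2: alpha_k = B*delta_x             -> the error sum shrinks like 1/n.
a, b = 0.0, 1.0
c, B = 0.5, 0.5   # arbitrary illustrative constants

for n in (10, 100, 1000, 10000):
    dx = (b - a) / n
    err_case1 = sum(c * dx for _ in range(n))        # = c*(b - a), never vanishes
    err_case2 = sum(B * dx * dx for _ in range(n))   # = B*(b - a)**2 / n -> 0
    print(n, err_case1, err_case2)
```

Running it shows the first column of errors frozen at 0.5 while the second column drops toward 0, which is exactly the contrast between the two situations in the lecture.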
Well, the question is how many situations shall we go through before we generalize. And the answer is since this lecture is already becoming quite long, let's generalize now without any more details. And the generalization is this. In general, if you break down 'w' into increments, which we'll call 'delta 'w sub k'', and 'delta 'w sub k'' is equal to—well, I've made a little slip here. That should be a 'g' in here. I'm using 'g's rather than 'f's.
If 'delta 'w sub k'' is 'g of 'c sub k'' times 'delta x' plus the correction factor 'alpha k' times 'delta x', and, for each 'k', the limit of 'alpha k' as 'delta x' approaches 0 is 0. In other words, what we're saying is that 'alpha sub k' times 'delta x' must be a higher order infinitesimal. If this is a higher order infinitesimal, if 'alpha k' goes to 0 as 'delta x' goes to 0, that says, what? That 'alpha k' times 'delta x' is going to 0 much faster than 'delta x' itself.
So you compare this with our discussion on infinitesimals earlier in our course. I think that was in block two, but that's irrelevant here. But all I'm saying is if that is the case, in that particular case, the limit, that integral is exactly what we're looking for. The error has been squeezed out.
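Restated compactly in symbols (using the same g, 'alpha sub k', and 'delta x' as above; this is a paraphrase of the criterion, not a verbatim part of the lecture):

\[
\Delta w_k = g(c_k)\,\Delta x + \alpha_k\,\Delta x,\qquad \alpha_k \to 0 \ \text{as}\ \Delta x \to 0
\;\Longrightarrow\;
w=\lim_{n\to\infty}\sum_{k=1}^{n}\Delta w_k=\int_a^b g(x)\,dx .
\]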
In other words, now, in conclusion, what we must do in our present problem to answer question number three, remember, we have approximated 'delta 'w sub k'' by this intricate little thing, the square root of '1 + ''f prime 'c sub k'' squared'' times 'delta x'. In other words, in our particular illustration in this lecture, the role of 'g' is played by the square root of '1 + 'f prime squared''. What we must show is that this difference is a higher order differential. And this really requires much more advanced work than we really want to go into.
The only trouble is, as a student, I always used to be upset when the instructor said, the proof is beyond our ability or knowledge. Whenever he used to say, the proof is beyond our knowledge at this stage of the game, I used to say to myself, ah, he doesn't know how to prove it. I think there's something upsetting about this. So what I'm going to try to do for a finale here is to at least give you a plausibility argument that we really do squeeze the error out in our approximation of 'delta w' in this case.
In other words, let me draw this little diagram to bring in the infinitesimal idea here. Here's my 'delta w'. Here's my 'delta s'. And what I'm doing now is I am going to take the tangent line to the curve at 'A', use that rather than 'delta x'.
In other words, what I'm going to say is we're going to assume that our curve doesn't have infinite oscillations. So I can assume the special case of a monotonically increasing function, use the intuitive approach that in this diagram, 'delta w' is caught between 'delta s' and 'AB' plus 'BC', observing that 'BC' is just what's called 'delta y' minus 'delta y-tan'. And that, by the Pythagorean theorem, 'AB' is the square root of ''delta x' squared' plus ''delta 'y sub tan'' squared', which, of course, can be written this particular way, namely, notice that the slope here is the slope of this curve when 'x' is equal to 'x sub 'k - 1''.
And again, this is written out, so I think you can fill in the details as part of your review of the lecture and your homework assignment. All I want to do here is present a plausibility argument using 'AB', 'AC', and 'delta s' as they occur in this diagram. All we're saying is, look, if we're willing to make the assumption that this curve has the right shape, 'delta w' is squeezed between 'delta s' and 'AB' plus 'BC'.
As we showed on our little inset here, 'AB' is the square root of '1 + ''f prime' evaluated 'x sub 'k - 1'' squared' times 'delta x'. What is 'BC'? Remember, 'BC' was 'delta y' minus 'delta y-tan' . That's just your epsilon 'delta x' of your infinitesimal idea, where the limit of epsilon as 'delta x' approaches 0 is 0.
In fact, let me just come over here and make sure we write that part again. Remember what we saw was that 'delta y-tan' was 'dy/dx' evaluated at the point in question plus what? An error term which was called epsilon 'delta x', where epsilon went to 0 as 'delta x' went to 0. And that's all I'm saying over here.
In other words, where is delta s squeezed between right now? Well, let me put it this way, delta s itself, by definition, is the square root of ''delta x' squared' plus ''delta y' squared'. That we saw was this. That was our beginning definition in fact. Now if you look at our diagram once more, notice that since our curve is always holding water and rising, that the slope of the line 'delta s' is greater than the slope of the line 'AB'.
Putting all of this together, we now have 'delta w' squeezed. And it was not at all trivial in putting the squeeze on 'delta w'. There was no self-evident way of saying just because one region was contained in another, it must have a smaller arc length. We really had to be ingenious in how we put the squeeze in to catch this thing. But in the long run, what we now have shown is what? That 'delta w' is equal to this. With an error of no greater than epsilon 'delta x'.
In other words, the exact delta w is what? It's the square root of ''1 + 'f prime 'x sub 'k - 1'' squared' 'delta x' plus 'alpha delta x', where alpha can be no bigger than epsilon. In other words, this is the maximum error that we have here because it's caught between this. Well, look, as 'delta x' approaches 0, so does epsilon. And since alpha is no bigger than epsilon, it must be that as 'delta x' approaches 0, so does alpha approach 0.
In other words, if we now write 'delta w' in this form, observe that, in line with what we're saying, this is a higher order infinitesimal. And as a result, the intuitive approach can be used as the correct answer. The idea is we could have said earlier, look, why don't we approximate the arc length by the straight line segment that joins the two endpoints of the arc. And the answer is you can do that. But you are really on shaky grounds if you say it's self-evident that all the error is squeezed out in the limit.
This is a very, very touchy thing. In other words, in the same way that 0/0 is a very, very sensitive thing in the study of differential calculus, infinity times 0 is equally as sensitive in integral calculus. The whole upshot of today's lecture, however, is now that we've gone through this whole, hard approach, it turns out that we can justify our intuitive approach of approximating the arc length by straight line segments. At any rate, this concludes our lesson for today. And until next time, good-bye.
Coordinate Geometry - GMAT Math Study Guide
Cartesian Coordinate System
The Cartesian coordinate system, shown below, consists of an X-axis (which runs horizontally) and a Y-axis (which runs vertically). The Cartesian coordinate system, also known as the coordinate plane, is used to graph lines, circles, parabolas, points, and other mathematical objects.
The power of the coordinate plane lies in the use of ordered pairs. The ordered pair (5,-2) refers to the point which has an x value of 5 and a y value of -2. Stated differently, the pairing is in the form (x, y). If two or more points are connected, a line or curve is formed.
The following terms are used when interacting with coordinate planes:
- X-Axis - The horizontal line running through the center of the graph from left to right.
- Y-Axis - The vertical line running through the center of the graph from bottom to top.
- Ordered Pair - The means of identifying a point through its coordinates. The proper notation is: (X, Y) where (0, 0) is the intersection of the x and y-axis.
For example, point A is at (2, 4) since it is horizontally 2 units to the right of the center and it is vertically 4 units above the center.
Similarly, point C is at (-6, -2) since it is horizontally 6 units to the left of the center and it is vertically 2 units below the center.
Point E: (0, 0)
Point B: (-7 , 5)
Point D: (2, -1)
- Origin - The point in the center of the coordinate plane where the x and y axis intersect (0, 0). The origin is point E in this graph.
Each coordinate plane is divided up into four quadrants, labeled below. (Note: Some graphs only show one quadrant. In this case, the other quadrants still exist, but they are merely not shown).
In the first quadrant, both x and y are positive while in the second quadrant x is negative and y is positive. The chart below depicts the sign of x and y [denoted (X, Y)].
Quadrant I: (+, +)
Quadrant II: (-, +)
Quadrant III: (-, -)
Quadrant IV: (+, -)
One property of a line is its slope, which is a measure of the steepness of the line. Every line has a slope defined by rise over run (i.e., the amount the line rises vertically over the amount the line runs horizontally). Rise over run refers to the change in the rise (y-values) of any two points on the line over the change in the run (x-values) of the same two points on the line. For example:
In the above graph, point A is at (-8, 4) and point B is at (8, -4), so the slope is rise/run = (4 - [-4])/(-8 - 8) = 8/(-16) = -1/2.
There are four types of slope: positive, negative, zero, and undefined.
- Blue Line - Positive Slope
- Red Line - Negative Slope
- Green Line - Slope of Zero
- Brown Line - Undefined Slope
y = mx+b
In the above coordinate planes, the lines appeared without any explanation as to why each line pointed in a certain direction at a certain steepness. The location and slant of a line are determined by an equation: the line graphs all the points that satisfy that equation. Since this is important, it bears repeating: a line on a coordinate plane is a graphical representation of the set of points that fulfill a mathematical equation.
The standard form in which linear equations graphed on a coordinate plane appear is:
y = mx + b
y is the y-coordinate (or the number of spaces vertically above or below the x-axis)
m is the slope, or the degree of steepness of the line, as defined above
x is the x-coordinate (or the number of spaces horizontally right or left from the y-axis)
b is the y-intercept, which is the number of units above or below the horizontal axis where the line crosses the vertical axis
Consider the following example: graph the line y = 2x + 3.
One means to do this would be to manually generate a list of points that satisfy this equation. For example, if x = 0, y must equal 3; if x = 1, y must equal 5; etc.
However, there is a faster way. According to the y = mx + b formation of a line, m = 2 and b = 3. Consequently, the line being graphed must cross the vertical axis 3 units above the horizontal axis and it must rise vertically 2 units for every 1 unit it runs horizontally.
Horizontal and Vertical Lines
- A horizontal line can be written as y = b since for each value on the line, the y-coordinate will be the same (regardless of the x-coordinate). Since the line does not rise when it runs, the slope, m, is 0.
In the coordinate plane above, the light blue line can be written as: y = 3
- A vertical line can be written as x = n since for each value on the line, the x-coordinate will be the same (regardless of the y-coordinate). However, since the line does not run when it rises, computing the slope would require dividing a non-zero rise by a run of zero, and the slope is therefore undefined (you cannot divide by zero).
In the coordinate plane above, the red line can be written as: x = 3
Writing the Equation of a Line
It is important to know how to take a pair of points (whether from a graph or from a word problem) and write an equation for a line that satisfies the two points. This process involves solving for m and b in the equation y = mx + b. Consider the following example: write the equation of the line that passes through the points (3, 6) and (-2, -4).
The goal is to take these two points and write an equation in the form y = mx + b that passes through the points.
- Find m, the slope
Slope = rise/run = (6 - [-4])/(3 - [-2]) = 10/5 = 2
- Plug in a point (it does not matter which one) and solve for b
y = 2x + b
6 = 2(3) + b
6 = 6 + b
b = 0
- Plug in m and b to write an equation:
y = 2x
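If you prefer to script these three steps, a small sketch along the following lines would do; the helper name line_through is illustrative only and is not part of the guide:

```python
def line_through(p1, p2):
    """Return (m, b) for the line y = m*x + b passing through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # slope = rise over run (fails for vertical lines)
    b = y1 - m * x1             # solve y1 = m*x1 + b for the intercept
    return m, b

# The worked example above: the points (3, 6) and (-2, -4) give y = 2x.
print(line_through((3, 6), (-2, -4)))   # (2.0, 0.0)
```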
The x and y-intercept are important properties of a line and it is often necessary to find the exact location where a line intersects the x-axis and y-axis. The best means to find an intercept is algebraically.
When a line crosses the x-axis, its y-value will be zero. Consequently, by setting y = 0 and solving for x, the x-coordinate at which the line crosses the x-axis can be found.
For example, consider the line y = 2x + 4. The line will cross the x-axis at y = 0, i.e., at the ordered pair (x, 0).
y = 0 = 2x + 4
-4 = 2x
x = -2
The line will cross the x-axis at x = -2 and y = 0.
When a line crosses the y-axis, its x-value will be zero. As a result, by setting x = 0 and solving for y, the y-coordinate at which the line crosses the y-axis can be found. If an equation is in y = mx+b format, recall that since setting x = 0 yields y = b, the value of b is the y intercept.
For example, consider the line y = -15x + 17. The line will cross the y-axis at x = 0, i.e., at the ordered pair (0, y).
y = -15(0) + 17
y = 0 + 17
y = 17
The line will cross the y-axis at x = 0 and y = 17.
Parallel lines are lines that never intersect. In order to never intersect, two lines must have the same angle (technically called slope). If two lines do not have the same slope, they will eventually intersect. However, if two distinct lines have the same slope, they will never intersect.
For example, to find the slope of any line parallel to the line that connects the points (1, 4) and (5, 16): the parallel line will have the same slope as the line that connects those two points.
Slope of line connecting two points: rise/run = (16-4)/(5-1) = 12/4 = 3
Consequently, any line with a slope of 3 will be parallel with the line that connects (1, 4) and (5, 16).
Line A is perpendicular to Line B if Line A intersects Line B at a 90° angle. The most important property of perpendicular lines is as follows:
|Slope of Line A|Slope of Line Perpendicular to Line A|
|m|-1/m|
In other words, the slopes of perpendicular lines are negative reciprocals of each other.
Distance Between Points
In order to find the distance between two points, either: (1) use the distance formula [to be derived] or (2) draw a triangle and use the Pythagorean theorem.
The distance formula comes from the Pythagorean theorem, as the example below shows:
- The best means to solve this type of a question is by drawing in a triangle and solving for the hypotenuse, which is the distance between points K and L. In order to do this, sketch in a triangle by placing a third point such that a right angle is formed (point N below is such a point):
- By inspection, the location of each point is as follows:
L: (2, 1)
N: (5, 1)
K: (5, 5)
- Find the length of each leg of the right triangle:
LN = 5 - 2 = 3
KN = 5 - 1 = 4
- Use the Pythagorean theorem to find the length of KL:
(KL)² = (LN)² + (KN)²
(KL)² = 3² + 4²
(KL)² = 9 + 16
(KL)² = 25
KL = 5
The process above can be simplified using the following formula: d = √((x1 - x2)² + (y1 - y2)²)
The coordinates of K are x1, y1 or (5, 5).
The coordinates of L are x2, y2 or (2, 1).
d = KL = √((5 - 2)² + (5 - 1)²) = √(9 + 16) = √25 = 5
Notice that the distance formula immediately above is simply a formulaic representation of the graphical process undertaken above to solve for KL.
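A short sketch of the same computation in Python (the function name distance is ours; math.hypot is simply the Pythagorean theorem packaged as a library call):

```python
import math

def distance(p1, p2):
    """Distance between two points in the coordinate plane."""
    (x1, y1), (x2, y2) = p1, p2
    return math.hypot(x1 - x2, y1 - y2)   # sqrt((x1-x2)**2 + (y1-y2)**2)

print(distance((5, 5), (2, 1)))   # 5.0, matching KL above
```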
Types of GMAT Problems
- Finding the Equation of a Line Given Two Points
Given two points on a line, it is possible to find the equation of that line. Begin by calculating the slope (i.e., the rise over run). Plug the slope in for m in y = mx + b. Solve for b by plugging in one of the ordered pairs for x and y. Finally, substitute the answer for b back into the equation.
Example: Which of the following is the equation of a line that goes through the point (10, 5) and has an x-intercept of 5?
Correct Answer: B
- If a line has an x-intercept of 5, by definition, the line must go through the point (5,0). You now have two points, which will form a line: (5,0) and (10,5).
- Calculate the slope of the line (i.e., rise over run).
Rise = change in y = 5 - 0 = 5
Run = change in x = 10 - 5 = 5
Slope = Rise/Run = 5/5 = 1
- Substitute the slope, 1, in for m.
y = mx + b
y = 1x + b
y = x + b
- Substitute an ordered pair in for x and y and solve for b, which is the intercept. Use (10, 5).
5 = 1*10 + b
5 -10 = b
b = -5
- Note: You could have substituted in the other point too. Use (5, 0).
0 = 1*5 + b
0 -5 = b
b = -5
- Substitute b=-5 into the equation y = x + b
y = x - 5
- Finding the Distance Between Two Points
The Pythagorean Theorem can be used to find the distance between two points. Recall that the Pythagorean Theorem states that a² + b² = c². Think of a as representing the difference between the x values and b as representing the difference between the y values. Then, think of c as the distance between the two points.
Example: What is the shortest distance between the points (-2, 1) and (2, -2)?
Correct Answer: E
- The shortest distance between any two points is a line.
- Points are written in the (x,y) format, where x is the x-coordinate and y is the y-coordinate.
- Calculate the difference between the x and y values of each coordinate.
Difference between y values = 1 - (-2) = 1 + 2 = 3
Difference between x values = (-2) - 2 = -4
- If you start at (2, -2) and travel over -4 units (i.e., travel to the left) and then travel straight up 3 units, you will end up at (-2, 1). This sketches out a right triangle, where the hypotenuse is the length of the shortest distance between the two points (i.e., a straight line between the two points).
- Use the Pythagorean Theorem with a = -4 and b = 3 to solve for this distance between the two points.
a² + b² = c²
(-4)² + 3² = c²
- Solve for c
16 + 9 = c²
25 = c²
c = 5 (Negative 5 cannot be a solution to a length so it is discarded)
- Problems Involving Quadrants
The axes of the coordinate plane separate the space into four quadrants. The first quadrant is where both x and y are positive. The second quadrant has positive y and negative x values. The third quadrant has negative x and y values. Finally, the fourth quadrant has positive x and negative y values.
To determine which quadrants a line goes through, find the x and y intercepts. Then, draw a rough graph of the line.
Example: Which quadrants does the line y = 2x - 4 go through?
Correct Answer: E
- Find the x-intercept and the y-intercept of the line.
To find the x intercept, set y = 0 and solve for x.
0 = 2x - 4
x = 2
To find the y intercept, set x = 0 and solve for y.
y = 0 - 4
y = -4
- Draw the line using the two intercepts.
- Thus, the line goes through the 1st, 3rd and 4th quadrants.
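One rough way to double-check an answer like this is to sample points on the line and record which quadrants appear; the helper below is an illustrative sketch, not part of the guide:

```python
def quadrants_crossed(m, b, xs=range(-100, 101)):
    """Return the set of quadrants (1-4) that y = m*x + b passes through."""
    quads = set()
    for x in xs:
        y = m * x + b
        if x > 0 and y > 0:
            quads.add(1)
        elif x < 0 and y > 0:
            quads.add(2)
        elif x < 0 and y < 0:
            quads.add(3)
        elif x > 0 and y < 0:
            quads.add(4)
    return quads

print(sorted(quadrants_crossed(2, -4)))   # [1, 3, 4]
```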
In Euclidean geometry, a circle is the set of all points in a plane at a fixed distance, called the radius, from a fixed point, called the centre. Circles are simple closed curves, dividing the plane into an interior and exterior. Sometimes the word circle is used to mean the interior, with the circle itself called the circumference. Usually however, the circumference means the length of the circle, and the interior of the circle is called a disk or disc.
In an x-y coordinate system, the circle with centre (x0, y0) and radius r is the set of all points (x, y) such that
- (x − x0)² + (y − y0)² = r².
If the circle is centered at the origin (0, 0), then this formula can be simplified to
- x² + y² = r².
The circle centered at the origin with radius 1 is called the unit circle.
Expressed in polar coordinates, (x,y) can be written as
- x = x0 + r·cos(φ)
- y = y0 + r·sin(φ).
The slope (or derivative) of a circle can be expressed with the following formula:
- dy/dx = −(x − x0) / (y − y0), for y ≠ y0.
All circles are similar; as a consequence, a circle's circumference and radius are proportional, as are its area and the square of its radius. The constants of proportionality are 2π and π, respectively.
In other words:
- Length of a circle's circumference = 2πr
- Area of a circle = πr²
The formula for the area of a circle can be derived from the formula for the circumference and the formula for the area of a triangle, as follows. Imagine a regular hexagon (six-sided figure) divided into equal triangles, with their apices at the center of the hexagon. The area of the hexagon may be found by the formula for triangle area by adding up the lengths of all the triangle bases (on the exterior of the hexagon), multiplying by the height of the triangles (distance from the middle of the base to the center) and dividing by two. This is an approximation of the area of a circle. Then imagine the same exercise with an octagon (eight-sided figure), and the approximation is a little closer to the area of a circle. As a regular polygon with more and more sides is divided into triangles and the area calculated from this, the area becomes closer and closer to the area of a circle. In the limit, the sum of the bases approaches the circumference 2πr, and the triangles' height approaches the radius r. Multiplying the circumference and radius and dividing by 2, we get the area, π r².
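This limiting argument is easy to check numerically: the area of a regular n-gon inscribed in a circle of radius r is (1/2)·n·r²·sin(2π/n), which approaches πr² as n grows. The sketch below is purely illustrative (the function name is ours):

```python
import math

def inscribed_polygon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r."""
    return 0.5 * n * r * r * math.sin(2 * math.pi / n)

for n in (6, 8, 96, 10000):
    print(n, inscribed_polygon_area(n))   # approaches pi*r^2 = 3.14159... as n grows
```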
A line cutting a circle in two places is called a secant, and a line touching the circle in one place is called a tangent. The tangent lines are necessarily perpendicular to the radii, segments connecting the centre to a point on the circle, whose length matches the definition given above. The segment of a secant bound by the circle is called a chord, and the longest chords are those that pass through the centre, called diameters and divided into two radii. The area of a circle cut off by a chord is called a circle segment.
It is possible (Circle points segments proof) to find the maximum number of unique segments generated by running chords between a number of points on the perimeter of a circle.
If only (part of) a circle is known, then the circle's center can be constructed as follows: take two non-parallel chords, construct perpendicular lines on their midpoints, and find the intersection point of those lines. The radius for such a partial circle may be calculated from the length L of a chord, and the distance D from the center of the chord to the nearest point on the circle by various formulas including:
- Radius = ((L / 2)² + D²) / (2D)
(This follows from a geometric derivation; an equivalent expression can be obtained from a trigonometric derivation.)
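As a rough check of the geometric formula, one might compute (the function name radius_from_chord is illustrative only):

```python
import math

def radius_from_chord(L, D):
    """Radius of a circle from a chord of length L whose midpoint lies a
    distance D from the nearest point of the circle (the formula above)."""
    return ((L / 2) ** 2 + D ** 2) / (2 * D)

# Sanity check: on a unit circle, the horizontal chord at height 0.6 has
# half-length 0.8 (a 3-4-5 triangle) and D = 1 - 0.6 = 0.4.
print(radius_from_chord(1.6, 0.4))   # 1.0
```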
A part of the circumference bound by two radii is called an arc, and the area (i.e., the slice of the disk) within the radii and the arc is a sector. The ratio between the length of an arc and the radius defines the angle between the two radii in radians.
Every triangle gives rise to several circles: its circumcircle containing all three vertices, its incircle lying inside the triangle and touching all three sides, the three excircles lying outside the triangle and touching one side and the extensions of the other two, and its nine point circle which contains various important points of the triangle. Thales' theorem states that if the three vertices of a triangle lie on a given circle with one side of the triangle being a diameter of the circle, then the angle opposite to that side is a right angle.
Given any three points which do not lie on a line, there exists precisely one circle whose boundary contains those points (namely the circumcircle of the triangle defined by the points). Given three particular points (x1,y1), (x2,y2), (x3,y3), the equation of this circle is given in a simple way by the following determinant equation: a point (x, y) lies on that circle exactly when
| x² + y²     x    y    1 |
| x1² + y1²   x1   y1   1 |  =  0
| x2² + y2²   x2   y2   1 |
| x3² + y3²   x3   y3   1 |
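A sketch of how this determinant condition can be evaluated in practice (NumPy is assumed to be available; the function name circle_det is ours):

```python
import numpy as np

def circle_det(x, y, p1, p2, p3):
    """Evaluate the determinant whose zero set is the circle through p1, p2, p3.
    The point (x, y) lies on that circle exactly when the result is 0."""
    rows = [(x, y), p1, p2, p3]
    M = np.array([[px**2 + py**2, px, py, 1.0] for px, py in rows])
    return np.linalg.det(M)

# (1, 0), (0, 1), (-1, 0) lie on the unit circle, so (0, -1) should give ~0:
print(round(circle_det(0, -1, (1, 0), (0, 1), (-1, 0)), 10))   # 0.0 up to rounding
```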
A circle is a kind of conic section, with eccentricity zero.
In affine geometry all circles and ellipses become (affinely) isomorphic, and in projective geometry the other conic sections join them. In topology all simple closed curves are homeomorphic to circles, and the word circle is often applied to them as a result. The 3-dimensional analog of the circle is the sphere.
Squaring the circle refers to the (impossible) task of constructing, for a given circle, a square of equal area with ruler and compass alone. Tarski's circle-squaring problem, by contrast, is the task of dividing a given circle into finitely many pieces and reassembling those pieces to obtain a square of equal area. Assuming the axiom of choice, this is indeed possible.
Three-dimensional shapes whose cross-sections in some planes are circles include spheres, spheroids, cylinders, and cones.
- Clifford's Circle Chain Theorems. This is a step by step presentation of the first theorem. Clifford discovered, in the ordinary Euclidean plane, a "sequence or chain of theorems" of increasing complexity, each building on the last in a natural progression. By Antonio Gutierrez, from "Geometry Step by Step from the Land of the Incas".
1. Plotting solids in Maple.
In this exercise we will be plotting surfaces which define certain solids which are determined by a function that gives the height z of the solid depending on the point (x, y) in some region of the plane. Let us work with an example where the region in the plane is given by a disc of radius 1 centered at the origin.
First, it is important to be able to describe this region in terms that can be used by Maple to plot the region. If you look up the plot3d command in Maple, you will see that if a region in the plane is described by a <= x <= b, c(x) <= y <= d(x),
where c(x) and d(x) are expressions in x, then if h(x,y) is an expression in terms of x and y, then z = h(x,y) determines a surface over the described region, and when z is always positive in this region, we obtain a solid bounded by the x,y-plane, the surface, and vertical lines joining the points on the boundary of the region in the plane to the surface. We can also imagine a solid given by the rule that z varies from an expression k(x,y) below the x,y-plane to an expression h(x,y) above the plane.
The region bounded by the unit circle centered at the origin can be described by -1 <= x <= 1, -sqrt(1-x^2) <= y <= sqrt(1-x^2).
Suppose that the cross sections of the solid perpendicular to the x-axis are given by circles. The first question is how to determine the radius of the circle for a fixed value of x. This turns out to be a fairly easy question to answer, because the radius must be
given by sqrt(1-x^2). Thinking of the projection of this circle on the y,z-plane, we must therefore have y^2 + z^2 = 1 - x^2. Simply solve this expression for z, and you obtain both the upper and lower expressions for the surfaces which bound this solid.
> solve(y^2+z^2 =1-x^2,z);
This means that we can plot upper surface bounding the solid as follows.
> plot3d(sqrt(-y^2+1-x^2),x =-1..1, y=-sqrt(1-x^2)..sqrt(1-x^2), axes=boxed);
Notice that Maple does not do a perfect job of this plot, because the function is changing rapidly near the x,y-plane, so the plot misses some of this part of the surface. We can also display both the upper and lower surfaces on the same graph if we use the plots package and display both plots at once.
Warning, the name changecoords has been redefined
> plot1:=plot3d(sqrt(-y^2+1-x^2),x = -1 .. 1, y = -sqrt(1-x^2) .. sqrt(1-x^2)):
> plot2:=plot3d(-sqrt(-y^2+1-x^2),x = -1 .. 1, y = -sqrt(1-x^2) .. sqrt(1-x^2)):
Notice that the solid described in this way is really a sphere. Maple does not display the complete picture, but we do get a pretty good impression of what it looks like. There are ways to get more complete pictures in Maple, but for this exercise, we will only look at the procedures described above.
Next, let us suppose that we are graphing an equilateral triangle which has a base of length 2a positioned on the y-axis of the y,z-plane, so that two of the vertices are at (-a, 0) and (a, 0). Because it is equilateral, we know that the third vertex can be placed at the point (0, sqrt(3)*a). Then a formula for z in terms of y is z = sqrt(3)*(a - abs(y)). To see that this is true,
let us choose a value of a , and plot the resulting curve.
Suppose a solid lies between planes perpendicular to the x-axis at x = -1 and x = 1. Its cross sections perpendicular to the x-axis run from the semicircle y = -sqrt(1-x^2) to the semicircle y = sqrt(1-x^2), as in the example above, but this time, let us suppose that the cross sections are equilateral triangles with bases in the x,y-plane.
1) For a fixed value of x, determine the value of a which corresponds to the situation in the example above, where the base has length 2a.
2) Use the formula developed in the example above, to write an explicit formula for z in terms of x and y.
3) Plot a 3d picture of the upper surface bounding this solid.
4) Plot a 3d picture of the upper and lower surfaces bounding this solid. (Note that the lower surface is given by z = 0.)
2. Finding the volume of a solid.
In problem 1, we looked at a solid which was given by cross sections which are circles with diameter given by the line segment in the x,y-plane running from the semicircle y = -sqrt(1-x^2) to the semicircle y = sqrt(1-x^2), for x running from -1 to 1. The radius of each such
cross section is sqrt(1-x^2), so the area of each cross section is given by Pi*(1-x^2). Therefore, to compute the total volume of the solid bounded by these curves, we simply compute the integral below.
int(Pi*(1-x^2), x = -1..1) = 4*Pi/3
This result is not so surprising once we reflect on the fact that the solid is a sphere, and the volume of a sphere is given by the formula (4/3)*Pi*r^3, where r is the radius of the sphere. In fact, if we generalize the problem slightly, and let x run from -r..r, and y run from the semicircle y = -sqrt(r^2-x^2) to the semicircle y = sqrt(r^2-x^2), then the area of a cross section would be given by Pi*(r^2-x^2), so the volume would be given by
int(Pi*(r^2-x^2), x = -r..r) = (4/3)*Pi*r^3,
which is just the usual formula for the volume of a sphere.
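The cross-section integral can also be checked numerically outside Maple; the sketch below (in Python, purely for illustration, with function names of our own choosing) applies a simple midpoint sum to A(x) = Pi*(1 - x^2):

```python
import math

def volume_by_cross_sections(area, a, b, n=100000):
    """Midpoint-rule approximation of the integral of area(x) over [a, b]."""
    h = (b - a) / n
    return h * sum(area(a + (k + 0.5) * h) for k in range(n))

area = lambda x: math.pi * (1 - x * x)              # disk of radius sqrt(1 - x^2)
print(volume_by_cross_sections(area, -1.0, 1.0))    # ~4.18879
print(4 * math.pi / 3)                              # 4*Pi/3 for comparison
```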
For the solid in your submission on activity 1, do the following.
1) Give a formula for the area of an equilateral triangle which has a base of length 2a.
2) Use this formula to compute the area of the cross sections perpendicular to the x -axis for your solid.
3) Compute the total volume of the solid.
3. Visualizing solids of revolution.
The hardest part of understanding how to obtain the volume of a solid of revolution is visualizing how the solid is approximated by small disks or washers. In this exercise, we use Maple's 3-d graphics capabilities to help analyze a solid of revolution. First you will have to load the package rev , as follows:
Warning, the name arrow has been redefined
This package contains commands example2 , example3 , example4 . These commands take only one parameter: the number of approximating thin disks/washers. Work with each of these three commands using various values of the parameter.
Error, missing operator or `;`
Notice that you can "tilt and spin" the 3d graph by grabbing with the mouse .
Find the volume of the fifth thin disk/washer for each example with n= 10.
4. Solids of revolution.
Let us consider a simple case where a curve y = f(x) is revolved around the x-axis. Then the set of points (x, y, z) which lie on the surface of the resulting solid is given by the formula y^2 + z^2 = f(x)^2. This set of points can be plotted using Maple's implicitplot3d command. To use this command, you first have to load the plots package, as shown below.
Warning, the name arrow has been redefined
Let us pick a simple curve and plot it implicitly. Suppose that , and the domain for x is . In order to use the implicitplot, you will have to select fixed x , y and z domains, which should be chosen large enough to include all the points on the surface. You may have to determine these by experiment. To determine appropriate choices, let us first plot the function on the domain in question.
This suggests that if we choose the intervals , , we should get the complete picture.
Let us plot the surface.
This picture looks pretty terrible, because the grid that Maple used is very coarse. One can improve on the picture by choosing a finer grid, but then the computation takes longer, and the display is harder to manipulate. In fact, it is easy
to crash Maple by choosing too fine a grid, so be warned. Below, is an example of a choice of a grid that gives a modest improvement in the accuracy of the picture.
Let us find the volume of the solid bounded by this surface. This is a simple example of the general procedure of finding the area of a cross section, because the cross sections perpendicular to the x-axis have area given by the formula A(x) = Pi*f(x)^2. Thus the total volume in our example is given by
Consider the solid of revolution given by revolving the graph of the function about the x -axis.
1) Suppose that we choose for the domain. Find appropriate ranges for and in order to plot the
2) Implicitly plot the surface described.
3). Find the volume of the solid of revolution determined by this surface.
5. A volume of revolution.
Consider the volume of the solid obtained by revolving the region bounded by the curves y = cos(x), y = 0, x = 0 and about the line y=1.
(a) Sketch the planar region that is to be revolved.
(b) Give an explanation of the approximating process that gives rise to the definite integral.
(c) Write the definite integral corresponding to the volume of the solid.
(d) Find the value of this integral.
6. Volumes by the washer method.
The washer method applies to the situation where a region in a plane, bounded by two curves, is rotated around a line in the plane. Cross sections will be annuli or washer-shaped regions. For example, suppose that a region bounded by the curves y = f(x) and y = g(x), between x = a and x = b, is revolved about the x-axis.
Let , , and . Let us plot this region to see what is going on.
We can plot each of the surfaces determined by the two curves implicitly, and put the two plots together to get a picture of the solid in question.
Now let us compute the volume of the solid bounded by the two curves. Since f(x) >= g(x) for all x in our domain, we can use the formula A(x) = Pi*(f(x)^2 - g(x)^2) for the area of the cross sections. Thus the volume is given by
Consider the region in the x,y-plane bounded by the curves and , from . Note that these curves are determined by x as a function of y , instead of the other way around.
1) Plot the solid given by revolving the region bounded by these two curves around the y -axis.
2) Determine the volume of the solid you obtained in 1).
3) Express this region by two curves determined by y as a function of x instead of the other way around. Determine the domain for x .
4) Plot the solid given by revolving the region bounded by these two curves around the x -axis.
5) Compute the volume of the solid you plotted in 4).
Have you ever wanted (or had) to know how to calculate the volume of your PC? Or even wondered what the surface area of your doughnut is?
Throughout this entry:
- S is the surface area of an object
- V is its volume
- Π represents Pi
A cuboid is a shape of which all the sides are squares or rectangles, like a matchbox. All cuboids have six faces, each of which has four edges.
- l is the length of the cuboid
- b is its breadth
- h is its height
Calculating the surface area of a cuboid is very simple. Since there are six faces you can calculate the area of each face and then add them together:
S = lb + lb + lh + lh + hb + hb
S = 2lb + 2lh + 2hb
So if you had a cuboid measuring 3cm by 4cm by 8cm, you would calculate the surface area thus:
S = 2 × (3×4) + 2 × (4×8) + 2 × (3×8)
S = 2×12 + 2×32 + 2×24
S = 24 + 64 + 48
S = 136cm²
Calculating the volume is far easier than calculating the surface area. It's simply the height multiplied by the length multiplied by the width.
V = lbh
V = 3 × 4 × 8
V = 96cm³
A cube is a very simple form of cuboid which has edges that are all the same length. All the faces are therefore identical squares. Dice are good examples of cubes.
- l is the length of one edge of the cube
Surface area: Because all the faces are identical, and there are six of them, all we need to do is to calculate the area of one of the faces, and multiply by 6:
S = 6l²
As its name implies, you can calculate the volume of a cube by cubing the length of one of its edges.
V = l³
Any shape that has the same shape and area of cross-section all the way through is a prism. Therefore, a cylinder is a type of prism, as is a cuboid. Because all prisms are different in cross-section, each has a different equation for calculating their surface area. The equation for calculating the volume of a prism, however, is constant. It is the cross-sectional area of the prism multiplied by its length.
- a is the cross-sectional area of the prism
- l is the length of the prism
V = al
A cylinder is a prism with a circular cross-section. It is a very simple object and can be cut (for mathematical purposes) into three polygons: two circles and a rectangle wrapped around them.
- r is the radius of the cylinder
- l is its length
Since the cylinder is formed from two identical circles and a rectangle, as stated earlier, all we need to do is calculate the areas of each of them and add them together:
S = (Πr²) + (Πr²) + (2Πr × l)
S = 2Πr² + 2Πrl
Because a cylinder is a prism, calculating the volume is very simple. It is the cross-sectional area (ie the circle either at the top or bottom) multiplied by the height:
V = (l × Πr²)
V = lΠr²
A sphere is an object shaped like a tennis ball. It looks circular when viewed from any direction. This is a very strange object, mathematically, because it is so complex while being extremely simple.
- r is the radius of the sphere
Surface Area: S = 4Πr²
Volume: V = (4/3)Πr³
A torus is a 3D shape like a ring donut. It is formed of a cylinder twisted round into a circle.
- a is the radius of the entire torus
- b is the radius of the cylinder (the basic shape before it is twisted into a circle)
It sounds somehow funny, but the formula for the surface area of the torus is just like calculating the surface of the cylinder before it is twisted round into a circle (without the top and the bottom, of course). Basically, it's the circumference of the torus through the centre of the cylinder multiplied by the circumference of the cylinder, therefore:
S = 2Π(a-b) × 2Πb
S = 4b(a-b)Π²
The same principle applies for the volume of the torus; it's the circumference of the torus through the centre of the cylinder (that's the length of the cylinder before it is twisted into a torus) multiplied by the cross-sectional area of the cylinder.
V = 2Π(a-b) × Πb²
V = 2(a-b)(Πb)²
A cone is any object that tapers to a point (or apex). So a pyramid is a type of cone, as is a similar object with a 5, 7, or even 9-sided base.
As for a prism, there are many different cones, so there are many different formulae for calculating the surface area. However, the formulae for the standard cone, with a circular base, and for a pyramid, with a square base, will be given here.
The Simple Cone's Surface Area
- r is the radius of the cone
- l is the distance from the edge of the base of the cone to the apex
The equation is very simple:
S = Πrl + Πr²
The Pyramid's Surface Area
- w is the length of one edge of the base of the pyramid
- l is the distance between the centre of one edge of the base and the apex of the pyramid
Because the pyramid can be broken down into a square and four identical triangles, all we need to do is to calculate the area of each of these components and then add them together:
S = w² + 0.5lw + 0.5lw + 0.5lw + 0.5lw
S = w² + 2lw
The equation for calculating the volume of a cone is the very same for all cones, no matter whether they have a circular or polygonal base.
- h is the distance from the centre of the base to the apex of the cone
- b is the area of the base
This equation is the same for all cones:
V = hb ÷ 3
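For anyone who would rather compute than plug numbers in by hand, the formulas above translate directly into code. The Python sketch below (function names are ours, and a and b for the torus follow the definitions given earlier) mirrors the cuboid, cylinder, sphere, torus and cone results:

```python
import math

def cuboid_volume(l, b, h):
    """V = lbh"""
    return l * b * h

def cylinder_volume(r, l):
    """V = l * pi * r^2"""
    return l * math.pi * r ** 2

def sphere_volume(r):
    """V = (4/3) * pi * r^3"""
    return 4.0 / 3.0 * math.pi * r ** 3

def torus_volume(a, b):
    """a = radius of the entire torus, b = radius of the tube: V = 2(a-b)(pi*b)^2"""
    return 2 * (a - b) * (math.pi * b) ** 2

def cone_volume(h, base_area):
    """V = h * b / 3, where b is the area of the base"""
    return h * base_area / 3.0

print(cuboid_volume(3, 4, 8))   # 96, matching the matchbox example above
print(sphere_volume(1))         # ~4.18879
```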
Q: I’ve heard that there is a way to calculate the amount of dough needed to make any size of pizza. Can you explain how this is done?
A: What you are referring to is the use of our old friend “pi” to calculate the surface area of a circle, and then using that number to develop a dough density number. It may sound confusing, but it really isn’t. Here is the way it’s done.
Let’s say you want to make 12-, 14-, and 16-inch diameter pizzas, and you need to know what the correct dough weight will be for each size. The first thing to do is to pick a size you want to work with (any size at all will work). We’ll assume we opted to work with the 12-inch size. The first thing to do is to make our dough, then scale and ball some dough balls using different scaling weights for the dough balls. The idea here is to make pizzas from the different dough ball weights, and then, based on the characteristics of the finished pizza, select the dough ball weight that gives us the pizza that we want with regard to crust appearance, texture and thickness. Make a note of that weight. For this example, we will say that 11 ounces of dough gives us what we were looking for.
We’re now going to find the dough density number that is all-important in determining the dough weights for the other sizes. Begin by calculating the surface area of the size of pizza you elected to find the dough weight for. In this case, it is a 12-inch pizza. The formula for finding the surface area of a circle is pi x R squared. Pi equals 3.14, and R is half of the diameter. To square it we simply multiply it times itself.
Here is what the math looks like:
3.14 x 6 x 6 (or 36) = 113.04 square inches
To calculate the dough density number, we will need to divide the dough weight by the number of square inches. So, now we have 11 ounces divided by 113.04 = 0.0973106 ounces of dough per square inch of surface area on our 12-inch pizza. This number is referred to as the “dough density number.”
Our next step is to calculate the number of square inches of surface area in each of the other sizes we want to make. In this case we want to make 14- and 16-inch pizzas in addition to the 12-inch pizza.
The surface area of a 14-inch pizza is 3.14 x 49 (7 x 7 = 49) = 153.86 square inches of surface area. All we need to do now is to multiply the surface area of the 14-inch pizza by the dough density number (0.0973106) to find the dough scaling weight for the 14-inch pizza — 153.86 x 0.0973106 = 14.972208 ounces of dough. Round that off to 15 ounces of dough needed to make the 14-inch pizza crust.
For the 16-inch pizza we multiply 3.14 X 64 (8 x 8 = 64) = 200.96 square inches of surface area. Multiply this times the dough density factor to get the dough weight required to make our 16-inch crusts. 200.96 X 0.0973106 = 19.555538 ounces of dough. Round that off to 19.5 ounces of dough needed to make the 16-inch pizza crust.
In summary, the following dough weights will be needed to make our 12-, 14-, and 16-inch pizza crusts: 12-inch (11-ounces); 14-inch (15-ounces): and 16-inch (19.5-ounces).
In addition to being used to calculate dough weights for different size pizzas, this same calculation can be used to find the weights for both sauce and cheese, too.
In these applications, all you need to do is to substitute the dough weight with the sauce or cheese weight found to make the best pizza for you. This will provide you with a specific sauce or cheese weight, which can then be used in exactly the same manner to calculate the amount of sauce or cheese required for any other size pizza you wish to make. As an example, going back to that 12-inch pizza, let’s say we really like the pizza when it has 5 ounces of sauce on it. We already know that a 12-inch pizza has a surface area of 113.04 square inches, so we divide five-ounces by 113.04 = 0.0442321 ounces of sauce per square inch of surface area. Our sauce density number is 0.0442321. We know that the 14-inch pizza has a surface area of 153.86 square inches. So all we need to do is to multiply 153.86 times the sauce density number to find the correct amount of sauce to use on our 14-inch pizza. 153.86 x 0.0442321 = 6.80-ounces of sauce should be used on our 14-inch pizza.
For the 16-inch pizza, we know that it has 200.96 square inches of surface area. So all we need to do is multiply this times the sauce density factor — 200.96 x 0.0442321 = 8.88 ounces of sauce should be used on our 16-inch pizza.
To calculate the amount of cheese to use, again, we will use the 12-inch pizza and experiment with applying different amounts of cheese until we find the amount that works best for us. Then divide this amount by the surface area of our test pizza (a 12-inch, which has 113.04-inches of surface area). Lets say that we found six ounces of cheese to work well in our application. six-ounces divided by 113.04 = 0.0530785-ounce of cheese per square inch of surface area. Our cheese density number is 0.0530785.
A 14-inch pizza has 153.86 square inches of surface area. Multiply this times the cheese density number to find the amount of cheese to add on our 14-inch pizza — 153.86 x 0.0530785 = 8.16-ounces of cheese should be used on our 14-inch pizza.
A 16-inch pizza has 200.96 square inches of surface area. Multiply this times the cheese density number to find the amount of cheese to add on our 16-inch pizza — 200.96 x 0.0530785 = 10.66-ounces of cheese should be used on our 16-inch pizza.
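The whole procedure is easy to script. The Python sketch below (with function names of our own choosing) reproduces the numbers worked out above from the 12-inch reference pizza; note that it uses math.pi rather than 3.14, so the results differ from the article's by a hair.

```python
import math

def surface_area(diameter):
    """Surface area of a round pizza: pi * r^2."""
    r = diameter / 2.0
    return math.pi * r * r

def density(reference_weight, reference_diameter):
    """Ounces per square inch, from the weight that works on the reference size."""
    return reference_weight / surface_area(reference_diameter)

def weight_for(diameter, density_factor):
    """Scaling weight for a given pizza diameter."""
    return surface_area(diameter) * density_factor

dough = density(11.0, 12)    # ~0.097 oz of dough per square inch
sauce = density(5.0, 12)     # ~0.044 oz of sauce per square inch
cheese = density(6.0, 12)    # ~0.053 oz of cheese per square inch

for d in (12, 14, 16):
    print(d, round(weight_for(d, dough), 1),
             round(weight_for(d, sauce), 1),
             round(weight_for(d, cheese), 1))
```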
By calculating your dough, sauce and cheese weights for each of your pizza sizes, you will find that your pizzas will bake in a more similar manner, regardless of size. This is especially true if you are baking in any of the conveyor ovens, in which the baking time is fixed, and you want to be able to bake all of your pizza sizes at similar baking times. Typically, this allows us to bake pizzas with one to three toppings on one conveyor, regardless of size, and those pizzas with four or more toppings on another conveyor, again, regardless of size.
If you use a deck or conveyor oven, you will find that your pizzas will bake with greater predictability, and your cost control over your different size ranges will be enhanced, and that can’t hurt in today’s economy.
Tom Lehmann is a director at the American Institute of Baking in Manhattan, Kansas.
Hubert Selhofer, revised by Marcel Oliver
updated to current Octave version by Thomas L. Scofield
Octave is an interactive programming language specifically suited for vectorizable numerical calculations. It provides a high level interface to many standard libraries of numerical mathematics, e.g. LAPACK or BLAS.
The syntax of Octave resembles that of Matlab. An Octave program usually runs unmodified on Matlab. Matlab, being commercial software, has a larger function set, and so the reverse does not always work, especially when the program makes use of specialized add-on toolboxes for Matlab.
octave:1> help eig
<    smaller             <=   smaller or equal      &    and
>    greater             >=   greater or equal      |    or
octave:1> x12 = 1/8, long_name = 'A String'
x12 = 0.12500
long_name = A String
octave:2> sqrt(-1)-i
ans = 0
octave:3> x = sqrt(2); sin(x)/x
ans = 0.69846
And here is a script doless, saved in a file named doless.m:
one = 1;
two = 2;
three = one + two;
Calling the script:
octave:1> doless
octave:2> whos
*** local user variables:
prot  type          rows   cols   name
====  ====          ====   ====   ====
 wd   real scalar      1      1   three
 wd   real scalar      1      1   one
 wd   real scalar      1      1   two
Matrices and vectors are the most important building blocks for programming in Octave.
v = [ 1 2 3 ]
v = [ 1; 2; 3 ]
octave:1> x = 3:6
x = 3 4 5 6
octave:2> y = 0:.15:.7
y = 0.00000 0.15000 0.30000 0.45000 0.60000
octave:3> z = pi:-pi/4:0
z = 3.14159 2.35619 1.57080 0.78540 0.00000
A matrix is generated as follows.
octave:1> A = [ 1 2; 3 4]
A =
  1  2
  3  4
Matrices can be assembled from submatrices:
octave:2> b = [5; 6];
octave:3> M = [A b]
M =
  1  2  5
  3  4  6
There are functions to create frequently used matrices. If the matrix is square (m = n), only one argument is necessary.
octave:1> A = [1 2; 3 4]; B = 2*ones(2,2);
octave:2> A+B, A-B, A*B
ans =
  3  4
  5  6
ans =
  -1  0
   1  2
ans =
   6   6
  14  14
While * refers to the usual matrix multiplication, .* denotes element-wise multiplication. Similarly, ./ and .^ denote the element-wise division and power operators.
octave:1> A = [1 2; 3 4]; A.^2 % Element-wise power
ans =
   1   4
   9  16
octave:2> A^2 % Proper matrix power: A^2 = A*A
ans =
   7  10
  15  22
octave:1> A = [1 2 3; 4 5 6]; v = [7; 8];
octave:2> A(2,3) = v(2)
A =
  1  2  3
  4  5  8
octave:3> A(:,2) = v
A =
  1  7  3
  4  8  8
octave:4> A(1,1:2) = v'
A =
  7  8  3
  4  8  8
A\b solves the equation Ax = b.
Traditionally, functions are also stored in plain text files with suffix .m. In contrast to scripts, functions can be called with arguments, and all variables used within the function are local--they do not influence variables defined previously.
A function f, saved in the file named f.m.
function y = f (x)
  y = cos(x/2)+x;
end
In Octave, several functions can be defined in a single script file. Matlab on the other hand, strictly enforces one function per .m file, where the name of the function must match the name of the file. If compatibility with Matlab is important, this restriction should also be applied to programs written in Octave.
A function dolittle, which is saved in the file named dolittle.m.
function [out1,out2] = dolittle (x)
  out1 = x^2;
  out2 = out1*x;
end
Calling the function:
octave:1> [x1,x2]=dolittle(2)
x1 = 4
x2 = 8
octave:2> whos
*** currently compiled functions:
prot  type            rows   cols   name
====  ====            ====   ====   ====
 wd   user function      -      -   dolittle
*** local user variables:
prot  type          rows   cols   name
====  ====          ====   ====   ====
 wd   real scalar      1      1   x1
 wd   real scalar      1      1   x2
Obviously, the variables out1 and out2 were local to dolittle. Previously defined variables out1 or out2 would not have been affected by calling dolittle.
global name declares name as a global variable.
A function foo in the file named foo.m:
global N % makes N a global variable; may be set in main file
function out = foo(arg1,arg2)
  global N % makes local N refer to the global N
  <Computation>
end
If you change N within the function, it changes the value of N everywhere.
The syntax of for- and while-loops is immediate from the following examples:
for n = 1:10
  [x(n),y(n)] = dolittle(n);
end

while t<T
  t = t+h;
end
For-loop backward:
for n = 10:-1:1 ...
Conditional branching works as follows.
if x==0
  error('x is 0!');
else
  y = 1/x;
end

switch pnorm
  case 1
    sum(abs(v))
  case inf
    max(abs(v))
  otherwise
    sqrt(v'*v)
end
Approximate an integral by the midpoint rule:
We define two functions, gauss.m and mpr.m, as follows:
function y = gauss(x) y = exp(-x.^2/2); end function S = mpr(fun,a,b,N) h = (b-a)/N; S = h*sum(feval(fun,[a+h/2:h:b])); endNow the function gauss can be integrated by calling:
Loops and function calls, especially through feval, have a very high computational overhead. Therefore, if possible, vectorize all operations.
We are programming the midpoint rule from the previous section
with a for-loop (file name is mpr_long.m):
function S = mpr_long(fun,a,b,N)
  h = (b-a)/N;
  S = 0;
  for k = 0:(N-1),
    S = S + feval(fun,a+h*(k+1/2));
  end
  S = h*S;
end
We verify that mpr and mpr_long yield the same answer, and compare the evaluation times.
octave:1> t = cputime;
> Int1=mpr('gauss',0,5,500); t1=cputime-t;
octave:2> t = cputime;
> Int2=mpr_long('gauss',0,5,500); t2=cputime-t;
octave:3> Int1-Int2, t2/t1
ans = 0
ans = 45.250
octave:1> for k = .1:.2:.5,
> fprintf('1/%g = %10.2e\n',k,1/k); end
1/0.1 =   1.00e+01
1/0.3 =   3.33e+00
1/0.5 =   2.00e+00
Procedure for plotting a function f(x):
x = x_min:step_size:x_max;
(See also Section 2.1.)
y = f(x);
Important: Since the expression for f is evaluated element-wise on the vector x, you must use the operators .*, ./, .^ etc. instead of the usual *, / and ^! (See Section 2.4.)
octave:1> x = -10:.1:10;
octave:2> y = sin(x).*exp(-abs(x));
octave:3> plot(x,y)
octave:4> grid
octave:1> x = -2:0.1:2;
octave:2> [xx,yy] = meshgrid(x,x);
octave:3> z = sin(xx.^2-yy.^2);
octave:4> mesh(x,x,z);
Take the matrix A = [1 2; 3 4] and the vector b = [36; 88].
Solve the system of equations Ax = b. Calculate the LU and QR decompositions, and the eigenvalues and eigenvectors of A. Compute the Cholesky decomposition of A'*A, and verify that cond(A)^2 equals cond(A'*A).
A = reshape(1:4,2,2).';
b = [36; 88];
A\b
[L,U,P] = lu(A)
[Q,R] = qr(A)
[V,D] = eig(A)
A2 = A.'*A;
R = chol(A2)
cond(A)^2 - cond(A2)
Compute the matrix-vector product of a random matrix with a random vector in two different ways. First, use the built-in matrix multiplication *. Next, use for-loops. Compare the results and computing times.
A = rand(100); b = rand(100,1);
t = cputime; v = A*b; t1 = cputime-t;
w = zeros(100,1);
t = cputime;
for n = 1:100,
  for m = 1:100
    w(n) = w(n)+A(n,m)*b(m);
  end
end
t2 = cputime-t;
norm(v-w), t2/t1
Running this script yields the following output.
ans = 0
ans = 577.00
Calculate all the roots of the polynomial
p(z) = (147/60) z^6 - 6 z^5 + (15/2) z^4 - (20/3) z^3 + (15/4) z^2 - (6/5) z + 1/6,
the characteristic polynomial of the BDF6 method.
Hint: Use the command compan.
Plot these roots as points in the complex plane and draw a unit circle for comparison. (Hint: hold, real, imag).
bdf6 = [147/60 -6 15/2 -20/3 15/4 -6/5 1/6];
R = eig(compan(bdf6));
plot(R,'+'); hold on
plot(exp(pi*i*[0:.01:2]));
if any(find(abs(R)>1))
  fprintf('BDF6 is unstable\n');
else
  fprintf('BDF6 is stable\n');
end
Plot the graph of the function f(x,y) = exp(-x^2-y^2) on the square -3 <= x <= 3, -3 <= y <= 3.
x = -3:0.1:3;
[xx,yy] = meshgrid(x,x);
z = exp(-xx.^2-yy.^2);
figure, mesh(x,x,z);
title('exp(-x^2-y^2)');
For each Hilbert matrix H_n, where n = 1,...,15, compute the solution to the linear system H_n x = b, b = ones(n,1). Calculate the error and the condition number of the matrix and plot both in semi-logarithmic coordinates. (Hint: hilb, invhilb.)
err = zeros(15,1); co = zeros(15,1);
for k = 1:15
  H = hilb(k); b = ones(k,1);
  err(k) = norm(H\b-invhilb(k)*b);
  co(k) = cond(H);
end
semilogy(1:15,err,'r',1:15,co,'x');
Calculate the least square fit of a straight line to the points (x_i, y_i), given as two vectors x and y. Plot the points and the line.
function coeff = least_square (x,y)
  n = length(x);
  A = [x ones(n,1)];
  coeff = A\y;
  plot(x,y,'x'); hold on
  interv = [min(x) max(x)];
  plot(interv,coeff(1)*interv+coeff(2));
end
Write a program to integrate an arbitrary function in one variable on an interval [a,b] numerically using the trapezoidal rule with increment h = (b-a)/N:
T(h) = h * ( f(a)/2 + f(a+h) + f(a+2h) + ... + f(b-h) + f(b)/2 )
For a function of your choice, check, by generating a doubly logarithmic error plot, that the trapezoidal rule is of order 2.
function S = trapez(fun,a,b,N)
  h = (b-a)/N;
  % fy = feval(fun,[a:h:b]); better:
  fy = feval(fun,linspace(a,b,N+1));
  fy(1) = fy(1)/2;
  fy(N+1) = fy(N+1)/2;
  S = h*sum(fy);
end

function y = f(x)
  y = exp(x);
end

for k=1:15;
  err(k) = abs(exp(1)-1-trapez('f',0,1,2^k));
end
loglog(1./2.^[1:15],err); hold on;
title('Trapezoidal rule, f(x) = exp(x)');
xlabel('Increment'); ylabel('Error');
loglog(1./2.^[1:15],err,'x');
To better understand certain problems involving rockets
it is necessary to use some mathematical ideas from
the study of triangles.
Let us begin with some definitions and terminology
which we will use on this slide.
A right triangle is a
three sided figure with one angle equal to 90 degrees. A 90 degree angle is
called a right angle which gives the right triangle its name.
We pick one of the two remaining angles and label it c
and the third angle we label d.
The sum of the angles of any triangle is equal to 180 degrees.
If we know the value of c,
we then know that the value of d:
90 + c + d = 180
d = 180 - 90 - c
d = 90 - c
We define the side of the triangle opposite from the right angle to
be the hypotenuse. It is the longest side of the three sides
of the right triangle. The word "hypotenuse" comes from two Greek words
meaning "to stretch", since this is the longest side.
We label the hypotenuse with the symbol h.
There is a side opposite the angle c which we label o
for "opposite". The remaining side we label a for "adjacent".
The angle c is formed by the intersection of the hypotenuse h
and the adjacent side a.
We are interested in the relations between the sides and the angles of
the right triangle.
Let us start with some definitions.
We will call the ratio of the opposite side of a right triangle to the hypotenuse
the sine and give it the symbol sin.
sin = o / h
The ratio of the adjacent side of a right triangle to the hypotenuse is called the
cosine and given the symbol cos.
cos = a / h
Finally, the ratio of the opposite side to the adjacent side is called the
tangent and given the symbol tan.
tan = o / a
We claim that the value of each ratio depends only on the value of
the angle c formed by the adjacent and the hypotenuse.
To demonstrate this fact,
let's study the three figures in the middle of the page.
In this example, we have
an 8 foot ladder that we are going to lean against a wall. The wall is
8 feet high, and we have drawn white lines on the wall
and blue lines along the ground at one foot intervals.
The length of the ladder is fixed.
If we incline the ladder so that its base is 2 feet from the wall,
the ladder forms an angle of nearly 75.5 degrees with the ground.
The ladder, ground, and wall form a right triangle. The ratio of the distance from the
wall (a - adjacent), to the length of the ladder (h - hypotenuse), is 2/8 = .25.
This is defined to be the cosine of c = 75.5 degrees. (On another page
we will show that if the ladder were twice as long (16 feet),
and inclined at the same angle (75.5 degrees), it would sit twice as
far (4 feet) from the wall. The ratio stays the same for any right triangle
with a 75.5 degree angle.)
If we measure the spot on the wall where the ladder touches (o - opposite), the distance is
7.745 feet. You can check this distance by using the Pythagorean theorem
that relates the sides of a right triangle:
h^2 = a^2 + o^2
o^2 = h^2 - a^2
o^2 = 8^2 - 2^2
o^2 = 64 - 4 = 60
o = 7.745
The ratio of the opposite to the hypotenuse is .968 and is defined to be the
sine of the angle c = 75.5 degrees.
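The numbers in this ladder example can be reproduced with a few lines of Octave (an illustrative sketch, not part of the original page; the variable names are ours):

h = 8;                  % length of the ladder (hypotenuse), in feet
a = 2;                  % distance of the base from the wall (adjacent side)
o = sqrt(h^2 - a^2)     % height reached on the wall (opposite side), about 7.745
c = acos(a/h)*180/pi    % angle with the ground, about 75.5 degrees
sin_c = o/h             % about .968, the sine of c
cos_c = a/h             % .25, the cosine of c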
Now suppose we incline the 8 foot ladder so that its base is 4 feet from the wall.
As shown on the figure, the ladder is now inclined at a lower angle than in the
first example. The angle is 60 degrees, and the ratio of the adjacent to
the hypotenuse is now 4/8 = .5 . Decreasing the angle c
increases the cosine of the angle because the hypotenuse is fixed
and the adjacent increases as the angle decreases. If we incline the 8 foot
ladder so that its base is 6 feet from the wall, the angle decreases to
about 41.4 degrees and the ratio increases to 6/8, which is .75.
As you can see, for every angle,
there is a unique point on the ground that the 8 foot ladder touches,
and it is the same point every time we set the ladder to that angle.
Mathematicians call this situation a function.
The ratio of the adjacent
side to the hypotenuse is a function of the angle c, so we can write the
symbol as cos(c) = value.
Notice also that as the cos(c) increases, the sin(c) decreases.
If we incline the ladder so that the base is 6.928 feet from the wall,
the angle c becomes 30 degrees and the ratio of the adjacent to
the hypotenuse is .866.
Comparing this result with example two we find that:
cos(c = 60 degrees) = sin (c = 30 degrees)
sin(c = 60 degrees) = cos (c = 30 degrees)
We can generalize this relationship:
sin(c) = cos (90 - c)
90 - c is the magnitude of angle d. That is why we
call the ratio of the adjacent and the hypotenuse the "co-sine" of the angle.
sin(c) = cos (d)
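This complementary-angle relation is easy to check numerically (an illustrative Octave sketch; the angle chosen is arbitrary):

c = 60*pi/180;          % the angle c, converted to radians
d = pi/2 - c;           % the complementary angle d = 90 - c
sin(c) - cos(d)         % essentially 0: sin(c) equals cos(90 - c)
cos(c) - sin(d)         % essentially 0: cos(c) equals sin(90 - c)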
Since the sine, cosine, and tangent are all functions of the angle c, we can
determine (measure) the ratios once and produce tables of the values of the
sine, cosine, and tangent for various values of c. Later, if we know the
value of an angle in a right triangle, the tables will tell us the ratio
of the sides of the triangle.
If we know the length of any one side, we can solve for the length of the other sides.
Or if we know the ratio of any two sides of a right triangle, we can
find the value of the angle between the sides.
We can use the tables to solve problems.
Some examples of problems involving triangles and angles include the
forces on a model rocket during powered flight,
the application of torques,
and the resolution of the components of a vector.
Tables of the sine, cosine, and tangent can be used to solve such problems.
In mathematics, the cross product, vector product, or Gibbs' vector product is a binary operation on two vectors in three-dimensional space. It results in a vector which is perpendicular to both of the vectors being multiplied and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering.
If either of the vectors being multiplied is zero or the vectors are parallel then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular for perpendicular vectors this is a rectangle and the magnitude of the product is the product of their lengths. The cross product is anticommutative, distributive over addition and satisfies the Jacobi identity. The space and product form an algebra over a field, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket.
Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on the choice of orientation or "handedness". The product can be generalized in various ways; it can be made independent of orientation by changing the result to a pseudovector, or in arbitrary dimensions the exterior product of vectors can be used with a bivector or two-form result. Also, using the orientation and metric structure just as for the traditional 3-dimensional cross product, one can in n dimensions take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.
The cross product a × b is defined as a vector c that is perpendicular to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.
The cross product is given by the formula a × b = (‖a‖ ‖b‖ sin θ) n, where θ is the measure of the smaller angle between a and b (0° ≤ θ ≤ 180°), ‖a‖ and ‖b‖ are the magnitudes of vectors a and b, and n is a unit vector perpendicular to the plane containing a and b in the direction given by the right-hand rule as illustrated. If the vectors a and b are parallel (i.e., the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0.
The direction of the vector n is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb (see the picture on the right). Using this rule implies that the cross-product is anti-commutative, i.e., b × a = −(a × b). By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector.
Using the cross product requires the handedness of the coordinate system to be taken into account (as explicit in the definition above). If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule and points in the opposite direction.
This, however, creates a problem because transforming from one arbitrary reference system to another (e.g., a mirror image transformation from a right-handed to a left-handed coordinate system), should not change the direction of n. The problem is clarified by realizing that the cross-product of two vectors is not a (true) vector, but rather a pseudovector. See cross product and handedness for more detail.
The cross product is also called the vector product or Gibbs' vector product. The name Gibbs' vector product is after Josiah Willard Gibbs, who around 1881 introduced both the dot product and the cross product, using a dot (a · b) and a cross (a × b) to denote them.
To emphasize the fact that the result of a dot product is a scalar, while the result of a cross product is a vector, Gibbs also introduced the alternative names scalar product and vector product for the two operations. These alternative names are still widely used in the literature.
Both the cross notation (a × b) and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a · b involves multiplications between corresponding components of a and b. As explained below, the cross product can be defined as the determinant of a special 3×3 matrix. According to Sarrus' rule, this involves multiplications between matrix elements identified by crossed diagonals.
Computing the cross product
Coordinate notation
The standard basis vectors i, j, and k satisfy the following equalities:
i × j = k, j × k = i, k × i = j,
which imply, by the anticommutativity of the cross product, that
j × i = −k, k × j = −i, i × k = −j.
The definition of the cross product also implies that
i × i = j × j = k × k = 0 (the zero vector).
These equalities, together with the distributivity and linearity of the cross product, are sufficient to determine the cross product of any two vectors u and v. Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors: u = u1 i + u2 j + u3 k and v = v1 i + v2 j + v3 k.
Their cross product u×v can be expanded using distributivity:
This can be interpreted as the decomposition of u × v into the sum of nine simpler cross products involving vectors aligned with i, j, or k. Each one of these nine cross products operates on two vectors that are easy to handle as they are either parallel or orthogonal to each other. From this decomposition, by using the above mentioned equalities and collecting similar terms, we obtain:
meaning that the three scalar components of the resulting vector s = s1i + s2j + s3k = u × v are
s1 = u2v3 − u3v2, s2 = u3v1 − u1v3, s3 = u1v2 − u2v1.
Using column vectors, we can represent the same result as the column vector with these three entries, [s1; s2; s3].
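These component formulas can be checked against Octave's built-in cross function (a minimal sketch; the example vectors are arbitrary):

u = [2; 3; 4]; v = [5; 6; 7];      % arbitrary column vectors
s = [u(2)*v(3) - u(3)*v(2);        % s1
     u(3)*v(1) - u(1)*v(3);        % s2
     u(1)*v(2) - u(2)*v(1)];       % s3
norm(s - cross(u,v))               % 0: agrees with the built-in cross product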
Matrix notation
The cross product can also be expressed as the formal determinant
u × v = det([ i, j, k ; u1, u2, u3 ; v1, v2, v3 ]).
This determinant can be computed using Sarrus' rule or cofactor expansion. Using cofactor expansion along the first row instead, it expands to
u × v = (u2v3 − u3v2) i − (u1v3 − u3v1) j + (u1v2 − u2v1) k,
which gives the components of the resulting vector directly.
Geometric meaning
The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides: Area = ‖a × b‖ = ‖a‖ ‖b‖ |sin θ|. One can also compute the volume V of a parallelepiped having a, b and c as edges by combining a cross product and a dot product, called the scalar triple product: a · (b × c). Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value. For instance, V = |a · (b × c)|.
Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of ‘perpendicularity’ in the same way that the dot product is a measure of ‘parallelism’. Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The opposite is true for the dot product of two unit vectors.
Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive).
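The following Octave sketch illustrates this relationship (the angle is chosen arbitrarily):

theta = 50*pi/180;                 % angle between two unit vectors
a = [1; 0; 0];                     % unit vector along x
b = [cos(theta); sin(theta); 0];   % unit vector at angle theta to a
dot(a,b) - cos(theta)              % essentially 0: dot product gives the cosine
norm(cross(a,b)) - sin(theta)      % essentially 0: cross product magnitude gives the sine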
Algebraic properties
- If the cross product of two vectors is the zero vector, (a × b = 0), then either of them is the zero vector, (a = 0, or b = 0) or both of them are zero vectors, (a = b = 0), or else both of them are parallel or antiparallel, (a || b), so that the sine of the angle between them is zero, (θ = 0° or θ = 180° and sinθ = 0).
- The self cross product of a vector is the zero vector, i.e., a × a = 0.
- The cross product is anticommutative, a × b = −(b × a),
- distributive over addition, a × (b + c) = (a × b) + (a × c),
- and compatible with scalar multiplication so that (r a) × b = a × (r b) = r (a × b).
- It is not associative, but satisfies the Jacobi identity: a × (b × c) + b × (c × a) + c × (a × b) = 0.
Distributivity, linearity and Jacobi identity show that the R3 vector space together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3).
- The cross product does not obey the cancellation law: a × b = a × c with non-zero a does not imply that b = c. Instead if a × b = a × c, then a × (b − c) = 0.
If neither a nor b − c is zero then from the definition of the cross product the angle between them must be zero and they must be parallel. They are related by a scale factor, so one of b or c can be expressed in terms of the other, for example
for some scalar t.
- If a · b = a · c and a × b = a × c, for non-zero vector a, then b = c, as a · (b − c) = 0 and a × (b − c) = 0,
so b − c is both parallel and perpendicular to the non-zero vector a, something that is only possible if b − c = 0 so they are identical.
- From the geometrical definition the cross product is invariant under rotations about the axis defined by a × b. More generally the cross product obeys the following identity under matrix transformations: (M a) × (M b) = (det M) (M^-1)^T (a × b), where M is an invertible 3×3 matrix and (M^-1)^T is the transpose of its inverse.
- The cross product of two vectors in 3-D always lies in the null space of the matrix with the vectors as rows, since a · (a × b) = 0 and b · (a × b) = 0.
- For the sum of two cross products, the following identity holds: a × b + c × d = (a − c) × (b − d) + a × d + c × b.
The product rule of differentiation applies to the cross product in a similar manner: d/dt (a × b) = (da/dt) × b + a × (db/dt).
This identity can be easily proved using the matrix multiplication representation.
Triple product expansion
The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as a · (b × c).
It is the signed volume of the parallelepiped with edges a, b and c and as such the vectors can be used in any order that's an even permutation of the above ordering. The following therefore are equal: a · (b × c) = b · (c × a) = c · (a × b).
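Both the cyclic symmetry and the determinant interpretation of the scalar triple product can be verified numerically (an illustrative Octave sketch with arbitrary vectors):

a = [1; 2; 3]; b = [0; 1; 4]; c = [2; -1; 1];
t1 = dot(a, cross(b,c));
t2 = dot(b, cross(c,a));
t3 = dot(c, cross(a,b));
[t1 t2 t3]                   % all three values agree (here, 15)
t1 - det([a b c])            % essentially 0: equals the determinant with columns a, b, c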
The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula: a × (b × c) = b (a · c) − c (a · b).
The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is ∇ × (∇ × f) = ∇(∇ · f) − ∇2f,
where ∇2 is the vector Laplacian operator.
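The "BAC minus CAB" expansion can likewise be checked numerically (an Octave sketch with arbitrary vectors):

a = [1; -2; 3]; b = [4; 0; 1]; c = [2; 5; -1];
lhs = cross(a, cross(b,c));
rhs = b*dot(a,c) - c*dot(a,b);      % "BAC minus CAB"
norm(lhs - rhs)                     % essentially 0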
Another identity relates the cross product to the scalar triple product: (a × b) × (a × c) = (a · (b × c)) a.
Alternative formulation
The cross product and the dot product are related by: ‖a × b‖^2 = ‖a‖^2 ‖b‖^2 − (a · b)^2.
The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle θ between the two vectors, as a · b = ‖a‖ ‖b‖ cos θ,
the above given relationship can be rewritten as follows: ‖a × b‖^2 = ‖a‖^2 ‖b‖^2 (1 − cos^2 θ).
Invoking the Pythagorean trigonometric identity one obtains ‖a × b‖ = ‖a‖ ‖b‖ |sin θ|,
which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by a and b (see definition above).
The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product.
Lagrange's identity
Lagrange's identity states that the sum over 1 ≤ i < j ≤ n of (a_i b_j − a_j b_i)^2 equals ‖a‖^2 ‖b‖^2 − (a · b)^2, where a and b may be n-dimensional vectors. In the case n = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components: ‖a × b‖^2 = (a1b2 − a2b1)^2 + (a2b3 − a3b2)^2 + (a3b1 − a1b3)^2.
The same result is found directly by computing the components of the cross product from the formal determinant a × b = det([ i, j, k ; a1, a2, a3 ; b1, b2, b3 ]).
In R3 Lagrange's equation is a special case of the multiplicativity |vw| = |v||w| of the norm in the quaternion algebra.
More generally, the Binet–Cauchy identity for three-dimensional vectors gives (a × b) · (c × d) = (a · c)(b · d) − (a · d)(b · c). If a = c and b = d this simplifies to the formula above.
Alternative ways to compute the cross product
Conversion to matrix multiplication
The vector cross product can also be expressed as the product of a skew-symmetric matrix and a vector: a × b = [a]× b and a × b = ([b]×)^T a, where superscript T refers to the transpose operation, and [a]× is defined by:
[a]× = [ 0, −a3, a2 ; a3, 0, −a1 ; −a2, a1, 0 ].
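A minimal Octave sketch of this skew-symmetric representation (the matrix is written out by hand; the example vectors are arbitrary):

a = [1; 2; 3]; b = [4; 5; 6];
ax = [  0   -a(3)  a(2);
       a(3)   0   -a(1);
      -a(2)  a(1)   0  ];        % the matrix [a]x
norm(ax*b - cross(a,b))          % 0: [a]x times b equals a x b
norm(ax + ax.')                  % 0: [a]x is skew-symmetric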
Also, if a is itself a cross product, a = c × d, then [a]× = d c^T − c d^T.
Proof by substitution: writing a = c × d out in components, forming the matrix [a]×, and expanding d c^T − c d^T entry by entry shows that the two 3×3 matrices agree in every entry, so the left hand side equals the right hand side.
This result can be generalized to higher dimensions using geometric algebra. In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector. In three dimensions bivectors are dual to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors.
This notation is also often much easier to work with, for example, in epipolar geometry.
From the general properties of the cross product it follows immediately that [a]× a = 0 and a^T [a]× = 0^T, and from the fact that [a]× is skew-symmetric it follows that b^T [a]× b = 0.
The above-mentioned triple product expansion (bac-cab rule) can be easily proven using this notation.
The above definition of [a]× means that there is a one-to-one mapping between the set of 3×3 skew-symmetric matrices, also known as the Lie algebra of SO(3), and the operation of taking the cross product with some vector a.
Index notation for tensors
The cross product can alternatively be defined in terms of the Levi-Civita symbol εijk and a dot product ηmi (= δmi for an orthonormal basis), which are useful in converting vector notation for tensor applications: (a × b)^m = η^mi ε_ijk a^j b^k,
in which repeated indices are summed over the values 1 to 3. Note that this representation is another form of the skew-symmetric representation of the cross product, since ([a]×)_ik = ε_ijk a^j.
In classical mechanics: representing the cross-product with the Levi-Civita symbol can cause mechanical symmetries to be obvious when physical systems are isotropic. (Quick example: consider a particle in a Hooke's Law potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the abovementioned Levi-Civita representation).
The word "xyzzy" can be used to remember the definition of the cross product.
The second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z → x. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy sequence.
Cross visualization
Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may help you to remember the correct cross product formula.
If we want to obtain the formula for c_x we simply drop the a_x and b_x from the formula, and take the next two components down: c_x = a_y b_z − a_z b_y.
Note that when doing this for c_y the next two elements down should "wrap around" the matrix so that after the z component comes the x component. For clarity, when performing this operation for c_y, the next two components should be z and x (in that order), while for c_z the next two components should be taken as x and y.
For c_x then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left (a_y) and simply multiply by the element that the cross points to in the right hand column (b_z). We then subtract the next element down on the left (a_z), multiplied by the element that the cross points to here as well (b_y). This results in the formula c_x = a_y b_z − a_z b_y.
We can do this in the same way for c_y and c_z to construct their associated formulas.
Computational geometry
The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon (i.e. the centroid or midpoint) can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle.
In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points p1 = (x1, y1), p2 = (x2, y2) and p3 = (x3, y3). It corresponds to the direction of the cross product of the two coplanar vectors defined by the pairs of points (p1, p2) and (p1, p3), i.e., by the sign of the expression P = (x2 − x1)(y3 − y1) − (y2 − y1)(x3 − x1). In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around p1 from p2 to p3, otherwise a negative angle. From another point of view, the sign of P tells whether p3 lies to the left or to the right of line p1p2.
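A sketch of this orientation test in Octave (the helper name orient2d and the sample points are ours, chosen for illustration):

% sign of (x2-x1)*(y3-y1) - (y2-y1)*(x3-x1), with points given as [x y]
orient2d = @(p1,p2,p3) sign((p2(1)-p1(1))*(p3(2)-p1(2)) - (p2(2)-p1(2))*(p3(1)-p1(1)));
orient2d([0 0], [1 0], [0 1])     %  1: p3 lies to the left of the line p1p2 (positive angle)
orient2d([0 0], [1 0], [0 -1])    % -1: p3 lies to the right (negative angle)
orient2d([0 0], [1 0], [2 0])     %  0: the three points are collinear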
Moment of a force F_B applied at point B around point A is given as: M_A = r_AB × F_B.
The cross product occurs in the formula for the vector operator curl. It is also used to describe the Lorentz force experienced by a moving electrical charge in a magnetic field. The definitions of torque and angular momentum also involve the cross product.
The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.
Cross product as an exterior product
The cross product can be viewed in terms of the exterior product. This view allows for a natural geometric interpretation of the cross product. In exterior algebra the exterior product (or wedge product) of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge dual of the bivector a ∧ b, mapping 2-vectors to vectors: a × b = ⋆(a ∧ b), where ⋆ denotes the Hodge star operator.
This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. Only in three dimensions is the result an oriented line element – a vector – whereas, for example, in 4 dimensions the Hodge dual of a bivector is two-dimensional – another oriented plane element. So, only in three dimensions is the cross product of a and b the vector dual to the bivector a ∧ b: it is perpendicular to the bivector, with orientation dependent on the coordinate system's handedness, and has the same magnitude relative to the unit normal vector as a ∧ b has relative to the unit bivector; precisely the properties described above.
Cross product and handedness
When measurable quantities involve cross products, the handedness of the coordinate systems used cannot be arbitrary. However, when physics laws are written as equations, it should be possible to make an arbitrary choice of the coordinate system (including handedness). To avoid problems, one should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two vectors, one must take into account that when the handedness of the coordinate system is not fixed a priori, the result is not a (true) vector but a pseudovector. Therefore, for consistency, the other side must also be a pseudovector.
More generally, the result of a cross product may be either a vector or a pseudovector, depending on the type of its operands (vectors or pseudovectors). Namely, vectors and pseudovectors are interrelated in the following ways under application of the cross product:
- vector × vector = pseudovector
- pseudovector × pseudovector = pseudovector
- vector × pseudovector = vector
- pseudovector × vector = vector.
So by the above relationships, the unit basis vectors i, j and k of an orthonormal, right-handed (Cartesian) coordinate frame must all be pseudovectors (if a basis of mixed vector types is disallowed, as it normally is) since i × j = k, j × k = i and k × i = j.
Because the cross product may also be a (true) vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a (true) vector and the other one is a pseudovector (e.g., the cross product of two vectors). For instance, a vector triple product involving three (true) vectors is a (true) vector.
A handedness-free approach is possible using exterior algebra.
There are several ways to generalize the cross product to higher dimensions.
Lie algebra
The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory.
For example, the Heisenberg algebra gives another Lie algebra structure on R3: in the basis {x, y, z}, the product is [x, y] = z, [x, z] = [y, z] = 0.
The cross product can also be described in terms of quaternions, and this is why the letters i, j, k are a convention for the standard basis on R3. The unit vectors i, j, k correspond to "binary" (180 deg) rotations about their respective axes (Altmann, S. L., 1986, Ch. 12), said rotations being represented by "pure" quaternions (zero scalar part) with unit norms.
For instance, the above given cross product relations among i, j, and k agree with the multiplicative relations among the quaternions i, j, and k. In general, if a vector [a1, a2, a3] is represented as the quaternion a1i + a2j + a3k, the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors.
Alternatively, using the above identification of the 'purely imaginary' quaternions with R3, the cross product may be thought of as half of the commutator of two quaternions.
A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from Hurwitz's theorem that the only normed division algebras are the ones with dimension 1, 2, 4, and 8.
Wedge product
In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the wedge product, which has similar properties, except that the wedge product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the wedge product in three dimensions after using Hodge duality to map 2-vectors to vectors. The Hodge dual of the wedge product yields an (n−2)-vector, which is a natural generalization of the cross product in any number of dimensions.
The wedge product and dot product can be combined (through summation) to form the geometric product.
Multilinear algebra
In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor, specifically a bilinear map) obtained from the 3-dimensional volume form,[note 2] a (0,3)-tensor, by raising an index.
In detail, the 3-dimensional volume form defines a product V × V × V → R by taking the determinant of the matrix given by these 3 vectors. By duality, this is equivalent to a function V × V → V* (fixing any two inputs gives a function V → R by evaluating on the third input) and in the presence of an inner product (such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphism V → V*, and thus this yields a map V × V → V, which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index".
Translating the above algebra into geometry, the function "volume of the parallelepiped defined by (a, b, –)" (where the first two vectors are fixed and the last is an input), which defines a function V → R, can be represented uniquely as the dot product with a vector: this vector is the cross product a × b. From this perspective, the cross product is defined by the scalar triple product, (a × b) · c = det(a, b, c).
In the same way, in higher dimensions one may define generalized cross products by raising indices of the n-dimensional volume form, which is a (0, n)-tensor. The most direct generalizations of the cross product are to define either:
- a (1, n−1)-tensor, which takes as input n − 1 vectors, and gives as output 1 vector – an (n − 1)-ary vector-valued product, or
- a (n−2, 2)-tensor, which takes as input 2 vectors and gives as output a skew-symmetric tensor of rank n − 2 – a binary product with rank n − 2 tensor values. One can also define (k, n−k)-tensors for other k.
These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity.
The (n − 1)-ary product can be described as follows: given n − 1 vectors v1, ..., vn−1 in Rn, define their generalized cross product vn = v1 × ... × vn−1 as:
- perpendicular to the hyperplane defined by the vi,
- magnitude is the volume of the parallelotope defined by the vi, which can be computed as the Gram determinant of the vi,
- oriented so that v1, ..., vn−1, vn is positively oriented.
This is the unique multilinear, alternating product which evaluates to e1 × ... × en−1 = en, e2 × ... × en = e1, and so forth for cyclic permutations of indices.
In coordinates, one can give a formula for this (n − 1)-ary analogue of the cross product in Rn as the formal determinant whose first n − 1 rows are the components of v1, ..., vn−1 and whose last row consists of the basis vectors e1, ..., en.
This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1,...,vn-1,Λ(v1,...,vn-1)) have a positive orientation with respect to (e1,...,en). If n is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is even, however, the distinction must be kept. This -ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the arguments.
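As an illustration of this coordinate formula, the following Octave sketch (the function name ncross is ours) computes the (n − 1)-ary product by expanding the determinant along the row of basis vectors, and checks the perpendicularity property for n = 4:

function w = ncross(V)
  % V is an (n-1) x n matrix whose rows are the input vectors v1, ..., v(n-1)
  [m, n] = size(V);
  w = zeros(n, 1);
  for i = 1:n
    cols = [1:i-1, i+1:n];                 % delete column i
    w(i) = (-1)^(n+i) * det(V(:, cols));   % cofactor belonging to the basis vector e_i
  end
end

V = [1 2 3 4; 0 1 0 2; 3 1 1 0];   % three arbitrary vectors in R^4, one per row
w = ncross(V);
norm(V*w)                          % essentially 0: w is perpendicular to every input vector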
In 1773, Joseph Louis Lagrange introduced the component form of both the dot and cross products in order to study the tetrahedron in three dimensions. In 1843 the Irish mathematical physicist Sir William Rowan Hamilton introduced the quaternion product, and with it the terms "vector" and "scalar". Given two quaternions [0, u] and [0, v], where u and v are vectors in R3, their quaternion product can be summarized as [−u·v, u×v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education.
In 1878 William Kingdon Clifford published his Elements of Dynamic which was an advanced text for its time. He defined the product of two vectors to have magnitude equal to the area of the parallelogram of which they are two sides, and direction perpendicular to their plane.
Oliver Heaviside in England and Josiah Willard Gibbs, a professor at Yale University in Connecticut, also felt that quaternion methods were too cumbersome, often requiring the scalar or vector part of a result to be extracted. Thus, about forty years after the quaternion product, the dot product and cross product were introduced—to heated opposition. Pivotal to (eventual) acceptance was the efficiency of the new approach, allowing Heaviside to reduce the equations of electromagnetism from Maxwell's original 20 to the four commonly seen today.
Largely independent of this development, and largely unappreciated at the time, Hermann Grassmann created a geometric algebra not tied to dimension two or three, with the exterior product playing a central role. William Kingdon Clifford combined the algebras of Hamilton and Grassmann to produce Clifford algebra, where in the case of three-dimensional vectors the bivector produced from two vectors dualizes to a vector, thus reproducing the cross product.
The cross notation and the name "cross product" began with Gibbs. Originally they appeared in privately published notes for his students in 1881 as Elements of Vector Analysis. The utility for mechanics was noted by Aleksandr Kotelnikov. Gibbs's notation and the name "cross product" later reached a wide audience through Vector Analysis, a textbook by Edwin Bidwell Wilson, a former student. Wilson rearranged material from Gibbs's lectures, together with material from publications by Heaviside, Föppl, and Hamilton. He divided vector analysis into three parts:
First, that which concerns addition and the scalar and vector products of vectors. Second, that which concerns the differential and integral calculus in its relations to scalar and vector functions. Third, that which contains the theory of the linear vector function.
Two main kinds of vector multiplications were defined, and they were called as follows:
- The direct, scalar, or dot product of two vectors
- The skew, vector, or cross product of two vectors
Several kinds of triple products and products of more than three vectors were also examined. The above mentioned triple product expansion was also included.
See also
- Multiple cross products – Products involving more than three vectors
- Cartesian product – A product of two sets
- × (the symbol)
- Exterior algebra
- Here, “formal" means that this notation has the form of a determinant, but does not strictly adhere to the definition; it is a mnemonic used to remember the expansion of the Cross product.
- By a volume form one means a function that takes in n vectors and gives out a scalar, the volume of the parallelotope defined by the vectors: This is an n-ary multilinear skew-symmetric form. In the presence of a basis, such as on this is given by the determinant, but in an abstract vector space, this is added structure. In terms of G-structures, a volume form is an -structure.
First law of motion: “Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed on it.”
Second law of motion: “The change of motion is proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed.”
Third law of motion: “To every action there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.”
Law of gravitation: material objects always attract one another in proportion to the product of their masses and inversely as the square of the distance separating them.
Newton’s laws describe how material objects move and interact, and since we postulate matter in the form of material objects with rest mass, we need only see how the regularities described by Newton’s laws of motion would be explained on the assumption that kinetic energy and potential energy are forms of matter as well. That requires making further assumptions about the specific essential natures of these forms of matter and about space, but as we shall see, it affords genuine, even illuminating, ontological explanations of some aspects of classical physics.
According to our working hypothesis, the motion of a material object with rest mass is due to the kinetic matter attached to it. The kinetic matter must coincide with the same part of space as the material object itself, but in a way that that moves the material object across space as time passes. Each speed and direction of motion for any given material objects would involve a (quantitatively) different variety of kinetic matter (which could be explained ontologically by aspects of how kinetic matter coincides with space, such as its direction and quantity).
Newton’s first law of motion. Newton’s first law is an immediate consequence of this ontological assumption about kinetic matter. Since the kinetic matter that makes the material object move is itself a substance that endures through time with the same essential nature, the object in motion will continue moving at the same speed and in the same direction (unless it interacts with another bit of matter).
What does not change according to the first law of motion is called “velocity,” because it includes two aspects of the object’s motion, its speed and its direction. That is why we assume that, for any given material object, each different speed and each different direction requires a different variety of kinetic matter. The velocity is not the kinetic matter, but just a property of the material object with the kinetic matter, that is, an aspect of the substances constituting the object with rest mass together with its kinetic matter and how both are contained by space. (The three dimensional structure of space makes it possible to represent any velocity mathematically as a certain speed in each of any three mutually perpendicular directions. Quantities that depend on direction in this way are called “vectors.”)
Newton’s first law must be true, if the motion of objects is due to kinetic matter, because all the ways that an object might be thought to change its speed or direction on its own are ontologically impossible. A change in its motion would require kinetic matter of one variety to come into existence and another variety would have to go out of existence as time passes, which substances cannot do. Or it would require the variety of kinetic matter to change its essential nature, which no form of matter can do on its own. Or it would require space to contain kinetic matter in a different way at different locations, which is not compatible with the uniformity of space.
To be sure, in order to explain motion as a form of matter that connects material objects to space in a certain way, the objects must have an absolute velocity, that is, a certain velocity in absolute space. That may seem doubtful in contemporary physics, but it is just what spatiomaterialism entails about the nature of space and that is what is at issue in this ontological explanation of physics.
Notice that the assumption that an object’s velocity is due to its kinetic matter solves a problem that motion otherwise poses for any ontology that postulates only substances enduring through time. The problem was first posed by Zeno as a paradox about motion. He pointed out that, at each moment, an object must be at rest (as we assume by holding that nothing exists but the present), and he asked, How is motion even possible in that case? If motion is simply how location changes as time passes, motion does not really exist, because the object always has only one location at each moment as it is present. This is not just a puzzle about the continuousness of time and space, because holding that to move is just to have a location that varies continuously with time leaves a problem about why the moving object has a different location the next moment, whereas the object at rest does not. What makes the object in motion different from the object at rest at each moment? To be sure, it is possible to simply assume that the essential nature of all material objects includes the temporally complex property of changing locations again, if it did so the last moment. That is what materialism does in this case (as in the case of every other basic law of physics), and it is not very satisfying, because there is nothing to distinguish the moving object from the one at rest at any moment except where each was the previous moment (which is not something that exists at that moment). If, however, motion is constituted by a bit of kinetic matter that exists in addition to the object with rest mass, then motion is actually a substance that endures through time, and thus, what makes the moving object at any moment different from an object at rest is something that exists at that moment (not just the fact that it has a different position the previous moment).
The first law of motion allows for velocity to change when the material object interacts with another object, and given the forms of matter we are postulating, the only way that a material object can change velocity is for kinetic matter to be transferred to it or from it or both. Somehow the object must come to have a different variety of kinetic matter attached to it. That is basically what interactions do to objects with rest mass. In such an interaction, Newton’s laws say that the object is subject to a force, and our working hypothesis implies that the exertion of a force on the object somehow transfers kinetic matter to and/or from it.
Interactions are something that we expect, given our assumption that material objects are a form of matter that cannot occupy the same place at the same time, because if they can move, they can move to the same location at the same time and something must keep them from being contained by the same part of space. The simplest kind of interaction is a collision of material objects that is elastic, that is, in which nothing changes but the velocities of the material objects that collide. Though collisions of ordinary material objects are mediated by electromagnetic interactions, we can, for present purposes, abstract from the nature of the forces and consider only what happens when material objects collide. We know that they exchange kinetic matter. But we do not know how much is transferred or what effect it has on their velocities. The regularities about such transfers of kinetic matter are what is described by Newton’s second and third laws of motion.
Newton’s second law of motion. Newton’s second law holds that the exertion of a force is what changes the velocity of a material object. Since forces are exerted by other objects, the force on any object has some direction or other, which determines in some way the direction in which the object’s speed changes. It also has a determinate strength and its action on the object has a certain quantity. But how much an object’s speed changes in the direction of any given force depends on another factor, its rest mass, or the quantity of matter embodied in it. That is, what changes when a material object is subject to a force is its momentum, or the product of its velocity and its rest mass.
In the case of material objects composed of many parts with the same rest mass, our working ontological hypothesis offers an explanation of the relevance of rest mass in determining the change of velocity. In order for the composite object to move in a certain way, each of the objects of which it is composed (each “atom,” if you will) must move in the same way (assuming that the parts have unchanging spatial relations to one another). Since each part must be moved across space by its own bit of kinetic matter, a force can change the velocity of the whole only by changing the velocity of each part in the same way. Thus, the change in velocity caused by a force varies inversely with the total rest mass of the material object. It must be spread out among all the parts, so to speak. For example, an object with twice as much rest mass has half as much change in velocity, if subjected to the same force. In other words, what changes is not merely its velocity, but its momentum, the product of its velocity and its rest mass.
The second law of motion also holds in the case of elementary material objects with different rest masses. But without a deeper ontological explanation of the nature of kinetic matter and material objects with rest mass, that regularity can only be assumed as part of the essential natures of those forms of matter.
Velocity is not a measure of the amount of kinetic matter, because the change caused by the transfer of kinetic matter to or from an object depends on its rest mass. But it might seem that momentum is the measure of kinetic matter, since it is what changes when kinetic matter is transferred. However, momentum, like velocity, is just a property of the material object with kinetic matter, and we can begin to see why by considering the third law of motion.
Newton’s third law of motion. Newton’s third law describes a more inclusive regularity than the second, for it includes the object that is the source of the force, describing how it is affected as well. This law holds that the action of one object on another is opposed by an equal and opposite action of the other object back on the first. That is, every action of one object on another is actually a symmetrical interaction of the two objects involved. And since what the action changes is momentum, this law says that the change in the momentum of one object is equal and opposite to the change in momentum of the other object. Thus, Newton’s third law of motion entails the conservation of momentum. That is, in any interaction, the sum of the products of the velocity and mass of all the objects involved in the interaction does not change in any direction regardless how the objects may interact.
The conservation of momentum may make it seem that momentum must be the measure of the total quantity of kinetic matter involved. Suppose, for example, that two equally massive objects moving toward one another at the same speed were to collide. Given our working ontological hypothesis, we might try to understand why the two objects rebound from one another by thinking of the interaction as each object transferring its kinetic matter to the other, for that would also explain why both objects come out with velocities in the opposite direction. Each acquires the other object’s kinetic matter. And if the objects had different rest masses and different velocities, this would even explain how much the velocity of each changes.
Momentum cannot, however, be the measure of the amount of kinetic matter, because it is a quantity that depends on the direction of the motion, whereas the quantity of kinetic matter does not. (In other words, momentum is a “vector quantity,” whereas kinetic energy, as a substance, must be a “scalar quantity,” which does not depend on the direction of motion.) To illustrate the problem, suppose that two objects colliding with equal and opposite momentums do not rebound from one another, but simply come to a stop. The latter is compatible with Newton’s third law of motion, because the change in the momentum of one is still equal and opposite to the change in momentum of the other. Each loses an equal and opposite momentum. Action and reaction are symmetrical. But if momentum were the measure of kinetic matter, it would mean that their kinetic matter simply goes out of existence, for their momentums cancel out. And since that is impossible for a substance, momentum cannot be the measure of kinetic matter.
It is no great surprise, of course, that momentum is not the measure of the quantity of kinetic matter on this ontological explanation, for we postulated the existence of kinetic matter in the first place in order to account for kinetic energy. But the foregoing example does bring out the difference between momentum and kinetic energy. It is currently explained only mathematically: in Newtonian physics, momentum is the product of an object’s rest mass and its velocity (mv), whereas its kinetic energy is one-half the product of its rest mass and the square of its velocity (1/2 mv2).
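A small numerical illustration of this quantitative difference (an Octave sketch; the mass and velocities are arbitrary):

m = 2;                  % rest mass
v = [3 6];              % two speeds, the second twice the first
p = m*v                 % momentum m*v doubles when the speed doubles: 6, 12
KE = 0.5*m*v.^2         % kinetic energy (1/2)m*v^2 quadruples: 9, 36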
It is a subtle difference, which was not obvious even to classical physicists at first. The difference was not recognized by Cartesians, and Leibniz was so struck by kinetic energy being different from momentum, or mere motion, that he took the existence of kinetic energy as evidence of a vis viva, a “force of life” in the object, which helped inspire his belief that atoms are really “monads,” or minds.
The ontological difference between kinetic energy and momentum is that the former is the quantity of a form of matter that can be attached to objects with rest mass and the latter is a quantitative property that material objects have when kinetic matter is attached. Momentum is just an aspect of those two kinds of material substances as they are contained by space, an aspect that depends on the direction of the motion in space. Newton’s second and third laws of motion describe the regularity about how that property changes when material objects interact, including the conservation of momentum. The kinetic energy is, however, part of the substance constituting the object in motion, and so it is conserved because it is a substance.
This is just the beginning of an ontological explanation of the difference between kinetic energy and momentum. Though we can see that they are different, it does not explain the quantitative relationship between them, that is, why kinetic energy varies with the square of velocity, while momentum varies with velocity. That can be explained only later, when we take up a deeper ontological explanation, the quantum theory of matter. There is a more specific nature of kinetic matter that entails momentum being related to kinetic energy as the velocity to the square of velocity.
In the foregoing case, where colliding objects with equal and opposite momentums simply stop, the collision is not elastic, that is, something changes besides the motion of those objects. Instead of dropping out of existence, the kinetic energy is converted into another form of matter (such as potential energy in new forces being exerted among its parts) or transferred to other objects (such as the kinetic energy of the parts of the objects, that is, becoming heat).
Newton’s law of gravitation. Newton’s law of gravitation holds that material objects exert an attractive force on one another that is proportional to the product of their (rest) masses and inversely proportional to the square of the distance between them. But since each object exerts such a force on the other, an object must have a gravitational field around it even when there are no other objects in its neighborhood. There is, in other words, a gravitational force at every location in the space around the material object. Those forces are radially symmetric around the object itself, and their strength declines with the square of the distance from the object.
The gravitational field is explained ontologically by postulating matter in the form of gravitational matter, which is spread out in space around the material object exerting the gravitational force, though its quantity is included, along with matter in some other (yet to be described) forms, as the rest mass of the material object. This affords an obvious ontological explanation of many of the aspects described by Newton’s law of gravitation. Gravitational forces are directed toward the object, since that is the center of the rest mass of the material object that spreads gravitational matter out in space. The forces are radially symmetric, because the object is located in three dimensional space. And the strength of the force falls off with the square of the distance, because that is how fast space spreads out sideways as you move away from the source of the force.
The force of gravity is not given an ontological explanation in classical physics. Instead, it is usually described as just a disposition at each point in space to exert a precise, mathematically described force on any material object (with a certain mass), if it were located at that point. Talk of “dispositions” is a way of predicating regularities of objects as if regularities were just properties of the objects. But that is to leave those regularities unexplained. There is no alternative in classical physics, because it assumed that gravity involves action at a distance (which is implicitly to deny the reality of the space across which it is supposed to act). Talk of gravitation as a disposition is a way of being skeptical about the reality of such forces as anything beyond their effects. This ontological problem was eliminated by Einstein’s general theory of relativity, and that discovery is what we are anticipating by including gravitational energy as a form of matter in this explanation of the truth of classical physics.
Gravitational matter helps explain the truth of the principle of the conservation of mass and energy, however, only by being counted as a negative quantity, that is, as potential energy. The maximum quantity of potential energy is zero, because according to our ontological explanation of that accounting practice, potential energy is actually part of the matter that is already counted in the rest mass of the material object whose forces are a potential source of kinetic energy.
This theory calls for a deeper explanation of how the matter appears both as a material object, with a definite location and rest mass, and at the same time as a force field spread out in the space around that center of mass. We will consider such a theory later, but for now, we must simply recognize that the rest mass includes both forms of matter. And we can use the notion of gravitational potential energy to illustrate further the puzzling relationship between momentum and kinetic energy.
Gravitational forces exist as fields in which forces are exerted continuously over time and material objects change momentum continuously as they move through them. The way in which material objects interact by gravitational forces can be described as a conversion between potential and kinetic energy, and since such conversions are also a way of explaining the interaction of material objects by electric and magnetic forces, I will describe some of its features by considering what happens to a ball thrown upwards in a (nearly) constant gravitational field, such as near the surface of the earth.
The ball has an initial momentum when it leaves the hand that is proportional to its upward velocity. But since its momentum is constantly decreasing as the result of the constant downward gravitational force on it, there is a point at which the ball comes to a stop and starts falling again, after which its downward velocity increases until we catch it. The ball had kinetic energy when it left our hand, but at the top of its trajectory, it has lost all its kinetic energy. And by the time we catch it, the ball has regained kinetic energy. Since kinetic energy is a form of matter, it never simply goes out of existence or comes into existence, but merely changes form. It is converted into potential energy, which the ball has because it is located in a way that enables the gravitational force to accelerate it over some distance, that is, can acquire kinetic energy from those forces as the object moves through the gravitational force field. If we think of it ontologically, we see the ball losing kinetic matter as it rises, but since the distance across which the gravitational force can accelerate the ball increases, it gains potential energy (which increases the rest masses of both ball and earth). And when it falls, it loses potential energy (decreasing rest masses) and acquires kinetic energy. Since the ball has lost all its kinetic energy at the top of its trajectory, when it is at rest, its potential energy at that point must be equal to its kinetic energy at the beginning and end of its trip. The potential energy depends on two factors, the force exerted by the earth on the ball and the ball’s location in that force field. Both are needed to accelerate the ball and give it kinetic energy, and since the force is nearly the same at every location, the potential energy turns out to be proportional to the height to which it rises, that is, to the distance it can fall in the (constant) gravitational field.
This allows us to see, once again, the difference between momentum and kinetic energy. How much faster would we have to throw the ball upward in order for the point at which it stops and starts falling again to be twice as high? It is not necessary to double its velocity, as we would find if we tried. Instead, the initial velocity needs to be increased only by the square root of two (or about 1.4). The reason is that the ball consumes kinetic energy in rising to a certain height in the gravitational field, not momentum, and since kinetic energy varies with the square of the velocity, it is not necessary to double the initial velocity to double the kinetic energy. (Likewise the time it takes will also increase only by a factor of the square root of two, since gravity changes its momentum by the same amount each unit of time and the amount of momentum to be changed is increased only by the square root of two.)
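A quick numerical check of this claim, using nothing more than the familiar constant-gravity relation for the height at which a thrown ball stops (the kinetic energy mv²/2 equals the potential energy consumed, so h = v²/2g); the particular numbers are only illustrative:

```python
import math

g = 9.8  # m/s^2, taken as constant near the earth's surface

def max_height(v0: float) -> float:
    # Height at which an object thrown straight up with speed v0 comes to rest.
    return v0 ** 2 / (2 * g)

v0 = 10.0
print(max_height(v0))                       # about 5.1 m
print(max_height(v0 * math.sqrt(2)))        # about 10.2 m -- twice as high for 1.4 times the speed
print(max_height(2 * v0) / max_height(v0))  # 4.0 -- doubling the speed quadruples the height
```

The last line shows the converse point: doubling the initial speed quadruples the height reached, because the kinetic energy supplied is four times as great.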
The conversion between kinetic and potential energy is basic to classical physics, though the quantities become more complex when we take into account that gravitational forces are not constant, but have a strength that varies inversely with the distance from the center of gravity. But we need not consider all the complexities of the quantitative relations (though these ontological causes must be able to explain them in the end), because we are merely trying to see what is involved in an ontological explanation of the basic laws of classical physics. We have seen how such ontological causes would make Newton’s laws of motion true, and spatiomaterialism is not trivial, like materialism, considering that it implies the existence of kinetic matter (and begins, at least, an explanation of the relationship between momentum and kinetic energy). The one form of matter that has not been described is electromagnetic waves, and that brings us to the explanation of Maxwell’s laws of electromagnetism.
Maxwell’s laws of electromagnetism. The other basic set of laws making up classical physics at the end of the 19th Century were Maxwell’s four laws of electromagnetism. They describe the electric and magnetic forces and how they interact, and these forces can be explained in much the same way as gravitation, that is, as a form of matter that coincides with space by being spread out in space like a field, and yet contained in the rest mass of material objects with electric charges.
Electromagnetism is more complex than the gravitational force, because there are two forces, electric and magnetic, which interact with one another, and there are two opposite electric forces that material objects can have, positive and negative.
Maxwell’s great triumph was to show how the interaction of the electric and magnetic forces can couple them in a way that propagates both across space at a fixed velocity, that is, as electromagnetic waves propagating at the velocity of light. Since electromagnetic waves exist independently of all the other forms of mass and energy (and, thus, the other three forms of matter, on this ontological account), there is less room for doubt about these forces being a form of matter.
It is now known that electromagnetic interactions mediate all the non-gravitational interactions among molecules, among atoms in molecules, and even between electrons and protons in atoms. Even the elastic collisions that we took for granted in discussing Newton’s laws of motion are mediated on the micro level by interactions involving both electric and magnetic forces among objects with electric charges. But all these interactions involve events with a unit-like nature which was unexplained until the discovery of quantum mechanics, and we will take them up later (in Change: Quantum mechanics.)
At this point, I will discuss aspects of the regularities described by Maxwell’s laws in an order that adds up to an explanation of electromagnetic waves, and then I will discuss how spatiomaterialism can explain such waves ontologically.
Electric charge. One of Maxwell’s laws describes the electric forces that can be exerted by material objects. When a material object has an electric charge, it exerts a radial force surrounding the center of rest mass whose strength declines with the square of the distance. This is like the force of gravity, except that the electric force acts on other objects because of their electric charges, rather than their mass. And unlike the gravitational force, the electric force can be either attractive or repulsive, depending on whether the other object has an opposite or same electric charge, respectively. The electric force can give such objects kinetic energy (or become another form of energy, such as an electromagnetic wave), and so it is counted as potential energy. But once again, the maximum potential energy is zero, making it a negative quantity when some of it has been consumed.
Spatiomaterialism can explain potential electrical energy ontologically as some of the matter that is counted in the rest masses of the material objects exerting the electric forces. Thus, when potential energy is consumed, the rest masses of the charged objects are less. If we think of the potential energy as a form of electromagnetic matter that is spread out in space around the objects with the electric charges, we can see why the quantity of potential energy varies with the matter.
Objects with opposite charges attract, and their potential energy is maximum when they are far apart from one another, because their electric fields more nearly approximate spheres (of forces declining with the square of the radius), which requires the maximum quantity of electromagnetic matter to constitute them. But when opposite charges are next to one another, their electric fields are mostly neutralized, and the electric field they jointly set up is deformed in a way that requires less electromagnetic matter. In this case, their total rest mass is less than if they were independent of one another.
Objects with like charges repel, and their potential energy is maximum when they are close to one another, because instead of neutralizing one another, their electric fields oppose one another. Though holding them together yields an electric force that is twice as strong as the radial force field they jointly set up, additional electromagnetic matter is required for the two charged particles to have a force repelling them from one another. In this case, their rest masses are greater than they would be if the objects were at a distance from one another.
In either case, in the equations describing these situations, the potential energy is represented as zero when it is maximum, and thus, what is actually a loss of rest mass, which comes from consuming potential energy and converting electromagnetic matter into other forms of matter, is counted as negative potential energy.
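In ordinary textbook electrostatics (a standard formula, not something drawn from this passage itself), the same zero-at-infinity accounting appears in the Coulomb potential energy of two point charges; a minimal sketch:

```python
K = 8.988e9    # Coulomb constant, N*m^2/C^2
e = 1.602e-19  # elementary charge, C

def coulomb_potential_energy(q1: float, q2: float, r: float) -> float:
    # Potential energy of two point charges a distance r apart, defined to be zero
    # at infinite separation.
    return K * q1 * q2 / r

# Opposite charges: U is negative and approaches its maximum (zero) as they separate.
print(coulomb_potential_energy(+e, -e, 1e-10))  # about -2.3e-18 J
print(coulomb_potential_energy(+e, -e, 1e-9))   # about -2.3e-19 J, closer to the zero maximum
# Like charges: U is positive and grows as they are pushed together.
print(coulomb_potential_energy(+e, +e, 1e-10))  # about +2.3e-18 J
```

On this convention, bringing opposite charges together releases energy (the potential energy becomes more negative), which is the bookkeeping the passage above interprets ontologically.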
The electric field is also more complex than gravitation in another way because of its interaction with the magnetic force. It affects the motion of a charged object in an electric field. For example, if an electric field is set up by a material object too massive to move much, a charged object that is accelerated by it will increase its velocity not only in the direction of the force, but also in a direction perpendicular to both the electric force and the direction of its own motion in the electric field. That is the work of the magnetic force. The magnetic force on the charged object is a function of its velocity through the electric field as well as the strength of the electric field. This effect of electric forces is not mentioned in this first law, but is a consequence of another of Maxwell’s laws.
No magnetic charges. The second law holds that there is no material object with a magnetic charge, even though there are magnetic forces. A material object with a magnetic charge would have a radial force surrounding its center of rest mass which declines with the square of the distance. Instead, as it turns out, magnetic forces occur in fields in which they are all directed around a closed loop, such as a circle.
According to another law, as mentioned above, the magnetic force can arise because of the motion of a material object with an electric charge. For example, when electric charges are moving in a certain direction through space, they set up a magnetic field in which the magnetic forces are aligned in a circle around their direction of motion. (Such a circular field is set up even when the moving electric charges are neutralized locally by opposite charges, as in a wire in which a current is flowing, and the net strength of the electric force is not changing at any point in the surrounding space.)
Coupling of magnetic and electric forces. The two remaining aspects of the regularities described in Maxwell’s equations explain electromagnetic waves. One holds that a change in the magnetic field causes a circular electric force around the direction of the magnetic forces. The other holds that a change in the electric field causes a circular magnetic field around the direction of the electric forces. In both cases, the strength of the field being set up varies with how fast the first field changes (and thus indirectly on the strength of the forces). But the directions are reversed (so that an increasing electric force causes a magnetic force, while an increasing magnetic force causes an electric force in the opposite direction). Furthermore, the change in the strength of each force generates a force of the other kind that is related to it spatially in a certain direction, so that changes in the two forces are coupled as a wave that propagates across space at the velocity of light.
An impression of how electromagnetic waves propagate can be gathered by considering how the motion of electric charges generates them. Consider, for example, a current of electrically charged objects in a wire that is changing direction. The current sets up a magnetic force circling the wire, but as the electric charges slow down, the magnetic force declines (because the rate of change in location of the electric charges becomes lower). The decline in the magnetic force field causes an electric force that circles it. But the change in that electric force causes, in turn, a magnetic field around its direction, which is in the opposite direction of the first magnetic field. And the change in the second magnetic field then causes an electric field, this time in the opposite direction. And finally its change causes a magnetic field that is like the one caused by the electric charges in the wire, except that it is located a fixed distance away from the wire which depends on the velocity of light. Thus, the changes in the two forces are coupled in a way that propagates across space at the velocity of light as an electromagnetic wave. And a steady succession of such waves is generated as long as the current in the wire continues to oscillate. That is basically how antennas send electromagnetic waves.
Electromagnetic waves are a form of energy counted in the principle of the conservation of mass and energy, and though the quantitative details are not relevant here, we should consider what our working hypothesis implies about the nature of "electromagnetic matter." The matter involved in these waves is similar to the matter that makes up the electric field of a material object with an electric charge, except that in the electromagnetic wave, the electric force is changing and the changes couple it with a magnetic force that also changes. The forces interact in such a way that they go through complete cycles, putting them in a position to do the same thing over and over again. But the forces they generate are so related to one another in space that the wave moves across space over time at a certain fixed velocity, that is, the velocity of light.
The matter constituting electromagnetic waves may not be as different from the electromagnetic matter constituting electric charges as this contrast makes them appear. According to current quantum theory, material objects with electric charges also have a spin angular momentum. Since that is a magnetic force, it suggests that the electric charge may actually be an electric force that is changing cyclically by somehow spinning around an axis. That possibility will lead us to speculate (when discussing quantum mechanics and the basic particles) that the opposite electric charges (positive and negative) differ from one another by being in opposite phases of their cycles wherever they are located in space.
Inherent motion in space. Maxwell deduced the velocity of light in a vacuum from measurable constants mentioned in his laws, and since classical physics assumed that space is absolute, it could hope to explain this implication as the result of electric and magnetic forces being exerted on an extremely elastic substance that was assumed to be at rest in absolute space. They called it the “luminiferous ether” (or “ether,” for short). Since the ether was supposed to be a kind of matter, it seemed plausible to explain the propagation of electric and magnetic forces mechanically, as an interaction between charged particles and the ether, on the model of waves of forces in ordinary material objects. That project did not work out, but that does not mean that space cannot be playing a similar role in the motion of electromagnetic waves.
In recognizing that space is a substance, spatiomaterialism departs from classical physics as well as from materialism. Though classical physics assumed that space is absolute, it did not take space to be a substance that could interact with bits of matter in any way other than providing all the locations where they could move or be located. In particular, space was not supposed to affect the motion of bits of matter, at least, not in the way other bits of matter can. But since spatiomaterialism has independent reasons for believing in the existence of space as a substance enduring through time (that is, in addition to presentism, reasons deriving from the recognition of the validity of ontological-cause explanations and inferring to the best ontological-cause explanation of the natural world), it has no reason to doubt that space can interact with bits of matter in ways that are quite comparable to the interactions of bits of matter in space. Thus, spatiomaterialism can use space to explain the velocity of light without having to postulate the existence of the ether as an additional kind of matter that coincides with space. We can take talk about the ether to be referring to an aspect of space as a substance. That is what we will do by taking space itself to be the medium of light transmission.
To be the medium of light transmission, space must have an aspect by which it interacts with electric and magnetic forces and carries them across space as electromagnetic waves at a certain velocity. In order to explain how space does so, I will assume that there is an “inherent motion in space.” By “inherent motion,” I mean a further relationship among the parts of space, beyond the geometrical relations we have already assumed, which involves their endurance through time. We have assumed that the parts of space are particular substances, that is, so that each point has an existence that is distinct from all the others and each point endures, like any substance, through time, never coming into existence nor going out of existence. But since only the present moment exists, only one moment in the history of each part of space exists, and that moment in the history of all the parts of space always occurs at the same time. That is how these substances exist together as a world, and it is the wholeness of space that relates the bits of matter it contains as parts of the same world. This temporal aspect of the nature of the parts of space is the ontological foundation for a further relationship among the parts of space. What I am calling the "inherent motion of space" (as our substitute for the "luminiferous ether") is a spatio-temporal relationship among the parts of space.
Such a temporal aspect to space is not only plausible, but also required by the role of space in constituting what happens. If the parts of space did not have a spatio-temporal relationship to one another, they could not affect one another as time passes. Nor could they enable bits of matter to affect one another.
The geometrical relations among the parts of space explain which parts of space can be affected by any other given part, namely, those nearby, then those next to it, and so on. But in order for a change occurring at any one part of space to affect another part of space, the other part of space must change at a later moment. If the effect were immediate, the effect would not be distinct from the cause, and they could not act on one another like particular substances enduring through time. Space would interact with bits of matter as a whole. Thus, let us assume that the rate at which one part of space can affect another part of space as time passes is finite. That would be a maximum velocity by which one part of space can affect other parts of space. I call it the “inherent motion” in space in order to make clear that it is a temporal aspect of the nature of space as a substance.
I think of the "inherent motion" as a motion sweeping through every part of space at the same velocity, both ways in every direction possible in three dimensional space, at every moment. This is how space is an ontological cause, along with the nature of electromagnetic matter, of the velocity of light. That is, we can explain the motion of electromagnetic waves as bits of matter (or so-called “photons”) being carried along by the inherent motion. But there is an inherent motion, even when there are no photons. Indeed, it would be happening, even if there were no matter in the world. In other words, the inherent motion is an aspect of space as a substance.
The postulation of an inherent motion may seem ontologically excessive, since all we need to assume is that the parts of space are so related temporally, as well as geometrically, that there is a maximum rate at which it is possible for what happens to matter at one part of space to affect what happens to matter at other parts of space. Thus, it may be urged that the inherent motion is not real, but merely the velocity of possible effects across space. It is merely a spatio-temporal geometry about space, that is, a geometry describing how the present moment of any one part of space is related to the past or future moments of other parts of space because of the maximum velocity with which events can affect one another. Such an account, it could be argued, would be a better ontological explanation in the end.
Though a spatio-temporal geometry to space may be a sufficient ontological explanation, I will continue to speak of it as the "inherent motion in space." I can take this liberty, because I am not claiming that the more specific natures of matter and space that I am introducing in order to explain the truth of physics are the best possible spatiomaterialist ontological explanation of the basic laws of physics, only that they are a possible spatiomaterialist ontological explanation. That is all that is required for ontological philosophy to make the case for using spatiomaterialism as the foundation for its argument about necessary truths. And I allow myself the liberty of postulating an actual inherent motion in space, because that invokes an image (in rational imagination) that makes it easy to think about an aspect of the essential nature of space that will be central in the following explanation of the laws of contemporary physics. I find it preferable to “spatio-temporal geometry,” because talk of motion brings out vividly the temporal aspect of what might otherwise be seen as a static structure (such as spacetime in Einsteinian relativity). And it emphasizes that it is always happening everywhere in space, connecting the parts of space ontologically in a further way than merely having geometrical relations, a way that is central to the existence of causal connections among events in the world.
As it turns out, nothing turns on the difference between saying that space has an inherent motion and saying that space has a spatio-temporal geometry, as long as we recognize that we are talking about an aspect of a substance that endures through time and has the opposite nature from matter. The motion of electromagnetic waves (or photons) is only one manifestation of this aspect of the essential nature of space. There will be several others as we proceed, and it will be a somewhat more complex aspect of space by the time we are through, with variations in its velocity at different locations in space. It is easier to think about these ontological effects of space by thinking of space as having an inherent motion prior to the motion of photons, because the picture in spatial imagination is more concrete.
The inherent motion in space is the medium of light transmission, and though it may also be called the "ether," as it was in Newtonian physics, it is ontologically important to keep in mind that it is an aspect of space. The ether was supposed to be an ethereal matter that is at rest everywhere in space, and no such thing is needed in a spatiomaterial world, because when space is a substance, it can interact with bits of matter in much the same way as other bits of matter.
It should be noted, however, that just as it made sense to speak of being at rest in the ether, it will make sense to speak of being at rest relative to the medium of light transmission. In either case, it is the reference frame in which the one-way velocity of light is exactly the same both ways in every direction in three dimensional space. It was assumed in Newtonian physics that being at rest in the ether would be at rest in absolute space, because they assumed that the ether was at rest in absolute space. Though we also assume that there is a reference frame that is at rest relative to the light medium, we will not assume that it is at rest in absolute space, because in order to explain ontologically the truth of the general theory of relativity, we will have to assume that the light medium itself can have a velocity in space. That will be to hold that the inherent motion in space can have a different velocity at different locations. But if you prefer, such talk can always be translated into talk about the spatio-temporal geometry of space as a substance enduring through time.
The basic laws of classical physics can, in sum, be explained ontologically by postulating various forms in which matter can coincide with space as a substance. Those forms of matter are material objects with rest mass, kinetic matter, gravitational matter, and electromagnetic matter (including both matter as electric and magnetic forces and as electromagnetic waves). And they explain the truth of the laws of classical physics in the sense that a world made of such substances enduring through time has aspects (properties, relations and regularities about change) that correspond to those laws.
That is, the laws of classical physics are true because they correspond to an aspect of the world that has been constructed from our assumptions about the basic nature of substances, about space and matter as the two opposite kinds of basic substances that make up the world, and about the specific forms of matter that coincide with space. There is, therefore, one way, at least, that a spatiomaterialist ontology can make its basic laws true, which shows that spatiomaterialism is possible, as far as classical physics is concerned.
Thus, we have laid the foundation we will need in order to explain the truth of the basic laws of contemporary physics ontologically. The first step in that project has already been made by postulating an inherent motion in substantival space to explain the velocity of light ontologically. In assuming that light has a medium through which it is transmitted, it may seem that we are resurrecting the "luminiferous ether" of Newtonian physics. But if so, it is no longer a strange form of ethereal matter at rest in space, but an aspect of space itself. Space itself is the medium of light transmission.
Contingent laws: Contemporary physics. In the early 20th Century, revolutions in physics made it seem impossible for spatiomaterialism to explain the basic laws of physics ontologically. There were two revolutions, Einstein’s two relativity theories and quantum mechanics. The first led to the belief in spacetime, and the second made it seem that processes at the micro-level are indeterministic. These new theories were irresistible in physics, because they were justified by the empirical method in the same way as Newtonian physics had been. They were inferences to the best efficient-cause explanations, where the best depends heavily on making surprising, quantitatively precise predictions that turn out to be true when measurements are made. And both revolutions have been extremely fruitful, leading to surprising predictions in new fields.
Two theories are involved in the Einsteinian revolution: the special theory of relativity, which covers phenomena that occur in material objects with velocities approaching that of light, and the general theory, which is a more accurate account of gravitational phenomena. Together with quantum mechanics, the special theory led to quantum field theory, a more accurate account of electromagnetism, which included the discovery of spin and positively charged electrons. As a gauge field theory, quantum electrodynamics became the model for theories about the two short range forces, the so-called weak and strong (or color) forces, which are responsible for the composition of particles in ordinary material objects, and that has exposed more basic particles of nature, such as quarks and neutrinos. Together with the observation that the universe seems to be expanding (Hubble's law), the general theory is now used to support the big bang theory about the origin and expansion of the universe. In sum, our understanding of every kind of physical phenomenon has been radically enriched by these two revolutions in physics.
There is one way, however, in which these two revolutions do not fit well together. It is often characterized as the main theoretical problem of contemporary physics. Einstein’s general theory of relativity explains gravitation, one of the four basic forces, but it is mathematically quite different from the theories describing the other three forces (electromagnetism, the color force and the weak force). The latter three are formulated as gauge field theories, making it possible to fit them together mathematically, but no one has found a simple way of connecting them with Einstein’s general theory of relativity. Attempts to connect them have led some physicists to believe that there are ten or more dimensions to space!
Notice that this theoretical problem in contemporary physics is basically a mathematical problem. It derives from the so called "holy grail" of physics, which is to discover a single law from which all the laws of physics, describing all the basic forces, can be derived. But the incompatibility between quantum theory and the theory of gravitation is very likely intractable as a mathematical problem.
Physics is crying out for a new approach. That is what ontological philosophy supplies. The solution to the main problem of contemporary physics is an extra benefit of its spatiomaterialist interpretation of contemporary physics.
Each of the basic revolutions of contemporary physics poses, however, a challenge to spatiomaterialism all by itself.
Einstein’s two relativity theories pose a challenge to ontological philosophy, as we have already seen, because they seem to describe a world in which space and time are not absolute. Realism about Einsteinian relativity entails the belief in spacetime, which puts time ontologically on a par with space: each moment in time is supposed to exist alongside every other moment in time, just as each point in space exists alongside every other point in space, as equal parts of an eternal four-dimensional world. But the belief in spacetime is incompatible with spatiomaterialism, because spatiomaterialism holds that only the present moment exists and takes space to be one of two opposite kinds of substances that endure through time. Thus, unless there is a way that Einstein’s special and general theories of relativity can be true in a world where space and time are absolute, ontological philosophy cannot use spatiomaterialism as the foundation for its arguments about what is necessary. Showing how the belief in spacetime could be replaced in a spatiomaterial world was one of the mortgages we took out in order to make this argument, and now the time has come to pay it off.
Quantum theory however, may also seem incompatible with spatiomaterialism. In addition to its apparent denial of determinism, it seems to deny that physical processes are constituted by material substances that coincide with space. Quantum mechanics is often interpreted, at least, as denying that the smallest entities have definite locations and as implying that they behave in ways that are incompatible with the principle of local motion and local action.
Quantum mechanics is less challenging than Einsteinian relativity, because the received interpretation of it (the so-called “Copenhagen interpretation,” due mainly to Bohr) is more like skepticism about ever knowing the real nature of the smallest bits of matter than a generally accepted ontological belief about what exists on the micro-level that is incompatible with spatiomaterialism. The belief in spacetime, by contrast, is incompatible with the belief in absolute space and time.
It is possible, however, for spatiomaterialism to explain the truth of both theories. What is more, by explaining their truth ontologically, it solves the problem about how gravitation is related to the other three forces of nature. This ontological solution to the basic theoretical problem of contemporary physics will also provide the foundation for more speculative suggestions about cosmology, both about the basic particles recognized by high energy physics and about the origin of the large scale structure of the universe.
Relativity theories. The two theories involved in the Einsteinian revolution will be discussed in sequence. The notion of spacetime was introduced with the special theory of relativity as a way of explaining measurements made from objects with very high relative velocities, and Einstein used it as the basis for his explanation of gravitation. In a parallel way, the ontological explanation of spacetime in the special theory of relativity will be the foundation for the ontological explanation of the role of spacetime in the general theory of relativity.
In the case of Einstein’s special theory of relativity, it may not be surprising that it is possible for spatiomaterialism to explain its truth, for even Einsteinians admit that the empirical implications of Einstein’s theory could be explained on the assumption that space is absolute. It is just a matter of assuming that one of all possible inertial reference frames is at absolute rest and explaining the appearance that it is not different from the others on the assumption that absolute space causes certain distortions in material objects that move through it. Such a theory is possible, and it was begun, at least, by Newtonian physicists before Einstein first published his special theory of relativity.
The ontological explanation of Einstein’s general theory of relativity may be more surprising, because contemporary physicists apparently do not even suspect that it is possible to understand the gravitational phenomena discovered by Einstein on the assumption that space and time are absolute. The universal acceptance of the special theory of relativity and its notion of spacetime as a description of the nature of space and time has kept physicists from even considering a very simple, intuitively satisfying, ontological explanation of gravitation.
The spatiomaterialist special and general theories of relativity that result are not ontologically necessary truths, according to ontological philosophy, because they do not follow from spatiomaterialism, but rather depend on what has been discovered empirically about what happens in the world. All that needs to be shown is that it is possible for Einstein’s two theories to be true in a spatiomaterial world.
Once the laws of physics are explained ontologically, the additional assumptions that must be made about the nature of matter and space in order to explain them will be incorporated into the foundation of ontological philosophy as a way of explaining ontologically other aspects of the world, such as the global regularities. That is how we incorporate the laws of physics into spatiomaterialism. But since those further explanations will depend on the more specific natures of matter and space assumed here in order to explain the truth of classical and contemporary physics, their ontological necessity will be only conditional. They hold only of all possible spatiomaterial worlds like ours, that is, in which the laws of physics are true.
As it happens, however, the spatiomaterialist ontological explanation of the truth of classical physics together with its explanation of quantum mechanics seem to entail the ontological assumptions that have to be made in order to explain the truth of the special theory of relativity. If so, the regularities described by Einstein's special theory of relativity have a deeper ontological explanation, even if they are not unconditionally ontologically necessary.
It should be mentioned, however, that the explanation of the global regularities to be given under Change does not depend on this ontological explanation of the truth of contemporary physics. Given that space is a substance, they depend only on matter obeying the regularities described by the laws of contemporary (and classical) physics. Though we shall make further assumptions about the nature of space and matter in order to explain ontologically the truth of quantum mechanics, the basic objects of physics, and the origin of the universe, they are required only to show the possibility of spatiomaterialism. They are not relevant in explaining the global regularities. | http://www.twow.net/ObjText/OtkCaLbC.htm | 13 |
50 | In this chapter you will learn how radio waves spread
themselves over the face of the earth, carrying messages
and music from Washington to Saipan and from San
Diego to Suez in less than a second.
RADIO WAVES COVER THE EARTH
With your own broadcast receiver, you have probably
made a few observations on the behavior of radio waves.
The 1,000-watt station 700 miles away may come in
clearer and stronger than a 5,000-watt station only 400
miles away. You know also that your receiver does not
pick up stations as well in daylight as it does at night,
and that you get more distance and better reception in
the winter than you do in the summer. Again you may
have observed that some places in the United States are
good radio locations, while others are naturally poor places
for receiving radio programs.
THE RADIO WAVE
When it leaves a vertical antenna, the radio wave
resembles a huge doughnut lying on the ground, with the
antenna in the hole at the center. Part of the wave
moves outward in contact with the ground to form the
GROUND WAVE, and the rest of the wave moves upward
and outward to form the SKY WAVE. This is illustrated
in figure 148.
Figure 148.-Formation of the ground wave and sky wave.
The GROUND and SKY portions of the radio wave are
responsible for two different METHODS of carrying the
messages from transmitter to receiver.
The GROUND WAVE is used for SHORT-RANGE COMMUNICATION at high frequencies with low power, and for LONG-RANGE COMMUNICATION at low frequencies and very high
power. Day-time reception from most commercial stations is carried by the ground wave.
The SKY WAVE is used for long-range, high-frequency
daylight communication. At night, the sky wave provides a means for long-range contacts at LOWER FREQUENCIES.
THE GROUND WAVE
The ground wave is made up of four parts-DIRECT,
GROUND-REFLECTED, TROPOSPHERIC, and SURFACE waves.
The relative importance and use made of each part is dependent on several factors. The chief factors are-frequency, distance between the transmitting and receiving
antennas, height of the antenna, the nature of the ground
over which the wave travels, and the condition of the
atmosphere at the lower levels.
The DIRECT WAVE travels directly from the transmitting antenna to the receiving antenna. For example-two airplanes are several thousand feet in the air and
only a few miles apart. This direct wave is not influenced by the ground, but may be affected by the
atmospheric conditions through which the wave travels.
The GROUND-REFLECTED WAVE permits two airplanes
several miles distant and at low altitudes to communicate
with each other. The wave arrives at the receiving antenna after being reflected from the earth's surface.
When the airplanes are close enough and at the correct
altitude to receive BOTH direct waves and ground-reflected
waves, the signals may be either reinforced or weakened,
depending upon the relative phases of the two waves.
The TROPOSPHERIC WAVE is the part of the wave that is
subject to the influences of the atmosphere at the low
altitudes. The effects of the atmosphere on this type of
wave propagation are most pronounced at frequencies
above the high end of the H-F band. Communication by
the use of the tropospheric wave is gaining in importance,
both from the standpoint of its usefulness, and its frequent unpredictable ranges. This type of communication is discussed in more detail later in this chapter.
The SURFACE WAVE brings most of the low and medium
frequency broadcasts to your receiver. These frequencies are low enough to permit this wave to follow the
surface of the earth. The intensity of the surface wave
decreases as it moves outward from the antenna. This
ATTENUATION-rate of decrease-is influenced chiefly by
the conductivity of the ground or water and the frequency
of the wave.
As it passes over the ground, the surface wave induces a
voltage in the earth, setting up eddy currents. The
ENERGY to create these currents is PIRATED or taken away
from the surface wave. In this way, the surface wave
is weakened as it moves away from the antenna. Increasing the frequency rapidly increases the rate of attenuation. Hence surface wave communication is limited to
the lower frequencies.
Shore establishments are able to furnish long-range
ground-wave communication by using frequencies between about 18 and 300 kc. with
EXTREMELY HIGH POWER.
Since the electrical properties of the earth over which
the surface waves travel are relatively constant, the signal
strength from a given station at a given point is nearly
constant. This holds true in practically all localities, except those that have distinct rainy and dry seasons.
There, the difference in the amount of moisture will cause
the soil's conductivity to change.
It is interesting to note that the conductivity of salt
water is 5,000 times as great as that of dry soil. The
superiority of surface wave conductivity by salt water
explains why high-power, low-frequency transmitters are
located as close to the edge of the ocean as practicable.
Do not think that the surface wave is confined to the
earth's surface only. It also extends a considerable distance up into the air, but it drops in intensity as it rises.
THE SKY WAVE
In behavior, the SKY WAVE is quite different from the
ground wave. The part of the expanding lobe that moves
toward the sky "bumps" into an IONIZED layer of atmosphere, called the IONOSPHERE, and is bounced or bent back
toward the earth. If your receiver is located in the area
where the returning wave strikes, you will receive the
program clearly even though you are several hundred
miles beyond the range of the ground wave.
The ionosphere is found in the rarefied atmosphere, approximately 30-350 miles above the earth. It differs
from the rest of the atmosphere in that it contains a higher
percentage of positive and negative ions.
The ions are produced by the ultra violet and particle
radiations from the sun. The rotation of the earth on
its axis, the annual course of the earth around the sun,
and the development of SUN-SPOTS all affect the number
of ions present in the ionosphere, and these in turn affect
the quality and distance of radio transmission.
You must understand that the ionosphere is constantly
changing. Some of the ions are re-combining to form
atoms, while other atoms are being split to form ions.
The rate of formation of ions and recombination depends
upon the amount of air present, and the strength of the sun's radiations.
At altitudes above 350 miles, the particles of air are too
sparse to permit large-scale ion formation. At about 30
miles altitude, few ions are present because the rate of recombination is too high. Also few ions are formed, because the sun's radiations have been materially weakened
by their passage through the upper layers of the ionosphere with the result that below 30 miles, too few ions
exist to affect materially sky wave communication.
LAYERS OF THE IONOSPHERE
Different densities of ionization make the ionosphere
appear to have layers. Actually there is no sharp
dividing line between layers. But for the purpose of discussion a sharp demarkation is indicated.
The ionized atmosphere at an altitude of between 30
and 55 miles is designated as the D-LAYER. Its ionization
is low and has little effect on the propagation of radio
waves except for the ABSORPTION of energy from the
radio waves as they pass through it. The D-layer is
present only during the day. This greatly reduces the
field intensities of transmissions that must pass through it.
The band of atmosphere at altitudes between 55 and
90 miles contains the E-LAYER. It is a well-defined band
with greatest density at an altitude of about 70 miles.
This layer is present during the daylight hours, and is
also present in PATCHES, called "SPORADIC E," both day
and night. The maximum density of the regular E-layer
appears at about noon, local time.
The ionization of the E-layer at the middle of the day
is sufficiently intense to refract frequencies up to 20 mcs.
back to the earth. This is of great importance to daylight transmissions for distances up to 1,500 miles.
The F-layer extends from the 90-mile level to the upper
limits of the ionosphere. At night only one F-layer is
present. But during the day, especially when the sun is
high, this layer separates into two parts, F1 and F2, as
illustrated in figure 149.
As a rule, the F2-layer is at its greatest density during early afternoon hours. But there are many notable exceptions of maximum F2 density existing several hours later. Shortly after sunset, the F1- and F2-layers recombine into a single F-layer.
Figure 149.-E-layer and F-layer of the ionosphere.
SPORADIC E LAYER
In addition to the layers of ionized atmosphere that
appear regularly, erratic patches occur at E-layer heights
much as clouds appear in the sky. These clouds are referred to as SPORADIC-E IONIZATIONS. These patches often
are present in sufficient number and intensity to enable
good radio transmission over distances where it is not normally possible.
Sometimes sporadic ionizations appear in considerable strength at varying altitudes, and actually prove
harmful to radio transmissions.
EFFECT OF IONOSPHERE ON THE SKY WAVE
The ionosphere has three effects on the sky wave. It
acts as a CONDUCTOR, it absorbs energy from the wave,
and it REFRACTS or bends the sky wave back to the earth
as illustrated in figure 150.
Figure 150.-Refraction of the sky wave by the ionosphere.
When the wave from an antenna strikes the ionosphere,
the wave begins to bend. If the frequency is correct, and
the ionosphere sufficiently dense, the wave will eventually
emerge from the ionosphere and return to the earth. If
your receiver is located at either of the points B, in
figure 150, you will receive the transmission from point A.
Don't think that the antenna reaches as near the
ionosphere as is indicated in figure 150. Remember the
tallest antenna is only about 1,000 feet high.
The ability of the ionosphere to return a radio wave to
the earth depends upon the ANGLE at which the sky wave
strikes the ionosphere and upon the FREQUENCY of the wave.
For discussion, the sky wave in figure 151 is assumed
to be composed of four rays. The angle at which ray 1
strikes the ionosphere is too nearly vertical for the ray
to be returned to the earth. The ray is bent out of line,
but it passes through the ionosphere and is lost.
The angle made by ray 2 is called the CRITICAL ANGLE
for that frequency. Any ray that leaves the antenna at
an angle GREATER than theta (θ) will penetrate the ionosphere.
Ray 3 strikes the ionosphere at the SMALLEST ANGLE
that will be refracted and still return to the earth. Any
smaller angle, like ray 4, will be refracted toward the
earth, but will miss it completely.
As the FREQUENCY INCREASES, the size of the CRITICAL
ANGLE DECREASES. Low-frequency waves can be projected
straight upward and will be returned to the earth. The
HIGHEST FREQUENCY that can be sent directly upward and
still be returned to the earth is called the CRITICAL FREQUENCY. At sufficiently high frequencies, the wave will
not be returned to the earth, regardless of the angle at
which the ray strikes the ionosphere.
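Textbook ionospheric theory links the critical frequency to the highest frequency returned at oblique incidence through the so-called secant law. The manual never states the formula, so the sketch below is an illustrative assumption rather than a reproduction of its method:

```python
import math

def max_returned_frequency_mc(critical_freq_mc: float, angle_from_vertical_deg: float) -> float:
    # Secant law: a wave is returned only if f <= fc * sec(angle of incidence from the vertical).
    return critical_freq_mc / math.cos(math.radians(angle_from_vertical_deg))

# With a 4-mc. critical frequency, glancing rays are returned at much higher frequencies:
for angle in (0, 45, 70):
    print(angle, round(max_returned_frequency_mc(4.0, angle), 1))
# 0 -> 4.0   45 -> 5.7   70 -> 11.7 (megacycles)
```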
The critical frequency is not constant. It varies from
one locality to another, with differences in time of day,
with the season of the year, and according to sunspot activity.
This variation in the critical frequency is the reason why you should use issued predictions-FREQUENCY TABLES or NOMOGRAMS-to determine the MAXIMUM USABLE FREQUENCY (MUF) for any hour of the day.
Figure 151.-Effect of angle of refraction on sky wave.
Nomograms and frequency tables are prepared from
data obtained experimentally from stations scattered all
over the world. All this information is pooled and you
get the results in the form of a long-range prediction that
removes most of the guess work from radio communication.
Refer again to figure 151. The area between points B
and C will receive the transmission via the REFRACTED SKY
WAVE. The area between points A and E will receive its
signals by GROUND WAVE. All receivers located in the
SKIP ZONE between points E and B will receive NO transmissions from point A, since neither the sky wave nor
the ground wave reaches this area.
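A rough estimate of how wide the skip zone is can be pieced together from the same secant law as in the sketch above plus simple flat-earth, mirror-reflection geometry; this is an illustrative approximation, not a formula given in the manual, and the layer height used is an assumption:

```python
import math

def skip_distance_miles(freq_mc: float, critical_freq_mc: float, layer_height_miles: float) -> float:
    # Flat-earth, mirror-reflection model: the steepest ray still returned strikes the layer
    # at arccos(fc/f) from the vertical and lands 2 * h * tan(angle) from the transmitter.
    if freq_mc <= critical_freq_mc:
        return 0.0  # even vertical rays come back, so there is no skip zone
    steepest = math.acos(critical_freq_mc / freq_mc)
    return 2.0 * layer_height_miles * math.tan(steepest)

# Example: a 10-mc. signal, a 4-mc. critical frequency, and a 200-mile virtual layer height.
print(round(skip_distance_miles(10.0, 4.0, 200.0)))  # roughly 900 miles
```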
EFFECT OF DAYLIGHT ON WAVE PROPAGATION
The INCREASED IONIZATION during the day is responsible for several important changes in sky-wave transmission-
First-It causes the sky-wave to be returned to the
earth NEARER to the point of transmission.
Next-The EXTRA ionization increases the ABSORPTION of energy from the sky-wave. If the wave travels a sufficient distance into the ionosphere, it will lose all its energy.
Figure 152.-Effect of daylight on medium-frequency sky-wave transmission.
And-The presence of the F1- and E-layers with the
F2-layer makes long-range, high-frequency communication possible by all three layers, provided the correct
frequencies are used.
In figure 152, you see the results of daylight in increasing refraction and absorption. These two factors
usually combine to reduce the effective daylight communication range of low-frequency and medium-frequency transmitters to surface wave ranges.
HIGH-FREQUENCY, LONG-RANGE COMMUNICATION
The high ionization of the F2-layer during the day,
enabling refraction of high frequencies which are not
greatly absorbed, has an important effect on transmissions of the HF band. Figure 153 shows how the F2-layer completes the refraction and returns the transmissions of these frequencies to the earth, making possible long-range, high-frequency communication during the daylight hours.
Figure 153.-Effect of the F2-layer on transmission of high-frequency signals.
The waves are partially bent in going through the
E-layer and F1-layer, but are not returned to the earth
until the F2-layer completes the refraction. At night,
when only one layer is present, very-high-frequency
waves may pass right through the ionosphere.
The EXACT FREQUENCY to be used to communicate with
another station depends upon the condition of the
ionosphere and upon the distance between stations. Since
the ionosphere is constantly changing, you must use the
nomograms and tables to pick the correct frequency for
desired distance at a given time of day.
Many times the REFRACTED WAVE will return to the
earth with enough energy to be bounced back up to the
ionosphere, and then be refracted back to the earth a second time.
In figure 154, the ray strikes the earth at point A with sufficient force to be reflected back to the ionosphere and then refracted back to the earth a second time. Occasionally a sky wave has sufficient energy to be refracted and reflected several times, thus greatly increasing the range of transmission.
Figure 154.-Multiple refraction and reflection of a sky wave.
FADING is the result of variations in signal strength at
the receiver. There are several causes. Some are easily
understood, others are more complicated.
One cause is probably the direct result of interference
between single-hop and double-hop transmissions. If the
two waves arrive IN PHASE, the signal strength will be
increased, but if the phases are opposed, they will cancel
each other and weaken the signal.
Interference fading is also severe in regions where the
ground and skywave are in contact with each other. This
is especially true if the two are approximately of equal
strength. Fluctuations of the sky wave with a steady
ground wave can cause worse fading than sky-wave transmission alone.
The way the waves strike the antenna and the variations in absorption in the ionosphere are also responsible
for fading. Occasionally, sudden ionospheric disturbances will cause complete absorption of all sky-wave transmissions.
Receivers that are located near the outer edge of the
skip zone are subjected to fading as the sky wave alternately strikes and skips over the area. This type of
fading is sometimes so complete that the signal strength
may fall to near zero level.
FREQUENCY BLACKOUTS are closely related to some types
of fading, but this fading is complete enough to blot out
the transmission completely.
Changing conditions in the ionosphere shortly before
sunrise and after sunset may cause complete BLACKOUTS
at certain frequencies. The HIGHER frequencies pass
through the ionosphere, while the LOWER ones are absorbed by it.
IONOSPHERIC STORMS-turbulent conditions in the ionosphere-often cause communication to be erratic. Some
frequencies will be completely blotted out, while others
may be reinforced. Sometimes these storms develop in a
few minutes, and at other times they require as much as
several hours. A storm may last several days. You can
expect these storms to recur at about every 27 days.
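Since the recurrence interval is stated as about 27 days (roughly one rotation of the sun), anticipating the likely repeat dates is simple arithmetic; a trivial sketch with an invented storm date:

```python
from datetime import date, timedelta

def expected_recurrences(storm_date: date, count: int = 3) -> list:
    # Dates on which a recurrence of an ionospheric storm may be expected,
    # using the text's figure of about 27 days per cycle.
    return [storm_date + timedelta(days=27 * n) for n in range(1, count + 1)]

print(expected_recurrences(date(1945, 2, 1)))
# [datetime.date(1945, 2, 28), datetime.date(1945, 3, 27), datetime.date(1945, 4, 23)]
```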
When frequency blackouts occur, you will have to be
on the ball to prevent complete loss of contact with other
ships or stations. When the storms are severe, the critical frequencies are much lower, and the absorption in the
lower layers of the ionosphere is much higher.
V. H. F. AND U. H. F. COMMUNICATION
In recent years, there has been a trend toward the
use of frequencies above 30 mc., for short-range, ship-to-ship,
and ship-to-airplane communications.
Early concepts suggested that these transmissions traveled in straight lines. This naturally led to the assumption that the V.H.F. transmitter and receiver must
be within sight of each other to supply radio contact.
Extensive use and additional research show the early
"line-of-sight" theory to be frequently in error because
radio waves of these frequencies are refracted. The
transmitter does not always need to be in sight of the receiver.
This type of communication still is called by its popular name, "Line of sight transmission." But it is better
to call it V.H.F. and U.H.F. transmission.
It is true that U.H.F. and V.H.F. waves follow approximately straight lines, and large hills or mountains cast a
radio shadow over areas in much the same way as light
creates a shadow. A receiver located in shadow will receive a weakened signal, and in some cases, no signal at all.
In theory, the range of contact is the distance to the
horizon, and this distance is determined by the heights
of the two antennas. But communication is often possible many miles beyond the assumed horizon range. Be
sure to remember this point when your ship is in waters
where radio security is essential.
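For the in-theory range mentioned above, a commonly used 4/3-earth rule of thumb (an approximation, not a formula given in this manual) puts each radio horizon at about 1.23 times the square root of the antenna height in feet, in nautical miles:

```python
import math

def radio_horizon_nmi(antenna_height_ft: float) -> float:
    # 4/3-earth approximation: radio horizon distance in nautical miles.
    return 1.23 * math.sqrt(antenna_height_ft)

def theoretical_range_nmi(tx_height_ft: float, rx_height_ft: float) -> float:
    # The line-of-sight range is roughly the sum of the two radio horizons.
    return radio_horizon_nmi(tx_height_ft) + radio_horizon_nmi(rx_height_ft)

# Example: a 100-foot shipboard antenna working an aircraft at 5,000 feet.
print(round(theoretical_range_nmi(100.0, 5000.0)))  # roughly 99 nautical miles
```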
EFFECT OF ATMOSPHERE ON V. H. F. AND U. H. F.
The abnormal ranges of V.H.F. and U.H.F. contacts
are caused by abnormal atmospheric conditions within a
few miles of the earth. Normally, you will find the
warmest air near the surface of the water. The air
gradually becomes cooler as you gain altitude. However,
unnatural situations often develop where WARM bands of
air are above the COOLER layers. This unusual situation
is called a TEMPERATURE INVERSION.
Whenever TEMPERATURE INVERSIONS are present, the
AMOUNT OF REFRACTION-called INDEX OF REFRACTION-is
different for the air trapped WITHIN the inversion than it
is for the air outside the inversion.
The differences in the index of refraction form CHANNELS or DUCTS that will pipe V.H.F. and U.H.F. signals
many miles beyond the assumed normal range.
Figure 155.-Duct effect on V.H.F. and U.H.F. transmissions.
Sometimes these ducts will be in contact with the water
and may extend a few hundred feet into the air. At
other times the duct will start at an elevation of about
500 to 1,000 feet, and extend an additional 500 to 1,000
feet in the air.
If an antenna extends into the duct, or if wave motion
lets the wave enter a duct after leaving an antenna, the
transmission may be conducted long distances to another
ship whose antenna extends into the duct. This is illustrated in figure 155.
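One modern way to put numbers on the "index of refraction" differences just described is the modified refractivity M used in radio meteorology; this is standard present-day practice, not part of this manual, and the sample readings below are invented for illustration:

```python
def refractivity_n(pressure_hpa: float, temp_k: float, vapor_pressure_hpa: float) -> float:
    # Standard approximation for radio refractivity N from pressure, temperature, and humidity.
    return 77.6 * pressure_hpa / temp_k + 3.73e5 * vapor_pressure_hpa / temp_k ** 2

def modified_refractivity(n_units: float, height_m: float) -> float:
    # M normally increases with height; a decrease with height indicates a duct.
    return n_units + 0.157 * height_m

# Invented readings: cool, moist air at the surface under warmer, drier air at 300 m.
m_surface = modified_refractivity(refractivity_n(1013.0, 288.0, 14.0), 0.0)
m_aloft = modified_refractivity(refractivity_n(978.0, 291.0, 4.0), 300.0)
print(round(m_surface), round(m_aloft), "duct likely" if m_aloft < m_surface else "no duct")
# e.g. 336 326 duct likely
```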
WHEN WILL DUCTS BE FORMED?
When operating this high-frequency equipment, you
must be able to recognize the weather conditions that lead
to DUCT FORMATIONS. Since the duct is not visible to the
eye and since complete aerological information is not always available, you must rely on a few simple visible
evidences and a lot of common sense.
The following rules have exceptions, but you can expect a duct to be formed when-
1. A wind is blowing from land.
2. There is a stratum of quiet air.
3. There are clear skies, little wind, and high barometric pressure.
4. A cool breeze is blowing over warm open ocean,
especially in the tropic areas and in the trade-wind belts.
5. Smoke, haze, or dust fails to rise, but spreads out horizontally.
6. Your receiver is fading rapidly.
7. The moisture content of the air at the bridge is considerably less than at the sea's surface.
8. The temperature at the bridge is 1 or 2 degrees F.
HIGHER than at the sea's surface.
GENERAL USE OF FREQUENCIES
Each frequency band has its own special uses. The
uses depend upon the nature of the waves-surface, sky,
or space-and the effect that the sun, the earth, the
ionosphere, and the atmosphere have upon them.
It is almost impossible to lay down fixed rules for the
use of what frequency for what purpose. Some general
statements can be made, however, on what FREQUENCY
BANDS are best used for what purposes. COMINST, in
Article 6520, lists each frequency band and what its
best use is.
Most rules for the use of frequencies deal with VARIATIONS that are beyond human control. This is particularly true of medium- and high-frequency transmissions
using the SKY wave.
Make intelligent use of nomograms and tables.
One SURE rule-if you want to be reasonably certain
that a LONG-RANGE COMMUNICATION gets through, use
HIGH POWER and LOW FREQUENCY. That's what the international communication systems and most of your big
FOX stations use. However, this takes an antenna array
so large that it's not usable with shipboard transmission. So, to be certain a message for a distant point
gets through, RELAY IT-send it to the nearest large shore station.
Note in figure 156 how the SKY WAVE builds up to a
peak of daytime usefulness in the H.F. band. At night
the peak is in the top third of the M.F. band. Note also
how the usefulness of the GROUND, or surface, WAVE declines steadily as the higher frequencies are reached, until
it is altogether useless in H.F. But as the SPACE WAVE,
it becomes the only means of communication in V.H.F.
and for a certain range above V.H.F.
And be sure you remember that all SKY WAVE transmission-and that means almost all from 1,600 to 30,000
kc.-is associated with SKIP DISTANCES. In other words
you can get great range, but in the process you'll skip a
lot of receiving stations in between-possibly the one you
most want to receive your message.
THE NAVY FREQUENCY BAND
Most important to you in the chart is the shaded area
in the M.F. and H.F. bands-from 2,000 to 18,100 kc.
(2 to 18.1 mc.). That, as you should already know, is the
standard band for NAVAL COMMUNICATIONS from SHIP-TO-SHIP and SHIP-TO-SHORE. It's the band you'll use most
frequently for TRANSMITTING messages, the one covered by
your standard transmitters, such as the TBK and TBL.
It's right in the SHORT-WAVE area. Thus, it's SKY-WAVE TRANSMISSION and is affected by SKIP DISTANCES.
As the chart shows, when you want range in DAYTIME,
use the UPPER PORTION of the band-roughly from 3 mc.
to 18 mc. But for NIGHT communication, drop down below 3.5 mc. The three frequencies most commonly used
in this band are 2,716 kc. (2.716 mc.), 2,844 kc. (2.844
mc.), and 4,235 kc. (4.235 mc.)-the good old NERK series.
To help you in the use of this band, and to utilize properly knowledge of SKIP DISTANCE, the Navy publishes
NRPM's containing tables which show the best frequencies within this band for communication with various
shore stations. These tables are issued QUARTERLY.
Figure 156.-Recommended frequency chart.
There will be a separate one for EACH major shore station.
They give the recommended frequency for every HOUR
of the day, for every distance from 250 to 5,000 miles, for some
stations. The DIRECTION of the receiving station from
your ship is also taken into account.
Look at the table in figure 156. It's a sample, but it's
for communication with Balboa during February 1945.
Your ship is 750 miles off the Pacific coast of Central
America during that February, the time is 1200 GCT,
and you wish to get a message to NBA, Balboa. Look at
your table for the proper time, then move over to the
third column-500 to 1,000 miles-in the second vertical
row of figures, since Balboa is east of you. The recommended frequency is 4 mc. Send your message.
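A table of this kind is nothing more than a lookup keyed by hour and distance band. The short sketch below mimics that lookup; the hour blocks, distance bands, and frequencies in it are hypothetical placeholders, not values from any actual NRPM table:

# Hypothetical stand-in for one shore station's quarterly frequency table.
# Keys are (GCT hour block, distance band in miles); values are megacycles.
FREQ_TABLE = {
    ("0800-1600", "250-500"):   8.0,
    ("0800-1600", "500-1000"):  4.0,
    ("1600-2400", "500-1000"): 12.0,
}

def recommended_frequency(hour_block, distance_band):
    """Look up the recommended frequency, as you would by eye in the printed table."""
    return FREQ_TABLE.get((hour_block, distance_band))

print(recommended_frequency("0800-1600", "500-1000"))  # 4.0 mc.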
These tables will be supplied to your ship in the form
of NRPM's and will cover three months, with a separate
table for each month and for each shore station.
As a further aid, you'll also be supplied with NOMOGRAMS-again in NRPM's. They cover three-month
periods, and each nomogram covers a range of 10° in
latitude. A nomogram may be used for any path where
the midpoint lies within the range of latitude of the
particular nomogram. It will give you the proper frequency for any time of day in Local Civil Time-LCT-for
any transmission of from 1 to 2,200 miles. With
proper use of nomograms and the frequency tables you
should be all set for communicating in the Navy H-F band.
Figure 157 shows a nomogram that is typical of the
series you'll use most frequently. To use one, first locate
approximately the midpoint of the transmission path on
the map shown on the last page of each published nomogram series. Determine the latitude, local time, and the
"zone" at this midpoint. (The zones are labeled E, W,
and I on the map to represent East, West and
Intermediate.) Then line up a ruler through the distance of
transmission (right-hand column) and the local time
(LCT) at the midpoint of the path, which is on the left-hand column. Where the straight edge intersects the frequency scale in the middle of the nomogram, you'll find
the recommended frequency.
For instance, you want to make a transmission of 800
nautical miles at 0400. You have consulted the map and
found that the midpoint of the path is at roughly 30°
north latitude and lies in the I (Intermediate) zone.
Local time at this point is approximately the same (0400).
Line up your straight edge on the nomogram in figure 157
between the Intermediate Zone MILES Line on the right
side and the Intermediate Zone TIME Line on the left.
Then look at your recommended frequency column in the
middle. You'll note that it is intersected right at 8 mc.
by the diagonal line made by the ruler. That's your frequency.
BUT THERE ARE OTHERS
The so-called Navy band is not the only one used. It's
the standard ship long-distance communications frequency-your chief TRANSMITTING frequency. But the
major FOX skeds are more generally broadcast way down
the line in the V.L.F. and L.F. bands. NGP's major sked,
for instance, is broadcast on 19.8 kc., and NSS's is on
18 kc. True, the big stations also broadcast FOX in the
M.F. and H.F. bands and some of the secondary skeds are
broadcast only in the higher frequencies. But if you
want to be sure to get that FOX, flip the receiver dial way, way down.
Scooting up again into V.H.F. and U.H.F., you enter
your TACTICAL bands. When it's radio phone communication over the TBS or TDQ/RCK, go on up and start
playing around with the SPACE WAVE. And remember
your range limitations.
As a final tip on the proper use of frequencies, be sure
you know the proper PUBLICATIONS to use. Appendix I of
COMINST gives the big shore station circuits, the FOX
stations and the frequencies they use, the ship-shore
facilities provided by the shore organizations, and the
stations giving DF calibrating service on frequencies
ranging from 150 to 1,500 K.C., as well as those giving
Also, of course, there's the CONFIDENTIAL publication-The
U. S. Naval Radio Frequency Usage Plan-which lists
them all and what the Navy's currently using them for.
And there are the IRPL Radio Propagation Handbook
(DNC13-1), USF-70, current NRPM's and circular letters to turn to for further up-to-date frequency data when
needed. DNC-22 gives the dope on V.H.F. propagation.
Table of Contents
- Introduction to Tables
- Parts of a Table
- Tabulating Raw Data
- Tabular Presentation of Data
- Types of Tables
- Best Practice
Introduction to Tables
Tables are commonly used in collecting and organizing raw data during an experiment and also
for representing final data to be included in a paper or report. Most raw data are recorded
in tabular form in a spreadsheet, a lab notebook, or a lab manual; but once recorded, data
need to be reorganized, summarized, and reshaped into a final table or graph (see fig. 1).
In some cases, lab experiments will require sketches from observation and may or may not need
a table to go along with them. In most cases you'll be using tables to collect and then
organize your data. Since tables are so important for data management in the science
laboratory, you need to know the basics about designing a table for your data.
The representation of data in a table is formally referred to as “tabular presentation.”
Tabular presentation of data allows data to be organized for further analysis, allows large
amounts of raw data to be sorted and reorganized in a neat format, and allows the inclusion
of only the most important or relevant data. It also facilitates a dialogue between the text
and the exact numbers in your results, so that you don't have to describe all the specific
numerical values in your report. On the other hand, you should never put data in a table
if you can describe it efficiently in one or two sentences. In summary, tabular presentation
lets you place your results in an organized display of rows and columns that enable you to
group your data by different classifications so that you can make comparisons and better
understand your data.
When using a table as a final representation of data to communicate your results in your report,
list specific data values or draw comparisons between variables by listing subtotals, totals,
averages, percentages, frequencies, statistical results, etc. Tables are not the best choice
when you want to show a trend or relationship between variables. These are best represented
by graphs. Good tables should be easy to read across rows and down columns, easy to understand,
and easy to refer to in the text of your report. They should also include only relevant
data from your results.
Parts of a Table
Example 1 : Table with Labels
Title: The title provides a brief description of the contents of the table.
It should be concise and include the key elements shown in the table, for example,
groups, classifications, variables, etc. It should never be more than two lines.
Although there are varying styles for writing a title, most titles should be
underlined or italicized, and the first letter of each word should be capitalized
following the rules for any title, or the entire title can be in caps. Periods are
left out at the end of the title. If the title is two lines long, the lines can
be either single-spaced or double-spaced depending on the style you're using.
Sometimes referred to as the table legend, a table's title should always go
above the table.
Table number: Tables should be numbered in the order that they are referred
to in your report, as Table 1, Table 2, and so on. The table number has a period at
the end and a space to separate it from the title, which normally follows the table number.
Headings & Subheadings: While data form the body of a table, headings and
subheadings allow you to establish an order to the data by identifying columns. They
should be written in the singular form unless they refer to groups, e.g., men, women,
etc., and the first letter of the first word should be capitalized. Headings should be
key words that best describe the columns beneath them. They should not be much longer
than the longest entry in their columns. Example:
- Column Headings: Each column has a heading in order to identify what
data are listed below in a vertical arrangement. When the column heading is above
the leftmost column, it is often referred to as the "stubhead" and the column is
the "stub column." This column usually lists the independent variable. The data
that follow the stub column are known as the "stub." All other column headings
are simply referred to as "column heads." Note that units should be specified
in column headings when applicable.
- Column Spanner: A heading that sits above two or more columns to indicate
a certain classification or grouping of the data in those columns. A column spanner
may also specify units, when appropriate.
Table Body: The actual data in a table occupying the columns, for example, percentages,
frequencies, statistical test results, means, "N" (number of samples), etc.
Table Spanner: A table spanner is located in the body of the table in order to divide
the data in a table without changing the columns. Spanners go the entire length of the table
and are often used to combine two tables into one in order to avoid repetition. A table spanner
may be written in the plural form.
Dividers: Dividers are lines that frame the top and bottom of the table and/or mark
the different parts of a table. They are often used for division or emphasis within the body of a table.
Table Notes: You may use table notes to explain anything in your table that is not
self-explanatory. While basic symbols and abbreviations like SD for standard deviation, N
for sample size, and % for percentage, are commonly used, you may have other technical
terms or other issues that you wish to explain. In these cases, you would place an asterisk
(*) for the first note you need after the specific data value. Then, you would place the
asterisk below the table followed by the note or explanation required for that value. Other
data values requiring notation would get two asterisks, three asterisks, or a stacked cross (‡), in
that order. Notes following these additional items would follow the first note using the
same format. Notes that apply to the table in general should be listed after the word
"Note: " under the table.
Tabulating Raw Data
When you are collecting data during a laboratory experiment, it is important that you record it
in tabular format in a spreadsheet, lab manual, lab notebook, or word processing software. Your
raw data table should include all the data you collected during an experiment as well as all related observations, calculations, and notes.
Whether the final version of your data representation is a table or a figure, this initial
tabulation will make it easy for you to read and interpret your data. The first thing a table
needs is a title. Make sure your title is descriptive of the data you are going to collect.
There should be a place for the date and the name of the recorder(s). In labs dealing with
multiple variables, the table may have both headings and subheadings. See above for a
description of the parts of a table.
Columns should be titled with the name of the variables followed by the units of measure
in parentheses. Extra columns should be made to allow room for observations, calculations,
and notes. Each extra column should be labeled appropriately. Usually, the independent
variable(s) is recorded in the first column, and the dependent variable(s) is recorded
in the subsequent column(s). The number of observations taken determines the number of rows.
Example 2 : Raw Data Spreadsheet
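A minimal sketch of such a raw-data layout, written with the pandas library (the variable names, units, and values below are made up for illustration):

import pandas as pd

# Raw-data table: independent variable first, units in the headers,
# extra columns for calculations and notes. All values are hypothetical.
raw = pd.DataFrame({
    "Time (min)":       [0, 5, 10, 15],            # independent variable
    "Temperature (C)":  [21.3, 24.8, 28.1, 31.0],  # dependent variable
    "Change (C)":       [0.0, 3.5, 3.3, 2.9],      # calculated column
    "Notes":            ["start", "", "stirred", ""],
})

print("Recorder: A. Student   Date: 2024-01-15")
print(raw.to_string(index=False))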
Tabular Presentation of Data
Once you are ready to include your results in your report, you may decide that the best
representation for your data is a tabular presentation. You may be working with raw data
that you collected yourself during an experiment, and you may want to revise it so that
it includes only relevant data and so that it is organized properly. You may also be working
with long lists of raw data collected by other people or given to you by your laboratory
instructor. In this case, it is up to you to sort through all the data and find a way to
best represent it in tabular form. To accomplish this task, you'll need to be familiar
with the basic rules for tabular presentation:
- Limit your table to data that are relevant to the hypotheses in the experiment.
- Be certain that your table can stand alone without any explanation.
- Make sure that your table is supplementary to your text and does not replicate it.
- Refer to all tables by numbers in your text, e.g., Table 1, 2, 3...
- Describe or discuss only the table's highlights in your text.
- Always give units of measurement in table headings.
- Align decimal places.
- Round numbers as much as possible. Try to round to two decimal places unless
more decimals are needed.
- Unless using a specific format style that requires that you place tables separately
at the end of the report, place the tables near the text that refers to them.
- Decide on a reasonable amount of data to be represented, not too little so that
the reader does not understand your results, but not too much so that the reader is
overwhelmed and confused.
- Only include the necessary number of tables in your paper; otherwise, they may be
redundant or confusing to the reader.
- Do not use tables if you only have two or fewer columns and rows. In such cases, a
textual description is enough.
- Organize your tables neatly so that the meaning of the table is obvious at first
glance. If the reader spends too much time deciphering your table, then it is too
complicated and not efficient.
- Remember that too many rows or columns could make it difficult for the reader to
understand the data. You may need to reduce the amount of data, or separate the data
into additional tables.
- If you have identical columns or rows of data in two or more tables, combine the tables.
- Provide column/row totals or other numerical summaries that can make it easier to
understand the data.
- Be consistent with your tabular presentation. Use consistent table, title, and heading formats throughout (see the formatting sketch below).
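A small sketch of how a few of these rules (units in the headings, rounding to two decimals, aligned decimal places) might be applied with pandas before a table goes into a report; the data are placeholders:

import pandas as pd

# Placeholder results; the column headings carry the units.
results = pd.DataFrame({
    "Trial":       [1, 2, 3],
    "Mass (g)":    [12.3456, 7.8, 103.25],
    "Volume (mL)": [10.0, 6.414, 88.1],
})

results = results.round(2)  # round to two decimal places
# A fixed-width float format keeps the decimal points lined up down each column.
print(results.to_string(index=False, float_format=lambda x: f"{x:8.2f}"))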
Types of Tables
Textual (Word) Tables: Oftentimes, you may need tables that have textual data in the body.
Usually this is the case when you're dealing with qualitative data. These tables serve the same
function as any table--to make comparisons of items easy. These tables are also used when you
want to present examples, which may be grouped in a certain way, or when you want to show
categories of different items.
Example 3 : Text Table
Statistical Tables: These tables can present descriptive or inferential statistics
or both. Descriptive statistics are tabulations such as mean, standard deviation, mode,
range, or frequency. Inferential statistics refers to statistical tests. In such tables,
statistical test values are presented.
Example 4 : Statistical Table
Numerical Tables: These are the most common type of table; they typically represent quantitative data, but sometimes may present a combination of quantitative and qualitative
quantitative data, but sometimes may present a combination of quantitative and qualitative
data. As its name suggests, most of the body of the table consists of specific number values.
Example 5 : Numerical Table
Best Practice
Best practice in tabular presentation refers to designing tables that can be read easily and
quickly. The faster someone can read a table, the better it is. Remember these two words:
ease and speed! There are ways to accomplish this by manipulating contrast, alignment, spacing
and ordering. All these elements help to achieve clarity so that the reader can pick out
specific data and understand the discussion of your results.
Contrast: By making key elements of your table stand out from one another, you can
group or distinguish data from each other. For example, you could bold the title, dividers,
or headings. You can use different font sizes, styles, or letter cases for different elements
in your table. You can use color to emphasize backgrounds or text. Regardless of which of
these you choose for creating contrast, remember that "less is more" when it comes to creating
an effective table.
Alignment: Alignment is important for keeping your table neat and clear. For example,
all numbers in the columns should line up with each other and with their headings. Structure
your table so that all elements seem to be properly aligned with each other--titles, headings,
data, dividers, notes.
Ordering: Group items that are similar to give a sense of structure and meaning to your
table. This will also help break up the data, making it easier on the eye. Another way to order
data is to indent subordinate data when it falls below specific column data.
Spacing: Manipulating the "white areas" around the table can also help clarify and
organize the table. For example, you should always have enough space around and between text
so that it stands out. You can use space to separate groups or emphasize them.
Example 6 : Best Practice
This example shows the use of contrast to set the two types of forests apart. It
also uses bold-faced and varying font sizes to distinguish the column headings
from the table spanners. Spacing beneath the column headings creates additional
contrast. Using a border around the table also makes it stand out more and contains
the data nicely. Notice that all numbers are aligned by decimal place and that all
text is centered. The next example shows this same table without the use of "best practice."
Example 7 : Poor Practice
With all the gridlines, lack of contrast, and poor use of space and alignment, this table
is difficult to read.
Example 8 : Best Practice
This table makes use of appropriate groupings in order to break the list apart and make
it easy to follow. Good use of space and varying size and bold-faced fonts also create
a nice contrast and make it easy on the eyes. Below is the same table following poor practice.
Example 9 : Poor Practice
Notice that this table lacks grouping of any kind, making it difficult to sort through the list.
Title and header formatting is not consistent throughout, and the numerical data are centered instead
of decimal-aligned, making it difficult to compare values. There is no contrast or use of space, so this
table is a lot less easy on the eyes than the one above.
Linear Algebra/Topic: Linear Recurrences
In 1202 Leonardo of Pisa, also known as Fibonacci, posed this problem.
A certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive?
This moves past an elementary exponential growth model for population increase to include the fact that there is an initial period where newborns are not fertile. However, it retains other simplifying assumptions, such as that there is no gestation period and no mortality.
The number of newborn pairs that will appear in the upcoming month is simply the number of pairs that were alive last month, since those will all be fertile, having been alive for two months. The number of pairs alive next month is the sum of the number alive in the current month and the number of newborns. Writing f(n) for the number of pairs alive in month n, this gives f(n+1) = f(n) + f(n-1).
This equation is an example of a recurrence relation (it is called that because the values of f are calculated by looking at other, prior values of f). From it, we can easily answer Fibonacci's twelve-month question.
The sequence of numbers defined by the above equation (of which the first few are 1, 1, 2, 3, 5, 8, ...) is the Fibonacci sequence. The material of this chapter can be used to give a formula with which we can calculate f(n) without having to first find f(2), f(3), etc.
For that, observe that the recurrence is a linear relationship and so we can give a suitable matrix formulation of it.
Then, where we write T for the matrix with rows (1 1) and (1 0), and v_n for the vector with components f(n+1) and f(n), we have that v_(n+1) = T v_n and therefore v_n = T^n v_0. The advantage of this matrix formulation is that by diagonalizing T we get a fast way to compute its powers: where T = P D P^(-1) we have T^n = P D^n P^(-1), and the n-th power of the diagonal matrix D is the diagonal matrix whose entries are the n-th powers of the entries of D.
The characteristic equation of T is λ^2 - λ - 1 = 0. The quadratic formula gives its roots as (1 + √5)/2 and (1 - √5)/2. Diagonalizing gives this.
Introducing the vectors and taking the n-th power, we have
We can compute f(n) from the second component of that equation; it is a combination of n-th powers of the two roots (1 + √5)/2 and (1 - √5)/2.
Notice that f(n) is dominated by the term involving (1 + √5)/2, because (1 - √5)/2 has absolute value less than one and so its powers go to zero. Although we have extended the elementary model of population growth by adding a delay period before the onset of fertility, we nonetheless still get an (asymptotically) exponential function.
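A short numerical sketch of this derivation is given below. It assumes the starting values f(0) = f(1) = 1 (one common convention for the rabbit problem), and it uses exact integer matrix powers rather than the diagonalization itself, since floating-point powers of (1 ± √5)/2 lose precision:

def fib_iterative(n, f0=1, f1=1):
    """Compute f(n) directly from f(n) = f(n-1) + f(n-2)."""
    a, b = f0, f1
    for _ in range(n):
        a, b = b, a + b
    return a

def mat_mult(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_power(X, n):
    """Square-and-multiply; an exact-integer alternative to diagonalizing."""
    result = [[1, 0], [0, 1]]          # identity matrix
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, X)
        X = mat_mult(X, X)
        n //= 2
    return result

T = [[1, 1], [1, 0]]
n = 30
Tn = mat_power(T, n)
# With v_0 = (f(1), f(0)) = (1, 1), the second component of T^n v_0 is f(n).
f_n = Tn[1][0] * 1 + Tn[1][1] * 1
print(f_n, fib_iterative(n))   # the two values agree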
In general, a linear recurrence relation has the form
f(n) = a_1 f(n-1) + a_2 f(n-2) + ... + a_k f(n-k)
(it is also called a difference equation). This recurrence relation is homogeneous because there is no constant term; i.e., it can be put into the form 0 = -f(n) + a_1 f(n-1) + ... + a_k f(n-k). This is said to be a relation of order k. The relation, along with the initial conditions f(0), ..., f(k-1), completely determines a sequence. For instance, the Fibonacci relation is of order 2 and it, along with the two initial conditions f(0) and f(1), determines the Fibonacci sequence simply because we can compute any f(n) by first computing f(2), f(3), etc. In this Topic, we shall see how linear algebra can be used to solve linear recurrence relations.
First, we define the vector space in which we are working. Let V be the set of functions f from the natural numbers to the real numbers. (Below we shall have functions with domain {1, 2, ...}, that is, without 0, but it is not an important distinction.)
Putting the initial conditions aside for a moment, for any recurrence, we can consider the subset S of V of solutions. For example, without initial conditions, in addition to the function given above, the Fibonacci relation is also solved by other functions, for instance any function obtained by starting with a different pair of first values.
The subset S is a subspace of V. It is nonempty because the zero function is a solution. It is closed under addition since if f and g are solutions, then
(f + g)(n) = f(n) + g(n) = [a_1 f(n-1) + ... + a_k f(n-k)] + [a_1 g(n-1) + ... + a_k g(n-k)] = a_1 (f + g)(n-1) + ... + a_k (f + g)(n-k).
And, it is closed under scalar multiplication since
(r f)(n) = r f(n) = r [a_1 f(n-1) + ... + a_k f(n-k)] = a_1 (r f)(n-1) + ... + a_k (r f)(n-k).
We can give the dimension of S. Consider the map that sends a solution function f to the vector (f(0), f(1), ..., f(k-1)) of its first k values.
Problem 3 shows that this map is linear. Because, as noted above, any solution of the recurrence is uniquely determined by the initial conditions, this map is one-to-one and onto. Thus it is an isomorphism, and thus S has dimension k, the order of the recurrence.
So (again, without any initial conditions), we can describe the set of solutions of any linear homogeneous recurrence relation of degree k by taking linear combinations of only k linearly independent functions. It remains to produce those functions.
For that, we express the recurrence with a matrix equation.
In trying to find the characteristic polynomial of the matrix, we can see the pattern in a low-order case.
Problem 4 shows that the characteristic equation is this:
±(λ^k - a_1 λ^(k-1) - a_2 λ^(k-2) - ... - a_(k-1) λ - a_k) = 0.
We call that the polynomial "associated" with the recurrence relation. (We will be finding the roots of this polynomial and so we can drop the leading ± sign as irrelevant.)
If the associated polynomial has no repeated roots then the matrix is diagonalizable and we can, in theory, get a formula for f(n) as in the Fibonacci case. But, because we know that the subspace of solutions has dimension k, we do not need to do the diagonalization calculation, provided that we can exhibit k linearly independent functions satisfying the relation.
Where r_1, r_2, ..., r_k are the distinct roots, consider the functions f_1(n) = r_1^n through f_k(n) = r_k^n of powers of those roots. Problem 5 shows that each is a solution of the recurrence and that the k of them form a linearly independent set. So, given the homogeneous linear recurrence f(n) = a_1 f(n-1) + ... + a_k f(n-k), we consider the associated equation λ^k - a_1 λ^(k-1) - ... - a_(k-1) λ - a_k = 0. We find its roots r_1, ..., r_k, and if those roots are distinct then any solution of the relation has the form f(n) = c_1 r_1^n + c_2 r_2^n + ... + c_k r_k^n for constants c_1, ..., c_k. (The case of repeated roots is also easily done, but we won't cover it here; see any text on Discrete Mathematics.)
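A numerical sketch of this whole procedure for the distinct-root case (find the roots of the associated polynomial, then solve a small linear system for the coefficients c_i from the initial values); it relies on numpy and floating-point arithmetic, so the results are approximate:

import numpy as np

def solve_recurrence(a, initial):
    """Closed form for f(n) = a[0] f(n-1) + ... + a[k-1] f(n-k), distinct roots assumed.

    `initial` holds f(0), ..., f(k-1).  Returns a function n -> f(n)."""
    k = len(a)
    # Associated polynomial: lambda^k - a[0] lambda^(k-1) - ... - a[k-1].
    roots = np.roots([1.0] + [-coef for coef in a])
    # Solve for c_1..c_k from f(n) = sum_i c_i roots[i]**n at n = 0, ..., k-1.
    A = np.array([[r**n for r in roots] for n in range(k)])
    c = np.linalg.solve(A, np.array(initial, dtype=complex))
    return lambda n: (c * roots**n).sum().real

# Fibonacci as a check: f(n) = f(n-1) + f(n-2) with f(0) = 1, f(1) = 1.
f = solve_recurrence([1, 1], [1, 1])
print([round(f(n)) for n in range(10)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]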
Now, given some initial conditions, so that we are interested in a particular solution, we can solve for c_1, ..., c_k. For instance, the polynomial associated with the Fibonacci relation is λ^2 - λ - 1, whose roots are (1 + √5)/2 and (1 - √5)/2, and so any solution of the Fibonacci equation has the form f(n) = c_1 ((1 + √5)/2)^n + c_2 ((1 - √5)/2)^n. Including the two initial conditions gives a pair of linear equations,
which yields c_1 and c_2, as was calculated above.
We close by considering the nonhomogeneous case, where the relation has the form f(n) = a_1 f(n-1) + ... + a_k f(n-k) + b for some nonzero constant b. As in the first chapter of this book, only a small adjustment is needed to make the transition from the homogeneous case. This classic example illustrates.
In 1883, Edouard Lucas posed the following problem.
In the great temple at Benares, beneath the dome which marks the center of the world, rests a brass plate in which are fixed three diamond needles, each a cubit high and as thick as the body of a bee. On one of these needles, at the creation, God placed sixty four disks of pure gold, the largest disk resting on the brass plate, and the others getting smaller and smaller up to the top one. This is the Tower of Bramah. Day and night unceasingly the priests transfer the disks from one diamond needle to another according to the fixed and immutable laws of Bramah, which require that the priest on duty must not move more than one disk at a time and that he must place this disk on a needle so that there is no smaller disk below it. When the sixty-four disks shall have been thus transferred from the needle on which at the creation God placed them to one of the other needles, tower, temple, and Brahmins alike will crumble into dust, and with a thunderclap the world will vanish.
How many disk moves will it take? Instead of tackling the sixty four disk problem right away, we will consider the problem for smaller numbers of disks, starting with three.
To begin, all three disks are on the same needle.
After moving the small disk to the far needle, the mid-sized disk to the middle needle, and then moving the small disk to the middle needle we have this.
Now we can move the big disk over. Then, to finish, we repeat the process of moving the smaller disks, this time so that they end up on the third needle, on top of the big disk.
So the thing to see is that to move the very largest disk, the bottom disk, at a minimum we must: first move the smaller disks to the middle needle, then move the big one, and then move all the smaller ones from the middle needle to the ending needle. Those three steps give us this recurrence for the minimum number of moves T(n) needed for n disks: T(n) = 2 T(n-1) + 1, with T(1) = 1.
We can easily get the first few values of T(n): 1, 3, 7, 15, 31, ...
We recognize those as being simply one less than a power of two: T(n) = 2^n - 1.
To derive this equation instead of just guessing at it, we write the original relation as -1 = -T(n) + 2 T(n-1), consider the homogeneous relation 0 = -T(n) + 2 T(n-1), get its associated polynomial λ - 2, which obviously has the single, unique root of r = 2, and conclude that functions satisfying the homogeneous relation take the form c·2^n.
That's the homogeneous solution. Now we need a particular solution.
Because the nonhomogeneous relation is so simple, in a few minutes (or by remembering the table) we can spot the particular solution T(n) = -1 (there are other particular solutions, but this one is easily spotted). So we have that, without yet considering the initial condition, any solution of T(n) = 2 T(n-1) + 1 is the sum of the homogeneous solution and this particular solution: T(n) = c·2^n - 1.
The initial condition T(1) = 1 now gives that c = 1, and we've gotten the formula that generates the table: the n-disk Tower of Hanoi problem requires a minimum of 2^n - 1 moves.
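A quick check of that conclusion, under the convention T(1) = 1 used above (this is only a numerical sanity check, not part of the original Topic):

def tower_recursive(n):
    return 1 if n == 1 else 2 * tower_recursive(n - 1) + 1

for n in range(1, 11):
    assert tower_recursive(n) == 2**n - 1    # closed form agrees with the recurrence
print([2**n - 1 for n in range(1, 11)])      # [1, 3, 7, 15, 31, 63, 127, 255, 511, 1023]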
Finding a particular solution in more complicated cases is, naturally, more complicated. A delightful and rewarding, but challenging, source on recurrence relations is (Graham, Knuth & Patashnik 1988). For more on the Tower of Hanoi, (Ball 1962) or (Gardner 1957) are good starting points. So is (Hofstadter 1985). Some computer code for trying some recurrence relations follows the exercises.
- Problem 1
Solve each of these homogeneous linear recurrence relations.
- Problem 2
Give a formula for the relations of the prior exercise, with these initial conditions.
- , , .
- Problem 3
Check that the isomorphism given between the space of solutions and the space of initial-condition vectors is a linear map. It is argued above that this map is one-to-one. What is its inverse?
- Problem 4
Show that the characteristic equation of the matrix is as stated, that is, that it is the polynomial associated with the relation. (Hint: expanding down the final column and using induction will work.)
- Problem 5
Given a homogeneous linear recurrence relation f(n) = a_1 f(n-1) + ... + a_k f(n-k), let r_1, ..., r_k be the roots of the associated polynomial.
- Prove that each function f_i(n) = r_i^n satisfies the recurrence (without initial conditions).
- Prove that no r_i is 0.
- Prove that the set {f_1, ..., f_k} is linearly independent.
- Problem 6
(This refers to the value given in the computer code below.) Transferring one disk per second, how many years would it take the priests at the Tower of Hanoi to finish the job?
This code allows the generation of the first few values of a function defined by a recurrence and initial conditions. It is in the Scheme dialect of LISP (specifically, it was written for A. Jaffer's free scheme interpreter SCM, although it should run in any Scheme implementation).
First, the Tower of Hanoi code is a straightforward implementation of the recurrence.
(define (tower-of-hanoi-moves n)
  (if (= n 1)
      1
      (+ (* (tower-of-hanoi-moves (- n 1))
            2)
         1) ) )
(Note for readers unused to recursive code: to compute the number of moves for n disks, the computer is told to compute 2 T(n-1) + 1, which requires, of course, computing T(n-1). The computer puts the "times 2" and the "plus 1" aside for a moment to do that. It computes T(n-1) by using this same piece of code (that's what "recursive" means), and to do that it is told to compute 2 T(n-2) + 1. This keeps up (the next step is to try to do T(n-2) while the other arithmetic is held in waiting), until, after n-1 steps, the computer tries to compute T(1). It then returns T(1) = 1, which now means that the computation of T(2) can proceed, etc., up until the original computation of T(n) finishes.)
The next routine calculates a table of the first few values. (Some language notes: '() is the empty list, that is, the empty sequence, and cons pushes something onto the start of a list. Note that, in the last line, the procedure proc is called on argument n.)
(define (first-few-outputs proc n)
  (first-few-outputs-aux proc n '()) )

(define (first-few-outputs-aux proc n lst)
  (if (< n 1)
      lst
      (first-few-outputs-aux proc (- n 1) (cons (proc n) lst)) ) )
The session at the SCM prompt went like this.
>(first-few-outputs tower-of-hanoi-moves 64)
Evaluation took 120 mSec
(1 3 7 15 31 63 127 255 511 1023 2047 4095 8191 16383 32767
65535 131071 262143 524287 1048575 2097151 4194303 8388607
16777215 33554431 67108863 134217727 268435455 536870911
1073741823 2147483647 4294967295 8589934591 17179869183
34359738367 68719476735 137438953471 274877906943 549755813887
1099511627775 2199023255551 4398046511103 8796093022207
17592186044415 35184372088831 70368744177663 140737488355327
281474976710655 562949953421311 1125899906842623
2251799813685247 4503599627370495 9007199254740991
18014398509481983 36028797018963967 72057594037927935
144115188075855871 288230376151711743 576460752303423487
1152921504606846975 2305843009213693951 4611686018427387903
This is the list of the minimum number of moves for 1 through 64 disks. (The mSec figure came from a 50 MHz '486 running in an XTerm of XWindow under Linux. The session was edited to put line breaks between numbers.)
- Ball, W.W. (1962), Mathematical Recreations and Essays, MacMillan (revised by H.S.M. Coxeter).
- De Parville (1884), La Nature, I, Paris, pp. 285-286.
- Gardner, Martin (May 1957), "Mathematical Games: About the remarkable similarity between the Icosian Game and the Tower of Hanoi", Scientific American: 150-154.
- Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1988), Concrete Mathematics, Addison-Wesley.
- Hofstadter, Douglas R. (1985), Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books.
In geometry, the circumscribed circle or circumcircle of a polygon is a circle which passes through all the vertices of the polygon. The center of this circle is called the circumcenter and its radius is called the circumradius.
A polygon which has a circumscribed circle is called a cyclic polygon (sometimes a concyclic polygon, because the vertices are concyclic). All regular simple polygons, isosceles trapezoids, all triangles and all rectangles are cyclic.
A related notion is the one of a minimum bounding circle, which is the smallest circle that completely contains the polygon within it. Not every polygon has a circumscribed circle, as the vertices of a polygon do not need to all lie on a circle, but every polygon has a unique minimum bounding circle, which may be constructed by a linear time algorithm. Even if a polygon has a circumscribed circle, it may not coincide with its minimum bounding circle; for example, for an obtuse triangle, the minimum bounding circle has the longest side as diameter and does not pass through the opposite vertex.
All triangles are cyclic, i.e. every triangle has a circumscribed circle.[nb 1]
The circumcenter of a triangle can be found as the intersection of any two of the three perpendicular bisectors. (A perpendicular bisector is a line that forms a right angle with one of the triangle's sides and intersects that side at its midpoint.) This is because the circumcenter is equidistant from any pair of the triangle's vertices, and all points on the perpendicular bisectors are equidistant from two of the vertices of the triangle.
An alternate method to determine the circumcenter: draw any two lines, each departing from a vertex at an angle with the common side equal to 90 degrees minus the angle of the opposite vertex; the two lines intersect at the circumcenter.
In coastal navigation, a triangle's circumcircle is sometimes used as a way of obtaining a position line using a sextant when no compass is available. The horizontal angle between two landmarks defines the circumcircle upon which the observer lies.
The circumcenter's position depends on the type of triangle:
- If and only if a triangle is acute (all angles smaller than a right angle), the circumcenter lies inside the triangle.
- If and only if it is obtuse (has one angle bigger than a right angle), the circumcenter lies outside the triangle.
- If and only if it is a right triangle, the circumcenter lies at the center of the hypotenuse. This is one form of Thales' theorem.
The diameter of the circumcircle can be computed as the length of any side of the triangle, divided by the sine of the opposite angle. (As a consequence of the law of sines, it does not matter which side is taken: the result will be the same.) The triangle's nine-point circle has half the diameter of the circumcircle. The diameter of the circumcircle of the triangle ΔABC is
diameter = abc / (2 · area) = abc / (2√(s(s − a)(s − b)(s − c))),
where a, b, c are the lengths of the sides of the triangle and s = (a + b + c)/2 is the semiperimeter. The expression √(s(s − a)(s − b)(s − c)) is the area of the triangle, by Heron's formula. Trigonometric expressions for the diameter of the circumcircle include (p. 379)
diameter = a / sin A = b / sin B = c / sin C.
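A small sketch of this computation from the three side lengths alone (an illustration, not code from the article):

import math

def circumradius(a, b, c):
    """Circumradius R = abc / (4 * area), with the area from Heron's formula."""
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return a * b * c / (4.0 * area)

# For a 3-4-5 right triangle the hypotenuse is a diameter, so R = 2.5.
print(circumradius(3, 4, 5))   # 2.5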
The useful minimum bounding circle of three points is defined either by the circumcircle (where three points are on the minimum bounding circle) or by the two points of the longest side of the triangle (where the two points define a diameter of the circle). It is common to confuse the minimum bounding circle with the circumcircle.
The circumcircle of three collinear points is the line on which the three points lie, often referred to as a circle of infinite radius. Nearly collinear points often lead to numerical instability in computation of the circumcircle.
Circumcircle equations
Suppose that A = (A_x, A_y), B = (B_x, B_y), and C = (C_x, C_y) are the coordinates of points A, B, and C. The circumcircle is then the locus of points v = (v_x, v_y) in the Cartesian plane satisfying the equations
|v − u|² = r²,  |A − u|² = r²,  |B − u|² = r²,  |C − u|² = r²,
guaranteeing that the points A, B, C, and v are all the same distance r from the common center u of the circle. Using the polarization identity, these equations reduce to the condition that the matrix whose rows are (|v|², v_x, v_y, 1), (|A|², A_x, A_y, 1), (|B|², B_x, B_y, 1), and (|C|², C_x, C_y, 1) has a nonzero kernel; equivalently, the circumcircle is the locus of points v at which the determinant of this matrix vanishes.
Expanding by cofactor expansion along the first row, write the resulting cofactors as a, b, and the vector S = (S_x, S_y);
we then have a|v|² − 2S·v − b = 0 and, assuming the three points were not in a line (otherwise the circumcircle is that line that can also be seen as a generalized circle with S at infinity), |v − S/a|² = b/a + |S|²/a², giving the circumcenter S/a and the circumradius √(b/a + |S|²/a²). A similar approach allows one to deduce the equation of the circumsphere of a tetrahedron.
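The same idea in coordinates: subtracting the equation for A from the equations for B and C leaves two linear equations for the center u, which a few lines of numpy can solve (an illustration, not code from the article):

import numpy as np

def circumcircle(A, B, C):
    """Return (center, radius) of the circle through three non-collinear 2D points."""
    A, B, C = map(np.asarray, (A, B, C))
    # 2(B - A).u = |B|^2 - |A|^2   and   2(C - A).u = |C|^2 - |A|^2
    M = 2.0 * np.array([B - A, C - A])
    rhs = np.array([B @ B - A @ A, C @ C - A @ A])
    u = np.linalg.solve(M, rhs)
    return u, np.linalg.norm(u - A)

center, radius = circumcircle((0, 0), (1, 0), (0, 1))
print(center, radius)   # [0.5 0.5] 0.7071...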
Additionally, the circumcircle of a triangle embedded in d dimensions can be found using a generalized method. Let A, B, and C be d-dimensional points, which form the vertices of a triangle. We start by translating the system to place C at the origin, setting a = A − C and b = B − C.
The circumradius, r, is then
r = |a| |b| |a − b| / (2 |a × b|) = |a − b| / (2 sin θ),
where θ is the interior angle between a and b. The circumcenter, p0, is given by
p0 = C + [(|a|² b − |b|² a) × (a × b)] / (2 |a × b|²).
This formula only works in three dimensions, as the cross product is not defined in other dimensions, but it can be generalized to the other dimensions by replacing the cross products with the identities
|a × b|² = |a|² |b|² − (a·b)²   and   u × (a × b) = (u·b) a − (u·a) b.
Circumcenter coordinates
Cartesian coordinates
The Cartesian coordinates of the circumcenter U = (U_x, U_y) are
U_x = [(A_x² + A_y²)(B_y − C_y) + (B_x² + B_y²)(C_y − A_y) + (C_x² + C_y²)(A_y − B_y)] / D,
U_y = [(A_x² + A_y²)(C_x − B_x) + (B_x² + B_y²)(A_x − C_x) + (C_x² + C_y²)(B_x − A_x)] / D,
where D = 2[A_x(B_y − C_y) + B_x(C_y − A_y) + C_x(A_y − B_y)].
Without loss of generality this can be expressed in a simplified form after translation of the vertex A to the origin of the Cartesian coordinate system, i.e., when A′ = A − A = (A′_x, A′_y) = (0,0). In this case, the coordinates of the vertices B′ = B − A and C′ = C − A represent the vectors from vertex A′ to these vertices. Observe that this trivial translation is possible for all triangles, and the circumcenter coordinates of the triangle A′B′C′ follow as
U′_x = [C′_y(B′_x² + B′_y²) − B′_y(C′_x² + C′_y²)] / D′,
U′_y = [B′_x(C′_x² + C′_y²) − C′_x(B′_x² + B′_y²)] / D′,
where D′ = 2(B′_x C′_y − B′_y C′_x).
Barycentric coordinates as a function of the side lengths
The circumcenter has barycentric coordinates
a²(b² + c² − a²) : b²(c² + a² − b²) : c²(a² + b² − c²),
where a, b, c are edge lengths (BC, CA, AB respectively) of the triangle.
Barycentric coordinates from cross- and dot-products
In Euclidean space, there is a unique circle passing through any given three non-collinear points P1, P2, and P3. Using Cartesian coordinates to represent these points as spatial vectors, it is possible to use the dot product and cross product to calculate the radius and center of the circle.
Then the radius of the circle is given by
r = |P1 − P2| |P2 − P3| |P3 − P1| / (2 |(P1 − P2) × (P2 − P3)|).
The center of the circle is given by the linear combination
P_c = α P1 + β P2 + γ P3,
where
α = |P2 − P3|² (P1 − P2)·(P1 − P3) / (2 |(P1 − P2) × (P2 − P3)|²),
β = |P3 − P1|² (P2 − P1)·(P2 − P3) / (2 |(P1 − P2) × (P2 − P3)|²),
γ = |P1 − P2|² (P3 − P1)·(P3 − P2) / (2 |(P1 − P2) × (P2 − P3)|²).
Parametric equation of a triangle's circumcircle
Hence, given the radius, r, center, P_c, a point on the circle, P_0, and a unit normal n̂ of the plane containing the circle, one parametric equation of the circle starting from the point P_0 and proceeding in a positively oriented (i.e., right-handed) sense about n̂ is the following:
P(s) = P_c + cos(s) (P_0 − P_c) + sin(s) (n̂ × (P_0 − P_c)).
The angles which the circumscribed circle forms with the sides of the triangle coincide with angles at which sides meet each other. The side opposite angle α meets the circle twice: once at each end; in each case at angle α (similarly for the other two angles). The alternate segment theorem states that the angle between the tangent and chord equals the angle in the alternate segment.
Triangle centers on the circumcircle of triangle ABC
In this section, the vertex angles are labeled A, B, C and all coordinates are trilinear coordinates:
- Steiner point = bc / (b2 − c2) : ca / (c2 − a2) : ab / (a2 − b2) = the nonvertex point of intersection of the circumcircle with the Steiner ellipse. (The Steiner ellipse, with center = centroid(ABC), is the ellipse of least area that passes through A, B, and C. An equation for this ellipse is 1/(ax) + 1/(by) + 1/(cz) = 0.)
- Tarry point = sec (A + ω) : sec (B + ω) : sec (C + ω) = antipode of the Steiner point
- Focus of the Kiepert parabola = csc (B − C) : csc (C − A) : csc (A − B).
Other properties
The circumcircle radius is no smaller than twice the incircle radius (Euler's triangle inequality).
The distance between the circumcenter and the incenter is √(R(R − 2r)), where r is the incircle radius and R is the circumcircle radius.
The product of the incircle radius and the circumcircle radius of a triangle with sides a, b, and c is
rR = abc / (2(a + b + c)).
Cyclic quadrilaterals
Quadrilaterals that can be circumscribed have particular properties including the fact that opposite angles are supplementary angles (adding up to 180° or π radians).
Cyclic n-gons
For a cyclic polygon with an odd number of sides, all angles are equal if and only if the polygon is regular. A cyclic polygon with an even number of sides has all angles equal if and only if the alternate sides are equal (that is, sides 1, 3, 5, ... are equal, and sides 2, 4, 6, ... are equal).
In any cyclic n-gon with even n, the sum of one set of alternate angles (the first, third, fifth, etc.) equals the sum of the other set of alternate angles. This can be proven by induction from the n=4 case, in each case replacing a side with three more sides and noting that these three new sides together with the old side form a quadrilateral which itself has this property; the alternate angles of the latter quadrilateral represent the additions to the alternate angle sums of the previous n-gon.
See also
- Inscribed circle
- Jung's theorem, an inequality relating the diameter of a point set to the radius of its minimum bounding circle
- Lester's theorem
- Circumscribed sphere
- Triangle center
- Japanese theorem for cyclic quadrilaterals
- Japanese theorem for cyclic polygons
- This can be proven on the grounds that the general equation for a circle with center (a, b) and radius r in the Cartesian coordinate system is (x − a)² + (y − b)² = r²; since this equation has three free parameters (a, b, and r), three non-collinear points determine a unique circle.
- Dörrie, Heinrich, 100 Great Problems of Elementary Mathematics, Dover, 1965.
- Wolfram page on barycentric coordinates
- Nelson, Roger, "Euler's triangle inequality via proof without words," Mathematics Magazine 81(1), February 2008, 58-61.
- Johnson, Roger A., Advanced Euclidean Geometry, Dover, 2007 (orig. 1929), p. 189, #298(d).
- De Villiers, Michael. "Equiangular cyclic and equilateral circumscribed polygons," Mathematical Gazette 95, March 2011, 102-107.
- Buchholz, Ralph H.; MacDougall, James A. (2008), "Cyclic polygons with rational sides and area", Journal of Number Theory 128 (1): 17–48, doi:10.1016/j.jnt.2007.05.005, MR 2382768.
- Coxeter, H.S.M. (1969). "Chapter 1". Introduction to geometry. Wiley. pp. 12–13. ISBN 0-471-50458-0.
- Megiddo, N. (1983). "Linear-time algorithms for linear programming in R3 and related problems". SIAM Journal on Computing 12 (4): 759–776. doi:10.1137/0212052.
- Kimberling, Clark (1998). "Triangle centers and central triangles". Congressus Numerantium 129: i–xxv, 1–295.
- Pedoe, Dan (1988). Geometry: a comprehensive course. Dover.
- Derivation of formula for radius of circumcircle of triangle at Mathalino.com
- Semi-regular angle-gons and side-gons: respective generalizations of rectangles and rhombi at Dynamic Geometry Sketches, interactive dynamic geometry sketch.
- Weisstein, Eric W., "Circumcircle", MathWorld.
- Weisstein, Eric W., "Cyclic Polygon", MathWorld.
- Weisstein, Eric W., "Steiner circumellipse", MathWorld.
This section describes physical, oceanographic, and climatic features in Puget Sound that may contribute to isolation between populations of the three gadiform species considered in this review. This section further provides a basis for identifying climatic and biological factors that may contribute to extinction risk for these species. The following summary primarily considers the marine waters north and west of Puget Sound that lie south of the boundary between Canada and the United States; however, because the three gadiform species are also found in the Strait of Georgia, a brief description of this system will also be presented. Puget Sound is a fjord-like estuary located in northwest Washington state and covers an area of about 2,330 km², including 3,700 km of coastline. It is subdivided into five basins or regions: 1) North Puget Sound, 2) Main Basin, 3) Whidbey Basin, 4) South Puget Sound, and 5) Hood Canal (Fig. 4) (Fig. 5). The average depth of Puget Sound is 62.5 m at mean low tide, and the average surface water temperature is 12.8°C in summer and 7.2°C in winter (Staubitz et al. 1997). Estuarine circulation in Puget Sound is driven by tides, gravitational forces, and freshwater inflows. For example, the average daily difference between high and low tide varies from 2.4 m at the northern end of Puget Sound to 4.6 m at its southern end. Tidal oscillations substantially reduce the flushing rate of nutrients and contaminants. Concentrations of nutrients (i.e., nitrates and phosphates) are consistently high throughout most of the Sound, largely due to the flux of oceanic water into the basin (Harrison et al. 1994). The freshwater inflow into Puget Sound is about 900 million gallons/day (gpd) (3.4 trillion liters/day). The major sources of freshwater are the Skagit and Snohomish Rivers located in Whidbey Basin (Table 1); however, the annual amount of freshwater entering Puget Sound is only 10 to 20% of the amount entering the Strait of Georgia, primarily through the Fraser River. The Fraser River has a drainage area of 234,000 km² (Bocking 1997). The rate of flow in the Fraser River ranges from an average of 750 m³/sec in the winter to an average of 11,500 m³/sec during the spring freshet, although flows of 20,000 m³/sec are not uncommon during the spring floods (Bocking 1997).
Eight major habitats occur in Puget Sound; kelp beds and eelgrass meadows cover the largest area, almost 1000 km². Other major habitats include subaerial and intertidal wetlands (176 km²), and mudflats and sandflats (246 km²). The extent of some of these habitats has markedly declined over the last century. Hutchinson (1988) indicated that overall losses since European settlement, by area, of intertidal habitat were 58% for Puget Sound in general and 18% for the Strait of Georgia. Four river deltas (the Duwamish, Lummi, Puyallup, and Samish) have lost greater than 92% of their intertidal marshes (Simenstad et al. 1982, Schmitt et al. 1994). At least 76% of the wetlands around Puget Sound have been eliminated, especially in urbanized estuaries. Substantial declines of mudflats and sandflats have also occurred in the deltas of these estuaries (Levings and Thom 1994). The human population in the Puget Sound region is estimated to be about 3.6 million.
Table 1. Mean annual streamflow of major Puget Sound streams (from Staubitz et al. 1997). Data converted from U.S. Customary to metric units.
|Gaging Station Name||Drainage area (km2)||Mean annual flow (m3/sec)||Mean annual runoff (cm)||Period of record (years)|
|Nooksack River at Ferndale||2,036||87.3||168||27|
|Samish River near Burlington||228||6.9||96||28|
|Skagit River near Mt. Vernon||8,011||469.9||185||53|
|N. F. Stillaguamish River at Arlington||679||53.5||249||65|
|Snohomish River near Monroe||3,981||270.1||214||30|
|Cedar River at Renton||477||18.9||125||48|
|Green River at Tukwila||1,140||42.2||117||27|
|Puyallup River at Puyallup||2,455||94.3||121||79|
|Nisqually River at McKenna||1,339||36.5||86||39|
|Deschutes River at Tumwater||420||9.3||70||6|
|Skokomish River near Potlatch||588||33.4||76||52|
|Dosewallips River near Brinnon||244||10.7||305||20|
|Dungeness River near Sequim||404||10.7||83||67|
|Elwha River near Port Angeles||697||42.5||192||83|
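The caption notes that the values were converted from U.S. customary units; a minimal sketch of those conversions is below (the sample inputs are approximate back-conversions of the table's first row, shown only to illustrate the arithmetic):

# Conversion factors for the quantities in Table 1.
CFS_TO_M3S = 0.3048 ** 3      # cubic feet per second -> cubic meters per second
MI2_TO_KM2 = 1.609344 ** 2    # square miles -> square kilometers
IN_TO_CM   = 2.54             # inches -> centimeters

def streamflow_metric(flow_cfs, area_mi2, runoff_in):
    """Convert a gaging-station record to the metric units used in Table 1."""
    return (round(flow_cfs * CFS_TO_M3S, 1),
            round(area_mi2 * MI2_TO_KM2),
            round(runoff_in * IN_TO_CM))

print(streamflow_metric(3083, 786, 66))   # (87.3, 2036, 168)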
The Puget Sound Basin falls within the Puget Lowland, a portion of a low-lying area extending from the lower Fraser River Valley southward to the Willamette Lowland (Burns 1985). In the distant past, the Puget Lowland was drained by numerous small rivers that flowed northward from the Cascade and Olympic mountains and emptied into an earlier configuration of the Strait of Juan de Fuca. During the Pleistocene, massive piedmont glaciers, as much as 1,100 m thick, moved southward from the Coast Mountains of British Columbia and carved out the Strait of Juan de Fuca and Puget Sound. The deepest basins were created in northern Puget Sound in and around the San Juan Islands. About 15,000 years ago, the southern tongue of the last glacier receded rapidly, leaving the lowland covered with glacial deposits and glacial lakes, and revealing the Puget Sound Basin (Burns 1985). The large glacially formed troughs of Puget Sound were initially occupied by large proglacial lakes that drained southward (Thorson 1980). Almost two dozen deltas were developed in these lakes as the result of streams flowing from the melting ice margins.
Considerable evidence indicates that climate in the Puget Sound region is cyclical, with maxima (warm, dry periods) and minima (cold, wet periods) occurring at decadal intervals. For example, according to the Pacific Northwest Index (PNI), since 1893 there have been about five minima and four maxima (Fig. 6) (Ebbesmeyer and Strickland 1995). Three minima occurred between 1893 and 1920, one between the mid 1940s and 1960, and one between the mid 1960s and mid 1970s. Two maxima occurred between the early 1920s and the early 1940s, and two more occurred between the late 1970s and 1997.
Mantua et al. (1997) and Hare and Mantua (2000) evaluated relationships
between interdecadal climate variability and fluctuations in the abundance
and distribution of marine biota. These
authors used statistical methods to identify the Pacific Decadal Oscillation
(PDO). The PDO shows predominantly
positive epochs between 1925 and 1946 and following 1977, and a negative
epoch between 1947 and 1976 (Fig. 7). For
Washington State, positive epochs are characterized by increased flow of
relatively warm humid air and less than normal precipitation, and the negative
epochs correspond to a cool-wet climate.
Mantua et al. (1997) reported connections between the PDO and indicators of populations of Alaskan sockeye and pink salmon and Washington-Oregon-California coho and chinook salmon, although the coho and chinook populations were highest during the negative epochs. Hare and Mantua (2000) found evidence for major ecological and climate changes for the decade following 1977 (a positive epoch) (Fig. 8). They also found less powerful evidence of a climate regime shift (a negative epoch) following 1989, demonstrated primarily by ecological changes. Examples of ecological parameters that were correlated with these decadal changes included annual catches of Alaskan coho and sockeye salmon, annual catches of Washington and Oregon coho and chinook salmon, biomass of zooplankton in the California Current, and the Oyster Condition Index for oysters in Willapa Bay, Washington (Hare and Mantua 2000).
Few climatological records are available prior to the 1890s. Proxy measures of climatic variation have been used to reconstruct temperature fluctuations in the Pacific Northwest. Graumlich and Brubaker (1986) reported correlations between annual growth records for larch and hemlock trees located near Mt. Rainier and temperature and snow depth. A regression model was used to reconstruct temperatures from 1590 to 1913. Their major findings were that temperatures prior to 1900 were approximately 1°C lower than those of the 1900s, and that only the temperature pattern in the late 1600s resembled that of the 1900s.
The North Puget Sound region is demarcated to the north by the U.S.-Canadian border, to the west by a line due north of the Sekiu River, to the south by the Olympic Peninsula, and to the east by a line between Point Wilson (near Port Townsend) and Partridge Point on Whidbey Island and the mainland between Anacortes and Blaine, WA (Fig. 4). The predominant feature of the North Sound is the Strait of Juan de Fuca, which is 160 km long and ranges from 22 km wide at its western end to over 40 km at its eastern end (Thomson 1994).
One of the deepest sections of this region is near the western mouth (about 200 m) (Holbrook et al. 1980), whereas the deepest sections of eastern portions are located northwest of the San Juan Islands (340-380 m) (PSWQA 1987). Subtidal depths range from 20 to 60 m in most of the northwest part of the region. Deeper areas near the entrance to the Main Basin north of Admiralty Inlet range from 120 to 180 m in depth (PSWQA 1987).
Most of the rocky-reef habitat in Puget Sound is located in this region. Pacunski and Palsson (1998) estimated that about 200 km² of rocky-reef habitat was present in this region, whereas only about 14 km² was found in the remaining Puget Sound basins. Several rockfish species, including copper and quillback rockfish, prefer rocky-reef habitats (Pacunski and Palsson 1998).
The surface sediment of the Strait of Juan de Fuca is composed primarily of sand, which tends to be coarser, including some gravel, toward the eastern portion of North Sound and gradually becomes finer towards the mouth (Anderson 1968). Many of the bays and sounds in the eastern portion of the North Sound have subtidal surface sediments consisting of mud or mixtures of mud and sand (PSWQA 1987, WDOE 1998). The area just north of Admiralty Inlet is primarily gravel in its deeper portions, and a mixture of sand and gravel in its shallower portions, whereas the shallow areas north of the inlet on the western side of Whidbey Island and east of Protection Island consist of muddy-sand (Roberts 1979). The majority of the subtidal surface sediments among the San Juan Islands consist of mixtures of mud and sand. Within the intertidal zone, 61.2 ± 49.7% of the area also has mixed fine sediment and 22.6 ± 27.5% has sandy sediment (Bailey et al. 1998).
The Strait of Juan de Fuca is a weakly stratified, positive estuary with strong tidal currents (Thomson 1994). The western end of the Strait is strongly influenced by ocean processes, whereas the eastern end is influenced by intense tidal action occurring through and near the entrances to numerous narrow passages. Seasonal variability in temperature and salinity is small because the waters are vertically well mixed (Thomson 1994). On average, freshwater runoff makes up about 7% of the water by volume in the Strait and is derived primarily from the Fraser River. Generally, the circulation in the Strait consists of seaward surface flow of diluted seawater (< 30.0‰) in the upper layer and an inshore flow of saline oceanic water (> 33.0‰) at depth (Thomson 1994, Collias et al. 1974). Exceptions include an easterly flow of surface waters near the shoreline between Port Angeles and Dungeness Spit, landward flows of surface waters in many of the embayments and passages, and flows of surface water southward toward the Main Basin near Admiralty Inlet (PSWQA 1987).
Temperatures generally range between 7° and 11°C, although occasionally surface temperatures reach as high as 14°C (WDOE 1999). In the eastern portion of North Sound, temperature and salinity vary from north to south, with the waters in the Strait of Georgia being slightly warmer than the waters near Admiralty Inlet. Waters near Admiralty Inlet also tended to have higher salinities than waters to the north (WDOE 1999). Dissolved oxygen levels vary seasonally, with lowest levels of about 4 mg/L at depth during the summer months, and highest levels of about 8 mg/L near the surface.
Eelgrass is the primary vegetation in the intertidal areas of the Strait of Juan de Fuca, covering 42.2 ± 27.2% of the intertidal area (Fig. 9), and green algae is the second most common, covering 4.4 ± 3.7% of the intertidal area (Bailey et al. 1998). About 45% of the shoreline of this region consists of kelp habitat, compared to only 11% of the shoreline of the four Puget Sound Basins (Shaffer 1998). Nevertheless, each area holds approximately 50% of the total kelp resource. Most species of kelp are associated with shoreline exposed to wave action, whereas eelgrass is found in protected areas, such as Samish and Padilla Bays (Fig. 10). Some of the densest kelp beds in Puget Sound are found in the Strait of Juan de Fuca. Kelp beds at the north end of Protection Island declined drastically between 1989 and 1997, decreasing from about 181 acres to "nothing" (Sewell 1999). The cause of this decline is currently unknown.
The North Puget Sound Basin is bordered primarily by rural areas with a few localized industrial developments (PSWQA 1988).About 71% of the area draining into North Sound is forested, 6% is urbanized, and 15% is used for agriculture.This area, among the five Puget Sound basins, is used most heavily for agriculture.The main human population in this area centers around Port Angeles (1996 population census 19,200), Port Townsend (7,000), Anacortes (11,500), and Bellingham (58,300).About 10% of the total amount of wastes discharged from point-sources into Puget Sound comes from urban and industrial sources in this basin (PSWQA 1988).About 17% of the nutrients (in the form of inorganic nitrogen) entering Puget Sound originate from rivers carrying runoff from areas of agricultural and forest production (Embrey and Inkpen 1998).The Washington State Department of Natural Resources (WDNR 1998) estimated that 21% of the shoreline in this area has been modified by human activities.
The 75 km-long Main Basin is delimited to the north by a line between Point Wilson (near Port Townsend) and Partridge Point on Whidbey Island, to the south by Tacoma Narrows, and to the east by a line between Possession Point on Whidbey Island and Meadow Point (near Everett) (Fig. 4).The western portion of the Main Basin includes such water bodies as Sinclair and Dyes inlets, and Colvos and Dalco passages.Large embayments on the east side include Elliott and Commencement bays.
Among the most important bathymetric features of the Main Basin are the sills at its northern and southern ends. The sill at the north end of Admiralty Inlet is 30 km wide and is 65 m deep at its shallowest point. The sill at Tacoma Narrows is 45 m deep (Burns 1985). South of Admiralty Inlet, depths generally range from 100 to 140 m in the central part of the basin, and 10 to 100 m in the waterways west of Bainbridge and Vashon islands. The central basin consists of four sub-basins: 1) one near the southern end of Admiralty Inlet, west of Marrowstone Island, with depths to 190 m, 2) one near the southern tip of Whidbey Island with depths to 250 m, 3) one west of Port Madison, north of Seattle, with depths to 290 m, and 4) one south of Seattle, near Point Pulley, with depths to about 250 m (Burns 1985). Elliott and Commencement bays, associated with Seattle and Tacoma, respectively, are relatively deep, with depths in excess of 150 m. Freshwater flows into Elliott Bay through the Duwamish-Green River System, and into Commencement Bay through the Puyallup River.
Subtidal surface sediments in Admiralty Inlet tend to consist largely of sand and gravel, whereas sediments just south of the inlet and southwest of Whidbey Island are primarily sand (PSWQA 1987).Sediments in the deeper areas of the central portion of the Main Basin generally consist of mud or sandy mud (PSWQA 1987, WDOE 1998).Sediments in the shallower and intertidal areas of the Main Basin are mixed mud, sand, and gravel.Bailey et al. (1998) reported that 92% of the intertidal area of the Main Basin consisted of mixed sand and gravel.A similar pattern is also found in the bays and inlets bordering this basin.
About 30% of the freshwater flow into the Main Basin is derived from the Skagit River. The Main Basin is generally stratified in the summer, due to river discharge and solar heating, and is often well mixed in the winter due to winter cooling and increased mixing by wind. Circulation in the central and northern sections of the Main Basin consists largely of outflow through Admiralty Inlet in the upper layer and inflow of marine waters at depth (below approximately 50 m) (Figs. 11A, 11B) (Strickland 1983, Thomson 1994). Oceanic waters from the Strait of Juan de Fuca flow over the northern sill at Admiralty Inlet into the Main Basin at about two-week intervals (Cannon 1983). In the southern section, currents generally flow northward along the west side of Vashon Island through Colvos Passage and southward on the east side. The sill at Tacoma Narrows also causes an upwelling process that reduces the seawater/freshwater stratification in this basin. With the freshwater inflow come sediment deposits at an estimated rate of 0.18 to 1.2 grams/cm²/year (Staubitz et al. 1997).
Major circulation patterns in the Main Basin are greatly influenced by decadal climate regimes (Ebbesmeyer et al. 1998). During cool periods with strong oceanic upwellings and heavy precipitation, the strongest oceanic currents entering from the Strait of Juan de Fuca flow near mid-depth when the basin is cooler than 9.7°C. However, the strongest oceanic currents move toward the bottom of the basin during warmer, drier periods when waters are warmer than 9.7°C.
Water temperature, salinity, and concentration of dissolved oxygen in waters of the Main Basin are routinely measured by the WDOE at six sites (WDOE 1999). Subsurface temperatures are usually between 8° and 12°C; however, surface temperatures can reach 15 to 18°C in summer, and temperatures at depth can get as low as 7.5°C in winter. Salinities in the deeper portions of the Main Basin are generally about 30‰ in summer and fall, but decrease to about 29‰ during the rainier months. Surface waters are also usually about 29‰, but occasionally have salinities as low as 25-27‰ during the rainy season (WDOE 1999).
The mid-basin site had consistently higher temperatures and lower salinities than the site near the northern entrance to Admiralty Inlet (WDOE 1999). To demonstrate this trend, values from near mid-basin at West Point in Seattle, considered to be representative of this basin, were compared to values from the northern end of Admiralty Inlet. Values measured on the same dates (a summer month and a winter month) and depths at each site for two different years (1993 and 1996) were compared. For the summer month, the mean temperature at the mid-basin site was 12.25°C vs. 9.19°C for the entrance site. The mean salinities for this same month were 29.65‰ and 31.43‰, respectively. For the winter month, the mean temperature at the mid-basin site was 9.71°C vs. 8.11°C for the entrance site. The mean salinity values for this same month were 30.24‰ and 30.84‰, respectively.
Dissolved oxygen varies seasonally, with lowest levels of about 5.5 mg/L occurring at depth in summer months, and highest levels of about 7.5 mg/L near the surface. Occasionally summer-time highs reach 13-14 mg/L at the surface.
The Main Basin has a relatively small amount of intertidal vegetation, with 28.3 ± 10.4% of the intertidal area containing vegetation (Bailey et al. 1998). The predominant types are green algae (12.0 ± 4.4%) and eelgrass (11.4 ± 6.6%). Most eelgrass is located on the western shores of Whidbey Island and the eastern shores of the Kitsap Peninsula (Fig. 9) (PSWQA 1987). Although Figure 9 suggests a continuous distribution of eelgrass on the eastern shores of the Main Basin, a recent report by the Puget Sound Water Quality Action Team (PSWQAT 2000) indicates that only 8% of the shoreline has a continuous distribution of eelgrass beds and 40% of the shoreline has a patchy distribution.
Areas bordering the Main Basin include the major urban and industrial areas of Puget Sound:Seattle, Tacoma, and Bremerton.Human population sizes for these cities are about 522,500, 182,900, and 44,000, respectively (1996 census).Approximately 70% of the drainage area in this basin is forested, 23% is urbanized, and 4% is used for agriculture (Staubitz et al. 1997).About 80% of the total amount of waste discharged from point-sources into Puget Sound comes from urban and industrial sources in this region (PSWQA 1988).Moreover, about 16% of the waste entering Puget Sound, overall, enters this basin through its major river systems, in the form of inorganic nitrogen (Embrey and Inkpen 1998).The Washington State DNR (1998) estimates that 52% of the shoreline in this area has been modified by human activities.
The Whidbey Basin includes the marine waters east of Whidbey Island and is delimited to the south by a line between Possession Point on Whidbey Island and Meadowdale, west of Everett.The northern boundary is Deception Pass at the northern tip of Whidbey Island (Fig. 4).The Skagit River (the largest single source of freshwater in Puget Sound) enters the northeastern corner of the Basin, forming a delta and the shallow waters (< 20 m) of Skagit Bay.Saratoga Passage, just south of Skagit Bay, separates Whidbey Island from Camano Island.This passage is 100 to 200 m deep, with the deepest section (200 m) located near Camano Head (Burns 1985).Port Susan is located east of Camano Island and receives freshwater from the Stillaguamish River at the northern end and from the Snohomish River (the second largest of Puget Sound’s rivers) at southeastern corner.Port Susan also contains a deep area (120 m) near Camano Head.The deepest section of the basin is located near its southern boundary in Possession Sound (220 m).
The most common sediment type in the intertidal zone of the Whidbey Basin is sand, representing 61.4 ± 65.5% of the intertidal area. Mixed fine sediments are the next most common sediment type, covering 25.6 ± 18.9% of the intertidal area (Bailey et al. 1998). Similarly, subtidal areas near the mouths of the three major river systems are largely sand; however, the deeper areas of Port Susan, Port Gardner, and Saratoga Passage have surface sediments composed of mixtures of mud and sand (PSWQA 1987, WDOE 1998). Deception Pass sediments consist largely of gravel.
Although only a few water circulation studies have been performed in the Whidbey Basin, some general observations are possible.Current profiles in the northern portion of this basin are typical of a close-ended fjord.For example, currents during the summer tend to occur in the top 40 m, moving at low velocities in a northerly direction (Cannon 1983).Currents through Saratoga Passage tend to move at moderate rates in a southerly direction.Due to the influences of the Stillaguamish and Snohomish River systems, surface currents in Port Susan and Port Gardner tend to flow toward the Main Basin, although there is some evidence of a recirculating pattern in Port Susan (PSWQA 1987).
The waters in this basin are generally stratified, with surface waters being warmer in summer (generally 10-13°C) and cooler in winter (generally 7-10°C) (Collias et al. 1974, WDOE 1999). Salinities in the southern section of the Whidbey Basin in Possession Sound are similar to those of the Main Basin. In Port Susan and Saratoga Passage, salinities of surface waters (27.0-29.5‰) are generally lower than in the Main Basin, due to runoff from the two major rivers; moreover, after heavy rain these salinities range from 10-15‰. However, salinities in deeper areas often parallel those of the Main Basin (WDOE 1999).
Concentrations of dissolved oxygen in the waters of the Whidbey Basin are routinely measured by the WDOE in Saratoga Passage and in Port Gardner (WDOE 1999).Concentrations were highest in surface waters (up to 15 mg/L) and tended to be inversely proportional to salinity.Samples collected during spring run-off had the highest concentrations of dissolved oxygen.The lowest values (3.5 to 4.0 mg/L) were generally found at the greatest depths in fall.
Vegetation covers 23.6 ± 8.8% of the intertidal area of the Whidbey Basin (Bailey et al. 1998).The three predominant types of cover include green algae (6.8 ± 6.2%), eelgrass (6.5 ± 5.8%), and salt marsh (9.0 ± 9.4%).Eelgrass beds are most abundant in Skagit Bay and in the northern portion of Port Susan (Fig. 9) (PSWQA 1987).
Most of the Whidbey Basin is surrounded by rural areas with low human population densities.About 85% of the drainage area of this Basin is forested, 3% is urbanized, and 4% is in agricultural production.The primary urban and industrial center is Everett, with a population of 78,000.Most waste includes discharges from municipal and agricultural activities and from a paper mill.About 60% of the nutrients (as inorganic nitrogen) entering Puget Sound, enter through the Whidbey Basin by way of its three major river systems (Embrey and Inkpen 1998). The Washington State DNR (WDNR 1998) estimated that 36% of the shoreline in this area has been modified by human activities.
The Southern Basin includes all waterways south of Tacoma Narrows (Fig. 4).This basin is characterized by numerous islands and shallow (generally < 20 m) inlets with extensive shoreline areas.The mean depth of this basin is 37 m, and the deepest area (190 m) is located east of McNeil Island, just south of the sill (45 m) at Tacoma Narrows (Burns 1985).The largest river entering the basin is the Nisqually River which enters just south of Anderson Island.
A wide assortment of sediments is found in the intertidal areas of this basin (Bailey et al. 1998). The most common sediments and the percent of the intertidal area they cover are as follows: mud, 38.3 ± 29.3%; sand, 21.7 ± 23.9%; mixed fine, 22.9 ± 16.1%; and gravel, 11.1 ± 4.9%. Subtidal areas have a similar diversity of surface sediments, with shallower areas consisting of mixtures of mud and sand, and deeper areas consisting of mud (PSWQA 1987). Sediments in Tacoma Narrows and Dana Passage consist primarily of gravel and sand.
Currents in the Southern Basin are strongly influenced by tides, due largely to the shallowness of this area. Currents tend to be strongest in narrow channels (Burns 1985). In general, surface waters flow north and deeper waters flow south. Among the five most western inlets, Case, Budd, Eld, Totten, and Hammersley, the circulation patterns of Budd and Eld inlets are largely independent of those in Totten and Hammersley inlets, due largely to the shallowness of Squaxin Passage (Ebbesmeyer et al. 1998). These current patterns are characterized by flows of high salinity waters from Budd and Eld inlets into the south end of Case Inlet, and from Totten and Hammersley inlets into the north end of Case Inlet. Flows of freshwater into the north and south ends of Case Inlet originate from surface water runoff and the Nisqually River, respectively.
The major channels of the Southern Basin are moderately stratified compared to most other Puget Sound basins, because no major river systems flow into this basin. Salinities generally range from 27-29‰, and, although surface temperatures reach 14-15°C in summer, the temperatures of subsurface waters generally range from 10-13°C in summer and 8-10°C in winter (WDOE 1999). Dissolved oxygen levels generally range from 6.5 to 9.5 mg/L. Whereas salinities in the inlets tend to be similar to those of the major channels, temperatures and dissolved oxygen levels in the inlets are frequently much higher in summer. Two of the principal inlets, Carr and Case inlets, have surface salinities ranging from 28-30‰ in the inlet mouths and main bodies, but lower salinities ranging from 27-28‰ at the heads of the inlets (Collias et al. 1974). Summertime surface waters in Budd, Carr, and Case inlets commonly have temperatures that range from 15-19°C and dissolved oxygen values of 10-15 mg/L. Temperature of subsurface water tends to be elevated in the summer (14-15°C); however, temperatures are similar to those of the main channels in other seasons of the year (WDOE 1999).
Among the five basins of Puget Sound, the Southern Basin has the least amount of vegetation in its intertidal area (12.7 ± 15.5% coverage), with salt marsh (9.7 ± 14.7% coverage) and green algae (2.1 ± 1.9% coverage) being the most common types (Bailey et al. 1998).
About 85% of the area draining into this basin is forested, 4% is urbanized, and 7% is in agricultural production.The major urban areas around the South Sound Basin are found in the western portions of Pierce County.These communities include west Tacoma, University Place, Steilacoom, and Fircrest, with a combined population of about 100,000.Other urban centers in the South Sound Basin include Olympia with a population of 41,000 and Shelton with a population of 7,200 (PSRC 1998).Important point sources of wastes include sewage treatment facilities in these cities and a paper mill in Steilacoom.Furthermore, about 5% of the nutrients (as inorganic nitrogen) entering Puget Sound, enter into this basin through non-point sources (Embrey and Inkpen 1998).The Washington State DNR (WDNR 1998) estimated that 34% of the shoreline in this area has been modified by human activities.
Hood Canal branches off the northwest part of the Main Basin near Admiralty Inlet and is the smallest of the Puget Sound basins, being 90 km long and 1-2 km wide (Fig. 4).Like many of the other basins, it is partially isolated by a sill (50 m deep) near its entrance that limits the transport of deep marine waters in and out of Hood Canal (Burns 1985).The major components of this basin consist of its Entrance, Dabob Bay, the central region, and The Great Bend at the southern end.Dabob Bay and the central region are the deepest sub basins (200 and 180 m, respectively), whereas other areas are relatively shallow, < 40 m for The Great Bend and 50-100 m at the entrance (Collias et al. 1974).
Sediment in the intertidal zone consists mostly of mud (53.4 ± 89.3% of the intertidal area), with similar amounts of mixed fine sediment and sand (18.0 ± 18.5% and 16.7 ± 13.7%, respectively) (Bailey et al. 1998).Surface sediments in the subtidal areas also consist primarily of mud, with the exception of the entrance, which consists of mixed sand and mud, and The Great Bend and Lynch Cove, which have patchy distributions of sand, gravelly sand, and mud (PSWQA 1987, WDOE 1998).
Aside from tidal currents, currents in Hood Canal are slow, perhaps because the basin is a closed-ended fjord without large-volume rivers.The strongest currents tend to occur near the entrance and generally involve a northerly flow of surface waters.
Water temperature, salinity, and concentration of dissolved oxygen in Hood Canal are routinely measured by the WDOE at two sites, near The Great Bend and near the Entrance (WDOE 1999). Salinities generally range from 29-31‰ and tend to be similar at both sites. In contrast, temperature and dissolved oxygen values are often markedly different between the two sites. Values measured on the same dates (a summer month and a winter month) and at the same depths at each site for 1993 and 1996 demonstrate these differences. Mean temperature in the summer month at The Great Bend site was 9.9°C, but 12.1°C at the Entrance site. Mean dissolved oxygen values for this same month were 3.24 mg/L and 6.67 mg/L at the Great Bend and Entrance sites, respectively. For the winter month, the mean temperature at The Great Bend site was 10.6°C, but 9.1°C for the Entrance site. Mean dissolved oxygen values for this same month were 4.22 mg/L and 6.78 mg/L at the Great Bend and Entrance sites, respectively.
Vegetation covers 27.8 ± 22.3% of the intertidal areas of the Hood Canal Basin.Salt marsh (18.0 ± 8.8%) and eelgrass (5.4 ± 6.3%) are the two most abundant plants (Bailey et al. 1998).Eelgrass is found in most of Hood Canal, especially in the Great Bend and Dabob Bay (Fig. 9).
The Hood Canal Basin is one of the least developed areas in Puget Sound and lacks large centers of urban and industrial development.About 90% of the drainage area in this basin is forested (the highest percentage of forested areas of the five Puget Sound basins), 2% is urbanized, and 1% is in agricultural production (Staubitz et al. 1997).However, the shoreline is well developed with summer homes and year-around residences (PSWQA 1988).A small amount of waste is generated by forestry practices and agriculture.Nutrients (as inorganic nitrogen) from non-point sources in this basin represent only 3% of the total flowing into Puget Sound annually (Embrey and Inkpen 1998).The Washington State DNR (WDNR 1998) estimated that 33% of the shoreline in this area has been modified by human activities.
Algal productivity in the open waters of the central basin of Puget Sound is dominated by intense blooms of microalgae beginning in late April or May and recurring through the summer. Annual primary productivity in the central basin of the Sound is about 465 g C/m². This high productivity is due to intensive upward transport of nitrate by the estuarine mechanism and tidal mixing. Chlorophyll concentrations rarely exceed 15 µg/L. Frequently, there is more chlorophyll below the photic zone than within it. Winter et al. (1975) concluded that phytoplankton growth was limited by a combination of factors, including vertical advection and turbulence, light, sinking, and occasional rapid horizontal advection of the phytoplankton from the area by sustained winds. Summer winds from the northwest would be expected to transport phytoplankton to the south end of the Sound, which could exacerbate the anthropogenic effects that are already evident in some of these inlets and bays (Harrison et al. 1994).
The abundance and distribution of zooplankton in Puget Sound is not well understood.A few field surveys have been conducted in selected inlets and waterways, but reports on Sound-wide surveys are lacking.In general, the most numerically abundant zooplankton throughout the Puget Sound region are the calanoid copepods, especially Pseudocalanus spp. (Giles and Cordell 1998, Dumbauld 1985, Chester et al. 1980, Ohman 1990).Giles and Cordell (1998) reported that crustaceans (primarily calanoid copepods) were most abundant in Budd Inlet in South Puget Sound, although larvae of larvaceans, cnidarians, and polychaetes in varying numbers were also abundant during the year.Likewise, in a study conducted by Dumbauld (1985) at two locations in the Main Basin (a site near downtown Seattle and a cluster of sites in the East Passage near Seattle) covering a variety of depths from 12 to 220 m, he found that calanoid copepods and cyclopoid copepods, and two species of larvaceans were dominant numerically.Dominant copepods at deeper sites were Pseudocalanus spp. and Corycaeus anglicus.The larvacean, Oikopleura dioica, was also relatively common at the shallow sites.Similarly, the most abundant zooplankton in the Strait of Juan de Fuca were reported by Chester et al. (1980) to be calanoid copepods, including Pseudocalanus spp. and Acartia longiremis, and the cyclopoid copepod, Oithona similis.
It is likely that zooplankton assemblages vary both seasonally and annually.Evidence of depth-specific differences was reported by Ohman (1990).In studies conducted in Dabob Bay near Hood Canal, he compared the abundance of certain zooplankton species at a shallow and deep site.He found one species of copepod (Pseudocalanus newmani) was common at both sites, whereas species (e.g., Euchaeta elongata and Euphausia pacifica) that prey upon P. newmani were abundant at the deep site, but virtually absent from the shallow site.An example of seasonal variability was reported by Bollens et al. (1992b).In Dabob Bay, E. pacifica larvae were abundant in the spring and absent in the winter, and juveniles and adults were most abundant in the summer and early fall, with their numbers declining in the winter (Bollens et al. 1992b).
A few Sound-wide surveys of abundance and distribution of benthic invertebrates have been performed (Lie 1974, Llansó et al. 1998).A common finding among these surveys is that certain species prefer specific sediment types.For example, in areas with predominantly sandy sediments, among the most common species are Axinopsida serricata (a bivalve) and Prionospio jubata (a polychaete); in muddy, clayey areas of mean to average depth, Amphiodia urtica-periercta (a echinoderm) and Eudorella pacifica (a cumacean) are among the most common species; in areas with mixed mud and sand, Axinopsida serricata and Aphelochaeta sp. (a polychaete) are commonly found; and lastly, in deep muddy, clayey areas, predominant species tend to be Macoma carlottensis (a bivalve) and Pectinaria californiensis (a polychaete).In general, areas with sandy sediments tend to have the most species (Llansó et al. 1998), but the lowest biomass (Lie 1974).Areas with mixed sediments tend to have the highest biomass (Lie 1974).
As with zooplankton, assemblages of benthic invertebrates vary both seasonally and annually.Lie (1968) reported seasonal variations in the abundance of species, with the maxima taking place during July-August, and the minima occurring in January to February.However, there were no significant variations in the number of species during different seasons.Annual variation was examined by Nichols (1988) at three Puget Sound sites in the Main Basin: two deep sites (200-250 m) and one shallow site (35 m).For one of the deep sites, he reported that M. carlottensis generally dominated the benthic community from 1963 through the mid-1970s.Subsequently, these species were largely replaced by A. serricata, E. pacifica, P. californensis, Ampharete acutifrons (a polychaete), and Euphiomedes producta (an ostracod).A similar dominance by P. californensis and A. acutifrons was reported for the other deep site over approximately the same time period.
Several macroinvertebrate species are widely distributed in Puget Sound.Among the crustacean species, Dungeness crab (Cancer magister) and several species of shrimp [e.g., sidestripe (Pandalopsis dispar) and pink (Pandalus borealis)] are the most commonly harvested species (Bourne and Chew 1994).The non-indigenous Pacific oyster (Crassostrea gigas) accounts for approximately 90% of the landings of bivalves.Other abundant bivalves are the Pacific littleneck clam (Protothaca staminea), Pacific geoduck (Panopea abrupta), Pacific gaper (Tresus nuttalii), and the non-indigenous Japanese littleneck clam (Tapes philippinarum) and softshell clam (Mya arenaria) (Kozloff 1987, Turgeon et al. 1988).
The most common Pacific salmon species utilizing Puget Sound during some portion of their life cycle include chinook (Oncorhynchus tshawytscha), coho (O. kisutch), chum (O. keta), pink (O. gorbuscha), and sockeye salmon (O. nerka).Anadromous steelhead (O. mykiss) and cutthroat trout (O. clarki clarki) also utilize Puget Sound habitats.
Palsson et al. (1997) identified about 221 species of fish in Puget Sound.The marine species are generally categorized as bottomfish, forage fish, non-game fishes, and other groundfish species.In addition to Pacific hake, Pacific cod, and walleye pollock, other important commercial marine fish species in Puget Sound are Pacific herring, spiny dogfish (Squalus acanthias), lingcod (Ophiodon elongatus), various rockfish species (Sebastes spp.), and English sole (Pleuronectes vetulus).English sole are thought to be relatively healthy in the central portions of Puget Sound; however, significant declines have been recorded in localized embayments, such as Bellingham Bay and Discovery Bay.Other species of bottomfish species found throughout Puget Sound include skates (Raja rhina and R. binoculata), spotted ratfish (Hydrolagus cooliei), sablefish (Anoplopoma fimbria), greenlings (Hexagrammos decagrammus and H. stelleri), sculpins [e.g., cabezon (Scorpaenichthys marmoratus), Pacific staghorn sculpin (Leptocottus armatus), and roughback sculpin (Chitonotus pugetensis)], surfperches [e.g., pile perch (Rhacochilus vacca) and striped seaperch (Embiotoca lateralis)], wolf-eel (Anarrhichthys ocellatus), Pacific sanddab (Citharichthys sordidus), butter sole (Pleuronectes isolepis), rock sole (Pleuronectes bilineatus), Dover sole (Microstomus pacificus), starry flounder (Platichthys stellatus), sand sole (Psettichthys melanostictus), and over one dozen rockfish species [e.g., brown rockfish (Sebastes auriculatus), copper rockfish (S. caurinus), greenstriped rockfish (S. elongatus), yellowtail rockfish (S. flavidus), quillback rockfish (S. maliger), black rockfish, (S. melanops) and yelloweye rockfish (S. ruberrimus)] (DeLacy et al. 1972, Robins et al. 1991). Additional fish species that are less known, but widely distributed in Puget Sound, include surf smelt (Hypomesus pretiosus), plainfin midshipman (Porichthys notatus), eelpouts [e.g., blackbelly eelpout (Lycodopsis pacifica)], pricklebacks [e.g., snake prickleback, (Lumpenus sagitta)], gunnels [e.g., penpoint gunnel (Apodichthys flavidus)], Pacific sand lance (Ammodytes hexapterus), bay goby (Lepidogobius lepidus), and poachers [e.g., sturgeon poacher (Podothecus acipenserinus)] (DeLacy et al. 1972, Robins et al. 1991).
About 66,000 marine birds breed in or near Puget Sound. About 70% of them breed on Protection Island, located just outside of the northern entrance to the Sound. The most abundant species are rhinoceros auklet (Cerorhinca monocerata), glaucous-winged gull (Larus glaucescens), pigeon guillemot (Cepphus columba), cormorants (Phalacrocorax spp.), marbled murrelet (Brachyramphus marmoratus), and the Canada goose (Branta canadensis). Examples of less abundant species include common murre (Uria aalge) and tufted puffins (Fratercula cirrhata).
Populations of rhinoceros auklet and pigeon guillemot appear to be stable, whereas populations of glaucous-winged gull have increased slightly in recent years, especially in urban areas (Mahaffy et al. 1994). Accurate estimates of current populations of marbled murrelet and the Canada goose are not available, but the population of marbled murrelet has been greatly reduced and this species has been listed as threatened. Thirty years ago, year-round resident Canada geese were rare, but current anecdotal evidence from observations in waterfront parks suggests that their population is growing rapidly. The common murre and tufted puffin populations have declined drastically during the last two decades.
Nine primary marine mammal species occur in Puget Sound, including (listed in order of abundance): harbor seal (Phoca vitulina), California sea lion (Zalophus californianus), Steller sea lion (Eumetopias jubatus), Northern elephant seal (Mirounga angustirostris), harbor porpoise (Phocoena phocoena), Dall's porpoise (Phocoenoides dalli), killer whale (Orcinus orca), gray whale (Eschrichtius robustus), and minke whale (Balaenoptera acutorostrata). Harbor seals are year-round residents, and their abundance has been increasing in Puget Sound by 5 to 15% annually at most sites (Calambokidis and Baird 1994).
California sea lions, primarily males, reside in Puget Sound between late summer and late spring, and spend the remainder of the year at their breeding grounds in southern California and Baja California. Sea lion populations are growing at approximately 5% annually. Populations of the remaining species are quite low in Puget Sound. Steller sea lions and elephant seals are transitory residents; whereas the Steller sea lion is currently listed as threatened in the U.S., the elephant seal is abundant in the eastern North Pacific but has few haul-out areas in Puget Sound. Although harbor porpoises are also abundant in the eastern North Pacific and were common in Puget Sound 50 or more years ago, they are now rarely seen in the Sound (Calambokidis and Baird 1994). Low numbers of Dall's porpoise are observed in Puget Sound throughout the year, but little is known about their population size; they are also abundant in the North Pacific. A pod of resident fish-feeding killer whales, numbering about 100, resides just north of the entrance to Puget Sound, and the size of this group is increasing about 2.0% each year. Minke whales are also primarily observed in this same northern area, but their population size is unknown. Gray whales migrate past the Georgia Basin en route to or from their feeding or breeding grounds; a few of them enter Puget Sound during the spring through fall to feed.
Marine groundfish fishery statistics, including those from Puget Sound, are typically reported by geographically delimited fishery management regions.Major groundfish statistical areas as established by the Pacific Marine Fisheries Commission (PFMC) for the west coast of the lower 48 states and British Columbia are illustrated in Figure 12.Puget Sound constitutes Area 4A in the PFMC designation.Minor Statistical Areas (MSA) in the Strait of Georgia and the Strait of Juan de Fuca used by the Canadian Department of Fisheries and Oceans for fishery statistical purposes are illustrated in Figure 13.The Washington Department of Fish and Wildlife reports groundfish statistics in the U.S portion of the Strait of Georgia and in Puget Sound by Marine Fish Management Regions as illustrated in Figure 14.
The Georgia Basin is an international waterbody that encompasses the marine waters of Puget Sound, the Strait of Georgia, and the Strait of Juan de Fuca (Fig. 15). The coastal drainage of the Georgia Basin is bounded to the west and south by the Olympic and Vancouver Island mountains and to the north and east by the Cascade and Coast mountains. At sea level, the Basin has a mild maritime climate and is drier than other parts of the coast due to the rain shadow of the Olympic and Vancouver Island mountains. At sea level, air temperatures range from 0° to 5°C in January and 12° to 22°C in July, and winds are typically channeled by the local topography and blow along the longitudinal axes of the straits and sounds. Winds are predominantly from the southeast in winter and the northwest in summer.
The Strait of Georgia (Fig. 15) has a mean depth of 156 m (420 m maximum) and is bounded by narrow passages (Johnstone Strait and Cordero Channel to the north and Haro and Rosario straits to the south) and shallow submerged sills (minimum depth of 68 m to the north and 90 m to the south).The Strait of Georgia covers an area of approximately 6,800 km2 (Thomson 1994) and is approximately 220 km long and varies from 18.5 to 55 km in width (Tully and Dodimead 1957, Waldichuck 1957).Both southern and northern approaches to the Strait of Georgia are through a maze of islands and channels, the San Juan and Gulf islands to the south and a series of islands to the north that extend for 240 km to Queen Charlotte Strait (Tully and Dodimead 1957).Both northern channels (Johnstone Strait and Cordero Channel) are from 1.5 to 3 km wide and are effectively two-way tidal falls, in which currents of 12-15 knots occur at peak flood (Tully and Dodimead 1957).However, both lateral and vertical constriction of water flow at the narrowest points in these northern channels are even more severe.Constrictions occur at Arran Rapids, Yuculta Rapids, Okisollo Channel, and to a lesser degree at Seymour Narrows (0.74 km wide, minimum depth of 90 m) in Discovery Passage (Waldichuck 1957).Overall, these narrow northern channels have only about 7% of the cross-sectional area as do the combined southern entrances into the Strait of Georgia (Waldichuck 1957).
Freshwater inflows are dominated by the Fraser River, which accounts for roughly 80% of the freshwater entering the Strait of Georgia.Fraser River run-off and that of other large rivers on the mainland side of the Strait are driven by snow and glacier melt and their peak discharge period is generally in June and July.Rivers that drain into the Strait of Georgia off Vancouver Island (such as the Chemainus, Cowichan, Campbell, and Puntledge rivers) peak during periods of intense precipitation, generally in November (Waldichuck 1957).
Circulation in the Strait of Georgia occurs in a general counter-clockwise direction (Waldichuck 1957).Tides, winds, and freshwater run-off are the primary forces for mixing, water exchange, and circulation.Tidal flow enters the Strait of Georgia predominantly from the south creating vigorous mixing in the narrow, shallow straits and passes of the Strait of Georgia.The upper, brackish water layer in the Strait of Georgia is influenced by large freshwater run-off and salinity in this layer varies from 5 to 25‰.Deep, high-salinity (33.5 to 34‰), oceanic water enters the Strait of Georgia from the Strait of Juan de Fuca.The surface outflowing and deep inflowing water layers mix in the vicinity of the sills, creating the deep bottom layer in the Strait of Georgia, where salinity is maintained at about 31‰ (Waldichuck 1957).The basic circulation pattern in the summer is the southerly outflow of relatively warm, low salinity surface, with the northerly inflow of high salinity oceanic water from the Strait of Juan de Fuca at the lowest depths.In the winter, cool, low salinity near surface water mixes with the intermediate depth high salinity waters; however, oceanic inflow is generally confined to the intermediate depths.Crean et al. (1988) reported that “the freshwater discharge finds primary egress through the southern boundary openings into the Strait of Juan de Fuca” and that subsurface waters (5 to 20 m below the region of the Fraser River discharge) also have “a predominantly southerly flow.”Since surface water run-off peaks near the time of peak salinity of inflowing source water, the salinity of the deepwater in the Strait of Georgia undergoes only a small seasonal change in salinity (Waldichuck 1957).
Ekman (1953), Hedgpeth (1957), and Briggs (1974) summarized the distribution patterns of coastal marine fishes and invertebrates and defined major worldwide marine zoogeographic zones or provinces.Along the coastline of the boreal Eastern Pacific, which extends roughly from Point Conception, California to the Eastern Bering Sea, numerous schemes have been proposed for grouping the faunas into zones or provinces.A number of authors (Ekman 1953, Hedgpeth 1957, Briggs 1974, Allen and Smith 1988) have recognized a zoogeographic zone within the lower boreal Eastern Pacific that has been termed the Oregonian Province.Another zone in the upper boreal Eastern Pacific has been termed the Aleutian Province (Briggs 1974).However, exact boundaries of zoogeographic provinces in the Eastern boreal Pacific are in dispute (Allen and Smith 1988).Briggs (1974) and Allen and Smith (1988) reviewed previous literature from a variety of taxa and from fishes, respectively, and found the coastal region from Puget Sound to Sitka, Alaska to be a “gray zone” or transition zone that could be classified as part of either of two provinces:Aleutian or Oregonian (see Fig. 16).The southern boundary of the Oregonian Province is generally recognized as Point Conception, California and the northern boundary of the Aleutian Province is similarly recognized as Nunivak in the Bering Sea or the Aleutian Islands (Allen and Smith 1988).
Briggs (1974) placed the boundary between the Oregonian and Aleutian Provinces at Dixon Entrance, based on the well-studied distribution of mollusks, but indicated that distributions of fishes, echinoderms, and marine algae gave evidence for placement of this boundary in the vicinity of Sitka, Alaska.Briggs (1974) placed strong emphasis on the distribution of littoral mollusks (due to the more thorough treatment this group has received) in placing a major faunal break at Dixon Entrance.The authoritative work by Valentine (1966) on distribution of marine mollusks of the northeastern Pacific shelf showed that the Oregonian molluscan assemblage extended to Dixon Entrance with the Aleutian fauna extending northward from that area.Valentine (1966) erected the term Columbian Sub-Province to define the zone from Puget Sound to Dixon Entrance.
Several lines of evidence suggest that an important zoogeographic break for marine fishes occurs in the vicinity of Southeast Alaska.Peden and Wilson (1976) investigated the distributions of inshore fishes in British Columbia, and found Dixon Entrance to be of minor importance as a barrier to fish distribution.A more likely boundary between these fish faunas was variously suggested to occur near Sitka, Alaska, off northern Vancouver Island, or off Cape Flattery, Washington (Peden and Wilson 1976, Allen and Smith 1988).Chen (1971, as cited in Briggs 1974) stated that of the more than 50 or more rockfish species belonging to the genus Sebastes occurring in northern California, more than two-thirds do not extend north of British Columbia or Southeast Alaska.Briggs (1974) further stated that “about 50 percent of the entire shore fish fauna of western Canada does not extend north of the Alaskan Panhandle.”In addition, many marine fish species common to the Bering Sea, extend southward into the Gulf of Alaska but apparently occur no further south (Briggs 1974).Allen and Smith (1988, p. 144) stated that “the relative abundance of some geographically-displacing [marine fish] species suggest that the boundary between these provinces [Aleutian and Oregonian] occurs off northern Vancouver Island.”
Each cerebral hemisphere is divided into
four lobes: the frontal,
parietal, temporal, and the occipital.
The Frontal Lobe is
the most anterior lobe of the brain. Its posterior boundary
is the fissure of Rolando, or central sulcus,
which separates it from the parietal lobe. Inferiorly, it is
divided from the temporal lobe by the fissure of Sylvius which is also called the lateral fissure.
This lobe deals with
higher level cognitive functions like reasoning and
judgment. Sometimes called executive function, it is associated with the pre-frontal cortex. Most importantly the
frontal lobe contains several cortical areas involved in the
control of voluntary muscle movement, including those
necessary for the production of speech and swallowing.
Broca's Area is found on the inferior third frontal gyrus in
the hemisphere that is dominant for language. This area is
involved in the coordination or programming of motor
movements for the production of speech sounds. While it is
essential for the execution of the motor movements involved
in speech it does not directly cause movement to occur. The
firing of neurons here does not generate impulses for motor
movement; instead it generates a motor
programming pattern. This motor plan is sent to upper motor neurons in the precentral gyrus (motor strip), which in turn send the signals to the lower motor neurons (cranial and spinal nerves), which take the signals to the muscle end plates.
Broca's area is also involved in syntax
which involves the ordering of words, and in morphology, the allomorphs at the ends of words (e.g., hat + s = hats).
Injuries to Broca's area may cause
apraxia or Broca's aphasia.
The precentral gyrus,
which may also be called the primary motor area or, most
commonly, the motor strip is immediately anterior to the
central sulcus. It controls the voluntary movements of
skeletal muscles; cell bodies of the pyramidal tract
are found on this gyrus.
The amount of tissue on the precentral
gyrus that is dedicated to the innervation of a particular
part of the body is proportional to the amount of motor
control needed by that area, not just its size. For example,
much more of the motor strip is dedicated to the control of
speech (tongue, lips, jaw, velum, pharynx, and larynx) than to the trunk.
The premotor area or
supplemental motor area is immediately anterior
to the motor strip. It is responsible for the programming
for motor movements. It does not, however program the motor
commands for speech as these are generated in Broca's area
which is also located in the frontal lobe.
The most anterior part of the frontal
lobe is involved in complex cognitive processes like
reasoning and judgment. Collectively, these processes may be
called biological intelligence. A component of
biological intelligence is executive function. According to Denckla, 1996, executive function
regulates and directs cognitive processes, decision making,
problem solving, learning, reasoning and strategic thinking. Some characteristics
of right hemisphere syndrome may be considered problems of the
executive function. They include left side neglect where
there is a lack of awareness of the left side of the body.
The Parietal Lobe is
immediately posterior to the central sulcus. It is anterior
to the occipital lobe, from which it is not separated by any
natural boundary. Its inferior boundary is the posterior
portion of the lateral fissure, which divides it from the temporal lobe.
The parietal lobe is
associated with sensation, including the sense of touch,
kinesthesia, perception of warmth and cold, and of
vibration. It is also involved in writing and in some
aspects of reading.
The postcentral gyrus
which is also called the primary sensory area or the sensory strip is immediately posterior to
the central sulcus. This area receives sensory feedback from
joints and tendons in the body and is organized in the same
manner as the motor strip.
Like the motor strip, the sensory strip
continues down into the longitudinal cerebral fissure and so
has both a lateral and a medial aspect.
The secondary sensory areas are located behind the
postcentral gyrus. These areas are capable of more detailed
discrimination and analysis than is the primary sensory
area. They might, for example, be involved in sensing
how hot or cold something is rather than simply
identifying it as hot or cold. Information is first
processed in the primary sensory area and is then sent to
the secondary sensory areas.
The angular gyrus lies
near the superior edge of the temporal lobe, immediately
posterior to the supramarginal gyrus. It is involved in the
recognition of visual symbols. Geschwind
described this area as "the most important cortical areas for
speech and language" and the "association cortex of
association cortices." He also claims that the angular gyrus
is not found in non-human species.
Fibers of many different types travel
through the angular gyrus, including axons associated with
hearing, vision, and meaning. The arcuate fasciculus, the
group of fibers connecting Broca's area in the frontal lobe to Wernicke's area
in the temporal lobe also connects to the angular gyrus.
The following disorders may result from
damage to the angular gyrus in the hemisphere that is
dominant for speech and language: anomia, alexia with
agraphia, left-right disorientation, finger agnosia, and acalculia.
Anomia, according to Webb, Adler, and Love (2008), is the loss of the power to name objects and people. It is difficulty with word-finding or naming.
Someone suffering just from anomia can list the functions of an
object and explain its meaning, but cannot recall its name.
Alexia with agraphia refers to
difficulties with reading and writing.
Left-right disorientation is an
inability to distinguish right from left.
Finger agnosia is the lack of
sensory perceptual ability to identify which finger is which.
Acalculia refers to difficulties with arithmetic.
The Temporal Lobe is
inferior to the lateral fissure and anterior to the
occipital lobe. It is separated from the occipital lobe by
an imaginary line rather than by any natural boundary.
The temporal lobe is
associated with auditory processing and olfaction. It is
also involved in semantics, or word meaning. Wernicke's
area is located there on the first temporal gyrus.
Wernicke's area is located on the
posterior portion of the superior temporal gyrus in the
hemisphere that is dominant for language. This area plays a
critical role in the ability to understand and produce
meaningful speech. A lesion here will cause Wernicke's aphasia.
Heschl's Gyrus is the area in the temporal lobe where sound first reaches the brain. It is also known
as the anterior transverse temporal gyrus, and is the primary auditory area.
There are two secondary auditory
or auditory association
areas which make important
contributions to the comprehension of speech. They are part of Wernicke's area.
The Occipital Lobe,
which is the most posterior lobe, has no natural boundaries on its lateral aspect.
It is involved in vision.
The primary visual area receives input from the optic tract via the lateral geniculate nucleus of the thalamus and the optic radiations.
The secondary visual areas integrate visual information, giving meaning to
what is seen by relating the current stimulus to past
experiences and knowledge. A lot of memory is stored here.
These areas are superior to the primary visual cortex.
Damage to the primary visual area causes
blind spots in the visual field, or total blindness,
depending on the extent of the injury. Damage to the
secondary visual areas could cause visual agnosia.
People with this condition can see visual stimuli, but
cannot associate them with any meaning or identify their
function. This represents a problem with meaning, as
compared to anomia, which involves a problem with naming, or word-finding.
The Insula is a cortical area
which lies below the fissure of Sylvius and is considered by
some anatomists to be the fifth lobe of the cerebrum. It can
only be seen by splitting the lateral fissure. Little is
known about the connections of this area, but it may be
linked to the viscera. Dronkers (1996) feels that it may be
involved in motor programming for speech sounds. Since her study had an N of only six, further research is important.
It is important to remember that while
some functions can be localized to very specific parts of
the brain, others cannot be classified in this way because
many areas are involved in their performance. Word-finding,
for example, is associated with several different areas.
Also, we cannot say that all higher level cognitive
functioning is associated with the frontal lobe; the
processing of word meaning carried out by Wernicke's
area certainly involves a sophisticated type of cognition. Also,
right hemisphere lesions often result in cognitive deficits.
The cortex is about four
millimeters thick and is composed of six layers. Listed from
most superior to most inferior, these layers are: the molecular layer, the external granular layer,
the medial pyramidal layer, the internal granular
layer, the ganglionic layer,
and the fusiform or multiform layer.
The molecular layer is
the most superior layer of the cortex. It contains the cell
bodies of neuroglial cells.
The external granular layer is very dense and contains small granular cells
and small pyramidal cells.
The medial pyramidal
layer contains pyramidal
cells arranged in row formation. The cell bodies of some
association fibers are found here.
The internal granular layer is thin, but its cell structure is the same as
that of the external granular layer.
The ganglionic layer
contains small granular cells, large pyramidal cells as well
as the cell bodies of some association fibers. The
association fibers that originate here form two large
tracts: The Bands of
Baillarger and Kaes Bechterew.
The fusiform layer is
also known as the multiform
layer; its axons enter white
matter. Its function is unknown.
All layers are present in all parts of
the cortex. However, they do not have the same relative
density in all areas. Depending upon the function of a
particular area, some of these layers will be thicker than
others in that location.
The cortex wraps around the brain,
covering its inferior surface and lining the gap between the
right and left cerebral hemispheres, which is called the
The part of the cortex covering the sides
of the hemispheres is called the lateral cortex while the
part covering the sides of the hemispheres that lie within
the longitudinal cerebral fissure is called the medial cortex.
Studies done by Brodmann in the early
part of the twentieth century generated a map of the cortex
covering the lobes of each hemisphere. These studies
involved electrical probing of the cortices of epileptic
patients during surgery. Brodmann numbered the areas that he
studied in each lobe and recorded the psychological and
behavioral events that accompanied their stimulation.
The Frontal Lobe
contains areas that Brodmann identified as involved in
cognitive functioning and in speech and language.
Area 4 corresponds to the
precentral gyrus or primary motor area.
Area 6 is the premotor or supplemental motor area.
Area 8 is anterior to the premotor cortex. It
facilitates eye movements and is involved in visual reflexes
as well as pupil dilation and constriction.
Areas 9, 10, and 11 are
anterior to area 8. They are involved in cognitive processes
like reasoning and judgment which may be collectively
called biological intelligence, including executive function.
Areas 44 and 45 are Broca's area.
Areas in the Parietal Lobe play
a role in somatosensory processes.
Areas 3, 2, and 1 are located on the primary sensory strip, with
area 3 being above the other two. These are
somesthetic areas, meaning that they are the primary
sensory areas for touch and proprioception including kinesthesia.
Areas 7 and 40 are found
posterior to the primary sensory strip and are considered
presensory association areas where somatosensory processing occurs.
Area 39 is the angular gyrus.
Areas involved in the processing of
auditory information and semantics as well as the
appreciation of smell are found in the Temporal Lobe.
Area 41 is Heschl's gyrus, the
primary auditory area.
Area 42 is immediately inferior to area 41 and is also
involved in the detection and recognition of speech. The
processing done in this area of the cortex provides a more
detailed analysis than that done in area 41.
Areas 21 and 22 are the
auditory association areas. Both areas are divided into two
parts; one half of each area lies on either side of area 42. Collectively they can be called Wernicke's area.
Area 37 is found on the posterior-inferior part of the
temporal lobe. Lesions here can cause anomia.
The Occipital Lobe
contains areas that process visual stimuli.
Area 17 is the primary visual area.
Areas 18 and 19 are the
secondary visual (association) areas where visual processing occurs.
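Purely as a study aid, the Brodmann areas named in this section can be collected into a minimal lookup table. The sketch below simply restates the assignments given above; it is not an exhaustive or authoritative map of the cortex.

```python
# Minimal lookup table summarizing the Brodmann areas described above.
# The descriptions restate the text of this section only.
BRODMANN_AREAS = {
    4: "primary motor area (precentral gyrus)",
    6: "premotor / supplemental motor area",
    8: "eye movements, visual reflexes, pupil dilation and constriction",
    9: "cognition: reasoning and judgment",
    10: "cognition: reasoning and judgment",
    11: "cognition: reasoning and judgment",
    44: "Broca's area", 45: "Broca's area",
    3: "primary sensory strip (somesthetic)",
    1: "primary sensory strip (somesthetic)",
    2: "primary sensory strip (somesthetic)",
    7: "somatosensory association area", 40: "somatosensory association area",
    39: "angular gyrus",
    41: "Heschl's gyrus (primary auditory area)",
    42: "secondary auditory analysis of speech",
    21: "auditory association (Wernicke's area)",
    22: "auditory association (Wernicke's area)",
    37: "posterior-inferior temporal lobe (lesions may cause anomia)",
    17: "primary visual area",
    18: "secondary visual area", 19: "secondary visual area",
}

def describe(area):
    """Return the summary for a Brodmann area, if it appears in this section."""
    return BRODMANN_AREAS.get(area, "not covered in this summary")

print(describe(44))  # Broca's area
```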
A pedagogical device called the
literally means "little man," is often used to explain the
organization of the motor strip and to demonstrate that
specific areas of this gyrus are responsible for sending
commands to specific parts of the body. The body is
represented on the motor strip in an upside-down fashion.
The lower parts of the body, like the feet and the legs,
receive motor movement commands from the superior part of
the precentral gyrus (motor strip). Parts of the face, on the other hand
are innervated by the inferior part of the motor strip.
The motor strip extends down
some distance into the longitudinal cerebral fissure. The
portion inside this fissure is its medial aspect. The
part on the lateral surface of the hemisphere is called its
lateral aspect. The medial cortex
controls the movements of the body from the hips on down
while the lateral aspect sends commands to the upper body
including the larynx, face, hands, shoulders, and trunk.
The medial and lateral aspects of the
motor strip have different blood supplies. Blood comes to
the medial area from the anterior cerebral artery, while the lateral cortex is supplied by the middle cerebral artery.
In the early 1900s, the electric power generation industry was experiencing rapid growth and change. The steam engines used for power in the previous century had been displaced by turbines which generated electricity as they were rotated by pressurized steam generated in boilers. Turbines and boilers were operating at higher temperatures and pressures (and also in increasingly complex cycles, which required more sophisticated thermal design and analysis) in order to attain greater thermodynamic efficiency. They were also becoming larger as the demand for electricity skyrocketed. A major source of growing pains for the industry was the lack of accurate and standardized values for the properties of water and steam. For the design of power plants and the boilers and turbines within them, it is necessary to have accurate values of thermodynamic quantities such as the vapor pressure (pressure at which water boils at a given temperature) and the enthalpy of vaporization or latent heat (amount of heat required to generate steam from liquid water). More important, the evaluation of the performance of purchased equipment depends on the calculation of these properties. The efficiency of a turbine is measured as the fraction of the energy available in the steam that is converted to electricity, but that available energy is calculated to be a different number depending on the values used for the thermodynamic properties. A turbine might appear to be 28 % efficient with one set of properties and only 27 % efficient with another set; because of the large flows involved, these small differences could mean large sums of money. It therefore became imperative to settle on internationally standardized values for the properties of water and steam, so that all parties in the industry could have a “level playing field” on which to compare bids and equipment performance.
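The sensitivity of the quoted efficiency to the property tables can be shown with a toy calculation. The numbers below are invented purely for illustration (they are not steam-table values); the point is only that the same measured output, divided by two slightly different estimates of the available energy, reproduces the 28% versus 27% discrepancy described above.

```python
# Toy illustration of the "level playing field" problem: the same measured
# output looks more or less efficient depending on which property tables are
# used to compute the available energy. All numbers are invented.

def apparent_efficiency(work_out, available_energy):
    """Efficiency as the fraction of available energy converted to electricity."""
    return work_out / available_energy

work_out = 280.0  # kJ per kg of steam actually converted to electricity (invented)

for label, available in [("property set A", 1000.0), ("property set B", 1037.0)]:
    print(f"{label}: apparent efficiency = {apparent_efficiency(work_out, available):.1%}")
# property set A: 28.0%, property set B: 27.0% -- the same machine, two answers.
```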
Steam is the technical term for the invisible water vapor, the gaseous phase of water formed when water is boiled. In common language it is used to refer to the visible mist of water droplets formed when water vapor condenses in the presence of cooler air. At lower pressures, such as in the upper atmosphere or on a high mountain, water boils at a lower temperature than the nominal 100 °C (212 °F) at standard atmospheric pressure.
Superheated steam is steam at a temperature above its boiling point for the given pressure. The enthalpy of vaporization is the energy required to turn water into water vapor, a process which increases the volume by about 1,600 times at atmospheric pressure. Steam engines, which convert this difference in volume into mechanical work, were important to the Industrial Revolution; more recently, steam turbines have become popular for the generation of electricity.
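The 1,600-fold expansion can be checked to order of magnitude by treating the vapor as an ideal gas at 100 °C and atmospheric pressure; the liquid density and constants below are rounded values assumed for this sketch.

```python
# Order-of-magnitude check of the ~1,600x expansion figure, treating the vapor
# as an ideal gas at 100 degC and 1 atm (a reasonable approximation here).
R = 8.314          # J/(mol K), universal gas constant
M = 0.018015       # kg/mol, molar mass of water
T = 373.15         # K, boiling point at 1 atm
P = 101_325.0      # Pa, atmospheric pressure

v_liquid = 1.0 / 958.0          # m^3/kg, liquid water near 100 degC (~958 kg/m^3)
v_vapor = R * T / (M * P)       # m^3/kg, ideal-gas specific volume of the vapor

print(f"expansion ratio ~ {v_vapor / v_liquid:.0f}")   # roughly 1,600
```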
2.1 Types of steam
A gas can only contain a certain amount of steam (the quantity varies with temperature and pressure). When a gas has absorbed its maximum amount it is said to be in vapor-liquid equilibrium, and if more water is added the mixture is described as 'wet steam'.
Superheated steam is steam at a temperature higher than its boiling point for its pressure, which only occurs once all the water has evaporated or, in the case of steam generators (boilers), once the saturated steam has been conveyed out of the steam drum.
Steam tables contain thermodynamic data for water/steam and are often used by engineers and scientists in design and operation of equipment where thermodynamic cycles involving steam are used.
2.2 Water and Steam
Consider the heating of water at constant pressure. If various properties are to be measured, an experiment can be set up where water is heated in a vertical cylinder closed by a piston on which there is a weight. The weight acting down under gravity
on a piston of fixed size ensures that the fluid in the cylinder is always subject to the
same pressure. Initially the cylinder contains only water at ambient temperature. As
this is heated the water changes into steam and certain characteristics may be observed.
Initially the water at ambient temperature is subcooled. As heat is added its
temperature rises steadily until it reaches the saturation temperature corresponding
with the pressure in the cylinder. The volume of the water hardly changes during
this process. At this point the water is saturated. As more heat is added, steam is
generated and the volume increases dramatically since the steam occupies a
greater space than the water from which it was generated. The temperature however remains the same until all the water has been converted into steam. At this point the steam is saturated. As additional heat is added, the temperature of the steam increases but at a faster rate than when the water only was being heated. The volume of the steam also increases. Steam at temperatures above the saturation temperature is superheated.
If the temperature T is plotted against the heat added q the three regions namely subcooled water, saturated mixture and superheated steam are clearly indicated. The slope of the graph in both the subcooled region and the superheated region depends on the specific heat of the water and steam respectively.
cp = q / ΔT
The slope however is temperature rise ΔT over heat added q. This is the inverse of specific heat cp.
Slope = 1 / cp
Since heat added at constant pressure is equal to the enthalpy change this plot is really a temperature-enthalpy diagram. As has already been demonstrated, a temperature-entropy diagram is useful in showing thermodynamic cycles. The temperature-enthalpy diagram may be converted into a temperature-entropy diagram by using the two relations:
cp = q / ΔT
Δs = q / T
Combining these gives:
cp ΔT = T Δs
ΔT / Δs = T / cp
The ratio of change in temperature over change in entropy ΔT/Δs is the slope of the graph on a temperature-entropy diagram. If cp is constant in one or other region of the plot the slope is proportional to temperature T and will increase as the temperature rises. The area under the curve represents the heat added q up to any point or between any points.
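To make the construction of the temperature-entropy curve concrete, the short Python sketch below generates the three segments of the curve for heating at constant atmospheric pressure using the relations just given (s = cp ln(T/T0) in the single-phase regions, and Δs = q/T with q equal to the latent heat across evaporation). The property values used (the liquid and steam specific heats, the saturation temperature and the latent heat) are round-number assumptions for illustration, not steam-table data.

import math

# Assumed round-number properties at about 1 atm (illustrative only)
CP_WATER = 4.19      # kJ/(kg K), liquid specific heat
CP_STEAM = 2.0       # kJ/(kg K), superheated steam specific heat (approx.)
T_SAT = 373.15       # K, saturation temperature
H_FG = 2257.0        # kJ/kg, latent heat of vaporisation

def entropy_curve(t_start=293.15, t_end=473.15, steps=10):
    """Return (T, s) points for constant-pressure heating, taking s = 0 at t_start."""
    points = []
    # Subcooled liquid: ds = cp dT / T  =>  s = cp ln(T/T0)
    for i in range(steps + 1):
        T = t_start + (T_SAT - t_start) * i / steps
        points.append((T, CP_WATER * math.log(T / t_start)))
    s_f = points[-1][1]
    # Evaporation at constant temperature: delta s = h_fg / T_sat
    s_g = s_f + H_FG / T_SAT
    points.append((T_SAT, s_g))
    # Superheated steam: s = s_g + cp_steam ln(T/T_sat)
    for i in range(1, steps + 1):
        T = T_SAT + (t_end - T_SAT) * i / steps
        points.append((T, s_g + CP_STEAM * math.log(T / T_SAT)))
    return points

for T, s in entropy_curve()[::6]:
    print(f"T = {T:7.2f} K   s = {s:6.3f} kJ/(kg K)")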
This plot shows just one line of a temperature-entropy chart. If the experiment is repeated under different conditions, families of lines can be developed to obtain a complete chart.
2.3 Temperature-Entropy Chart
Consider the heating of water at different pressures, each time maintaining the selected pressure constant. A series of similar lines will be obtained, with those at higher pressures lying above those at lower pressures. As pressure increases, however, the amount of latent heat required to completely evaporate the water decreases. This is because, at higher pressure, the increase in volume from
liquid to vapour is not as great, so less energy is required to expand the fluid to its new condition. Eventually, at very high pressures, the density of the steam becomes equal to that of the water and no latent heat is required to expand the fluid. If the points at which the water and steam respectively become saturated are joined up, a saturated water line and a saturated steam line are formed. These join at the critical point, where steam and water densities are equal, to form the characteristic bell shaped curve.
The sub cooled water region is to the left and the superheated steam region to the right of the bell curve. The saturated water-steam mixture region lies under or within the bell.
Within the saturated water-steam mixture region there are intermediate conditions. When only part of the total latent heat to evaporate the water has been added a unique point X on the particular constant pressure line is reached. At this point the mass fraction of vapour is x and the mass fraction of liquid is (1 - x). Each fraction has associated with it either the enthalpy of the water at saturation conditions hf or the enthalpy of the steam at saturation conditions hg. The total enthalpy of the mixture is therefore:
h = x hg + (1 - x) hf
h = hf + x (hg - hf)
h = hf + x hfg
The value hfg is equivalent to the latent heat required to convert the water into steam. Similar formulae may be derived for internal energy u and entropy s:
u = uf + x ufg
s = sf + x sfg
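These quality relations translate directly into a small helper function, sketched below in Python. The saturation values hf, hfg, uf, ufg, sf and sfg used here are illustrative numbers of roughly the right magnitude for a pressure near 1 bar; in practice they would be read from steam tables.

# Illustrative saturation properties near 1 bar (assumed, not steam-table values)
H_F, H_FG = 417.5, 2257.5   # kJ/kg
U_F, U_FG = 417.4, 2088.2   # kJ/kg
S_F, S_FG = 1.303, 6.056    # kJ/(kg K)

def mixture_properties(x):
    """Properties of a wet-steam mixture of dryness fraction x (0 = saturated water, 1 = saturated steam)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("quality x must lie between 0 and 1")
    h = H_F + x * H_FG          # enthalpy, kJ/kg
    u = U_F + x * U_FG          # internal energy, kJ/kg
    s = S_F + x * S_FG          # entropy, kJ/(kg K)
    return h, u, s

h, u, s = mixture_properties(0.9)
print(f"x = 0.9: h = {h:.1f} kJ/kg, u = {u:.1f} kJ/kg, s = {s:.3f} kJ/(kg K)")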
If all these unique points X for a given mass fraction of vapour under different pressures are joined, a line of constant mass fraction or steam quality is obtained. For other unique mass fractions, other lines of steam quality can be drawn to create a whole family of lines. Note that these lines all meet at the critical point.
Another important family of lines is that showing constant enthalpy conditions. The change in enthalpy h is equal to the heat added q under constant pressure conditions. If given amounts of heat are added from an arbitrary zero condition for different pressure conditions, this heat q will be represented by the area under the respective constant pressure lines. These areas must all be equal for a given amount of heat added and thus a given change in enthalpy. Joining up the points on each constant pressure line at which the given amount of heat has been added will produce a line of constant enthalpy. Adding different amounts of heat will produce a family of constant enthalpy lines. Note that these have a steep slope in the saturated region but a lesser slope in the superheated region.
2.4 Fluid Properties
The following properties for liquids and gases may be determined by experiment and are plotted on thermodynamic diagrams:
Pressure p
Temperature T
Specific volume v
Internal energy u
Enthalpy h
Entropy s
Pressure and temperature can be measured directly. Specific volume can be obtained by measurement of the physical size of the container. Enthalpy can be obtained by measurement of the amount of heat added at constant pressure. Internal energy can be calculated from the formula for the definition of enthalpy:
h = u + p v
Entropy can be calculated from its formula in terms of temperature:
s = cp ln (T / To)
Temperature To is an arbitrary base temperature (273 K for water) and specific heat cp may be obtained from the formula:
cp = q / ΔT
cp = Δh / ΔT
In the saturated water-steam mixture region the change in entropy is obtained as follows:
Δs = q / T
Δs = hfg / Tsat (where Tsat is the saturation temperature)
All relevant parameters may thus be obtained and plotted as families of curves on a temperature-entropy diagram. It is not always sufficiently accurate to read values from such a diagram. To overcome this problem the calculated values which would be plotted are instead presented in tabular form in a set of thermodynamic tables. These have high accuracy but, since only discrete values in a continuum are presented, interpolation is often necessary to obtain the desired values.
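The interpolation step is mechanical and easily automated. The sketch below shows simple linear interpolation between two neighbouring table entries; the example entries are assumed values used only to illustrate the method.

def interpolate(x, x0, y0, x1, y1):
    """Linear interpolation of y at x between table entries (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Example: saturation temperature at 1.2 bar from assumed neighbouring entries
# (1.0 bar, 99.6 degC) and (1.5 bar, 111.4 degC) -- illustrative numbers only.
t_sat = interpolate(1.2, 1.0, 99.6, 1.5, 111.4)
print(f"Interpolated saturation temperature at 1.2 bar is about {t_sat:.1f} degC")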
2.5 Thermodynamic Equations
Certain steam and water properties can be determined by experiment and others subsequently by calculation from basic formulae already given. Steam does however follow to some degree the gas laws, that is, as pressure increases specific volume decreases and as temperature increases specific volume increases. Experimental determination of the properties allows the deviation from the gas laws to be ascertained. Thus using a combination of the gas laws, the equations already derived and experimental results it is possible to develop suitable semi-empirical equations which will allow the properties of water and steam to be computed. Such equations are used for developing steam tables where each tabulated value is calculated. These equations are usually polynomials with several constants. The more complex the polynomial the more accurate the results and often as many as six constants are used in the equation. Equations of this type are also used in computer routines to find required properties.
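One of the simplest examples of such a semi-empirical form is an Antoine-type vapour-pressure correlation, sketched below in Python. The coefficients shown are approximate values often quoted for water between roughly 1 °C and 100 °C and are given only to illustrate the structure of these equations; a formulation intended for design work has many more terms and far tighter accuracy.

import math

# Approximate Antoine coefficients for water, T in degC, P in mmHg (illustrative)
A, B, C = 8.07131, 1730.63, 233.426

def vapour_pressure_bar(t_celsius):
    """Antoine-type correlation: log10(P_mmHg) = A - B / (C + T)."""
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg / 750.062   # convert mmHg to bar

for t in (25, 50, 100):
    print(f"{t:3d} degC  ->  {vapour_pressure_bar(t):.3f} bar")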
THERMODYNAMIC PROPERTIES OF WATER AND STEAM FOR POWER GENERATION
This section describes the accurate measurements carried out at NBS that were essential in reaching agreement on the needed standards. In the United States, this problem was first addressed in 1921 by a group of scientists and engineers brought together by the American Society of Mechanical Engineers (ASME). The 1921 meeting led to the formation of the ASME Research Committee on Thermal Properties of Steam. This committee, recognizing the need for reliable data, collected subscriptions from industry and disbursed the money to support experimental measurement of key properties of water and steam at Harvard, MIT, and the Bureau of Standards. Because the ASME committee was not as successful in their fundraising as they had hoped, all three institutions ended up subsidizing some of the research themselves in recognition of its importance. The need for standard, reliable data was also recognized in other countries (notably England, Germany, and Czechoslovakia), and research efforts were coordinated internationally.
In the late 1920s and early 1930s, three international conferences were held with the purpose of agreeing on standardized values for the properties of water and steam. This culminated in 1934 with the adoption of a standard set of tables, covering the range of temperatures and pressures of interest to the power industry at that time. These tables gave the vapor pressure as a function of temperature, values of the volume and enthalpy for the equilibrium vapor and liquid phases along the vapor-pressure curve, and volumes and enthalpies at points on a coarse grid of temperatures and pressures. Each value had an uncertainty estimate assigned to it. The data those tables were based on also became the basis for a book of “Steam Tables” produced by J. H. Keenan and F. G. Keyes; the Keenan and Keyes tables were the de facto standard for the design and evaluation of steam power generation equipment worldwide for the next 30 years. The most important data behind these new steam tables came from the laboratory of Nathan S. Osborne at the National Bureau of Standards. Through most of the 1920s and 1930s, Osborne and his coworkers painstakingly built equipment and conducted measurements. The ASME had originally hoped for data within three years of the project’s 1921 start, but fortunately they were patient (and grateful to the NBS for subsidizing the work) and continued to support the project through years of pioneering, but often frustrating, apparatus development. Finally, beginning in the late 1920s, their patience was rewarded as data of unparalleled quality began coming from Osborne’s laboratory. The primary experimental technique was calorimetry, in which a measured amount of heat is added to a fluid under controlled conditions. Osborne and coworkers had previously performed calorimetric measurements on ammonia. For measurements on water, several new calorimeters were developed. One of these, constructed from copper and used for the region below 100 °C, has been preserved in the NIST museum; it is shown in Fig. 1.
The region of most industrial importance, however, was at much higher temperatures (and correspondingly higher pressures), well beyond what had been encountered in the ammonia work. Experiments at these conditions were also more difficult because water is very corrosive at high temperatures. We briefly describe the calorimeter that was built to overcome these difficulties and that was used to take the data reported in the Osborne, Stimson, and Ginning paper.
Fig. 1. Calorimeter used by Osborne et al. to study water properties at temperatures below 100 °C.
The heart of the calorimeter was a heavy-walled 325 cm3 vessel of chromium-nickel steel. The contents were not stirred to achieve thermal equilibrium; instead, heat was diffused by 30 internal silver fins. The vessel contained a heater and carried a miniature platinum resistance thermometer. The calorimeter was shielded from the environment by two concentric silver shields that were maintained at the calorimeter temperature at all times. The calorimeter had two valves. The valve at the top allowed a measurable amount of vapor to be extracted, and the bottom valve allowed extraction of a known amount of liquid water. The water simultaneously served as a pressure transfer medium to allow measurement of the saturation pressure. During extraction of either vapor or liquid, the remaining liquid would partially evaporate, and heat was supplied to the calorimeter in order to keep the temperature constant.
BOILERS & THERMIC FLUID HEATERS
This section briefly describes the Boiler and various auxiliaries in the Boiler Room. A boiler is an enclosed vessel that provides a means for combustion heat to be transferred to water until it becomes heated water or steam. The hot water or steam under pressure is then usable for transferring the heat to a process. Water is a useful and inexpensive medium for transferring heat to a process. When water at atmospheric pressure is boiled into steam its volume increases about 1,600 times, producing a force that is almost as explosive as gunpowder. This makes the boiler a piece of equipment that must be treated with utmost care. The boiler system comprises a feed water system, a steam system and a fuel system. The feed water system provides water to the boiler and regulates it automatically to meet the steam demand. Various valves provide access for maintenance and repair. The steam system collects and controls the steam produced in the boiler. Steam is directed through a piping system to the point of use. Throughout the system, steam pressure is regulated using valves and checked with steam pressure gauges. The fuel system includes all equipment used to provide fuel to generate the necessary heat. The equipment required in the fuel system depends on the type of fuel used in the system. The water supplied to the boiler that is converted into steam is called feed water. The two sources of feed water are:
· Condensate or condensed steam returned from the processes
· Makeup water (treated raw water) which must come from outside the boiler room and plant processes. For higher boiler efficiencies, an economizer preheats the feed water using the waste heat in the flue gas.
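A figure that ties the feed water, steam and fuel systems together is the boiler efficiency. A common first estimate is the so-called direct method, the ratio of the heat absorbed by the water and steam to the heat supplied by the fuel, sketched below in Python. The steam rate, enthalpy values and fuel calorific value in the example are invented assumptions for illustration only.

def boiler_efficiency_direct(steam_rate_kg_h, h_steam, h_feed, fuel_rate_kg_h, gcv_fuel):
    """Direct-method efficiency = heat gained by the steam / heat input from the fuel."""
    heat_output = steam_rate_kg_h * (h_steam - h_feed)   # kJ/h
    heat_input = fuel_rate_kg_h * gcv_fuel               # kJ/h
    return heat_output / heat_input

# Illustrative figures only: 10 t/h of steam at h = 2,776 kJ/kg raised from feed
# water at h = 335 kJ/kg, burning 1.45 t/h of coal with GCV 21,000 kJ/kg (all assumed).
eta = boiler_efficiency_direct(10_000, 2776, 335, 1450, 21_000)
print(f"Boiler efficiency (direct method) is roughly {eta:.1%}")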
TYPE OF BOILERS
This section describes the various types of boilers: Fire tube boiler, Water tube boiler, Packaged boiler, Fluidized bed combustion boiler, Stoker fired boiler, Pulverized fuel boiler, Waste heat boiler and Thermic fluid heater.
5.1 Fire Tube Boiler
In a fire tube boiler, hot gases pass through the tubes and boiler feed water in the shell side is converted into steam. Fire tube boilers are generally used for relatively small steam capacities and low to medium steam pressures. As a guideline, fire tube boilers are competitive for steam rates up to 12,000 kg/hour and pressures up to 18 kg/cm2. Fire tube boilers are available for operation with oil, gas or solid fuels. For economic reasons, most fire tube boilers are of “packaged” construction (i.e. manufacturer erected) for all fuels.
Figure 2. Sectional view of a Fire Tube Boiler
(Light Rail Transit Association)
5.2 Water Tube Boiler
In a water tube boiler, boiler feed water flows through the tubes and enters the boiler drum. The circulated water is heated by the combustion gases and converted into steam at the vapour space in the drum. These boilers are selected when the steam
demand as well as steam pressure requirements are high, as in the case of process-cum-power boilers and power boilers. Most modern water tube boiler designs are within the capacity range 4,500 – 120,000 kg/hour of steam, at very high pressures. Many water tube boilers are of “packaged” construction if oil and/or gas are to be used as fuel. Solid fuel fired water tube designs are available but packaged designs are less common. The features of water tube boilers are:
· Forced, induced and balanced draft provisions help to improve combustion efficiency.
· Less tolerance for water quality calls for a water treatment plant.
· Higher thermal efficiency levels are possible.
Figure 3. Simple Diagram of Water Tube Boiler
5.3 Packaged Boiler
The packaged boiler is so called because it comes as a complete package. Once delivered to a site, it requires only the steam, water pipe work, fuel supply and electrical connections to be made to become operational. Package boilers are generally of a shell type with a fire tube design so as to achieve high heat transfer rates by both radiation and convection
Figure 4. A typical 3 Pass, Oil fired packaged boiler
The features of packaged boilers are:
· Small combustion space and high heat release rate resulting in faster evaporation.
· Large number of small diameter tubes leading to good convective heat transfer.
· Forced or induced draft systems resulting in good combustion efficiency.
· Number of passes resulting in better overall heat transfer.
· Higher thermal efficiency levels compared with other boilers.
These boilers are classified based on the number of passes - the number of times the hot
combustion gases pass through the boiler. The combustion chamber is taken as the first pass, after which there may be one, two or three sets of fire-tubes. The most common boiler of this class is a three-pass unit with two sets of fire-tubes and with the exhaust gases exiting through the rear of the boiler.
5.4 Fluidized Bed Combustion (FBC) Boiler
Fluidized bed combustion (FBC) has emerged as a viable alternative to conventional firing systems and offers multiple benefits – compact boiler design, fuel flexibility, higher combustion efficiency and reduced emission of noxious pollutants such as SOx and NOx. The fuels burnt in these boilers include coal, washery rejects, rice husk, bagasse and other agricultural wastes. Fluidized bed boilers have a wide capacity range – 0.5 T/hr to over 100 T/hr.
When an evenly distributed air or gas is passed upward through a finely divided bed of solid particles such as sand supported on a fine mesh, the particles are undisturbed at low velocity. As air velocity is gradually increased, a stage is reached when the individual particles are suspended in the air stream – the bed is called “fluidized”. With further increase in air velocity, there is bubble formation, vigorous turbulence, rapid mixing and formation of a dense, defined bed surface. The bed of solid particles exhibits the properties of a boiling liquid and assumes the appearance of a fluid – “bubbling fluidized bed”. If sand particles in a fluidized state are heated to the ignition temperature of coal, and coal is injected continuously into the bed, the coal will burn rapidly and the bed attains a uniform temperature. The fluidized bed combustion (FBC) takes place at about 840°C to 950°C. Since this temperature is much below the ash fusion temperature, melting of ash and associated problems are avoided.
The lower combustion temperature is achieved because of high coefficient of heat transfer due to rapid mixing in the fluidized bed and effective extraction of heat from the bed through
in-bed heat transfer tubes and walls of the bed. The gas velocity is maintained between minimum fluidization velocity and particle entrainment velocity. This ensures stable operation of the bed and avoids particle entrainment in the gas stream.
5.4.1 Atmospheric Fluidized Bed Combustion (AFBC) Boiler
Most operational boilers of this type are of the Atmospheric Fluidized Bed Combustion
(AFBC) type. This involves little more than adding a fluidized bed combustor to a conventional shell boiler. Such systems have similarly been installed in conjunction with conventional water tube boilers.
Coal is crushed to a size of 1 – 10 mm, depending on the rank of coal and the type of fuel feed, and fed to the combustion chamber. The atmospheric air, which acts as both the fluidization and combustion air, is delivered at a pressure after being preheated by the exhaust flue gases. The in-bed tubes carrying water generally act as the evaporator. The gaseous products of combustion pass over the super heater sections of the boiler, flowing past the economizer, the dust collectors and the air pre-heater before being exhausted to atmosphere.
5.4.2 Pressurized Fluidized Bed Combustion (PFBC) Boiler
In Pressurized Fluidized Bed Combustion (PFBC) type, a compressor supplies the Forced Draft (FD) air and the combustor is a pressure vessel. The heat release rate in the bed is proportional to the bed pressure and hence a deep bed is used to extract large amounts of heat. This will improve the combustion efficiency and sulphur dioxide absorption in the bed.
The steam is generated in the two tube bundles, one in the bed and one above it. Hot flue gases drive a power generating gas turbine. The PFBC system can be used for cogeneration (steam and electricity) or combined cycle power generation. The combined cycle operation (gas turbine & steam turbine) improves the overall conversion efficiency by 5 to 8 percent.
5.4.3 Atmospheric Circulating Fluidized Bed Combustion Boilers (CFBC)
In a circulating system the bed parameters are maintained to promote solids elutriation from the bed. They are lifted in a relatively dilute phase in a solids riser, and a down-comer with a cyclone provides a return path for the solids. There are no steam generation tubes immersed in the bed. Generation and superheating of steam take place in the convection section, in the water walls and at the exit of the riser.
CFBC boilers are generally more economical than AFBC boilers for industrial applications requiring more than 75 – 100 T/hr of steam. For large units, the taller furnace characteristics of CFBC boilers offer better space utilization, greater fuel particle and sorbent residence time for efficient combustion and SO2 capture, and easier application of staged combustion techniques for NOx control than AFBC steam generators.
Figure 5. CFBC Boiler
5.5 Stoker Fired Boilers
Stokers are classified according to the method of feeding fuel to the furnace and by the type of grate. The main classifications are spreader stoker and chain-gate or traveling-gate stoker.
5.5.1 Spreader stokers
Spreader stokers utilize a combination of suspension burning and grate burning. The coal is continually fed into the furnace above a burning bed of coal. The coal fines are burned in suspension; the larger particles fall to the grate, where they are burned in a thin, fast burning coal bed. This method of firing provides good flexibility to meet load fluctuations, since ignition is almost instantaneous when the firing rate is increased. Due to this, the spreader stoker is favored over other types of stokers in many industrial applications.
5.5.2 Chain-grate or traveling-grate stoker
Coal is fed onto one end of a moving steel grate. As the grate moves along the length of the furnace, the coal burns before dropping off at the end as ash. Some degree of skill is required, particularly when setting up the grate, air dampers and baffles, to ensure clean combustion leaving the minimum of unburnt carbon in the ash.
The coal-feed hopper runs along the entire coal-feed end of the furnace. A coal gate is used to control the rate at which coal is fed into the furnace by controlling the thickness of the fuel bed. Coal must be uniform in size as large lumps will not burn out completely by the time they reach the end of the grate
Figure 7. View of Traveling Grate Boiler
5.6 Pulverized Fuel Boiler
Most coal-fired power station boilers use pulverized coal, and many of the larger industrial water-tube boilers also use this pulverized fuel. This technology is well developed, and there are thousands of units around the world, accounting for well over 90 percent of coal-fired capacity.
The coal is ground (pulverized) to a fine powder, so that less than 2 percent is +300 micrometer (μm) and 70-75 percent is below 75 microns, for a bituminous coal. It should be noted that too fine a powder is wasteful of grinding mill power. On the other hand, too coarse a powder does not burn completely in the combustion chamber and results in higher unburnt losses.
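The fineness targets quoted above can be checked directly against a sieve analysis. The sketch below assumes a report giving the percentage retained above 300 μm and the percentage passing 75 μm; the sample numbers are invented for illustration.

def check_fineness(pct_above_300um, pct_below_75um):
    """Check a bituminous-coal grind against the guideline of under 2 % above 300 um
    and 70-75 % (or more) below 75 um."""
    if pct_above_300um >= 2.0:
        return "Too coarse: expect higher unburnt-carbon losses."
    if pct_below_75um < 70.0:
        return "Insufficient fines: combustion may be incomplete."
    return "Grind within the usual guideline."

# Invented sieve results for illustration
print(check_fineness(pct_above_300um=1.4, pct_below_75um=72.0))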
The pulverized coal is blown with part of the combustion air into the boiler plant through a series of burner nozzles. Secondary and tertiary air may also be added. Combustion takes
place at temperatures from 1300-1700 °C, depending largely on coal grade. Particle residence time in the boiler is typically 2 to 5 seconds, and the particles must be small enough for complete combustion to have taken place during this time.
This system has many advantages, such as the ability to fire varying qualities of coal, quick response to changes in load, and the use of high pre-heat air temperatures. One of the most popular systems for firing pulverized coal is tangential firing, using four
burners mounted at the corners of the furnace to create a fireball at the center of the furnace.
Figure 8: Tangential firing for pulverized fuel
5.7 Waste Heat Boiler
Wherever waste heat is available at medium or high temperatures, a waste heat boiler can be installed economically. Wherever the steam demand is more than the steam generated from the waste heat, auxiliary fuel burners are also used. If there is no direct use for the steam, it may be let down in a steam turbine-generator set and power produced from it. Waste heat boilers are widely used for heat recovery from the exhaust gases of gas turbines and diesel engines.
Figure 9: A simple schematic of Waste Heat Boiler
5.8 Thermic Fluid Heater
In recent times, thermic fluid heaters have found wide application for indirect process heating. Employing petroleum-based fluids as the heat transfer medium, these heaters provide constantly maintainable temperatures for the user equipment. The combustion system comprises a fixed grate with mechanical draft arrangements.
The modern oil fired thermic fluid heater consists of a double coil, three pass construction and is fitted with a modulated pressure jet system. The thermic fluid, which acts as a heat carrier, is heated up in the heater and circulated through the user equipment. There it transfers heat for the process through a heat exchanger and the fluid is then returned to the heater. The flow of thermic fluid at the user end is controlled by a pneumatically operated control valve,
based on the operating temperature. The heater operates on low or high fire depending on the return oil temperature, which varies with the system load.
Figure 10. A typical configuration of Thermic Fluid Heater
The advantages of these heaters are:
· Closed cycle operation with minimum losses as compared to steam boilers.
· Non-pressurized system operation even for temperatures around 250°C, as against the 40 kg/cm2 steam pressure requirement in a similar steam system.
· Automatic control settings, which offer operational flexibility.
· Good thermal efficiencies as losses due to blow down, condensate drain and flash steam do not exist in a thermic fluid heater system.
The overall economics of the thermic fluid heater will depend upon the specific application and reference basis. Coal fired thermic fluid heaters with a thermal efficiency range of 55-65 percent may compare favorably with most boilers. Incorporation of heat recovery devices in
the flue gas path enhances the thermal efficiency levels further.
DESIGN AND FABRICATION OF STEAM WATER HEATER
(a) Two mild steel pipes.
(b) Copper tube
(c) Mild steel base plates
(d) Heating coil
Height of outer cylinder = 30 cm
Height of inner cylinder = 30 cm
Total length of the copper tube taken = 1.5 m
Diameter of the copper tube = 0.8 cm
Power of the heating coil = 2000 watts
Distance between two cylinders = 5 mm
Diameter of the inner cylinder = 11.5 cm
Diameter of the outer cylinder = 16.5 cm
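From the dimensions listed above, a rough estimate can be made of how long the 2000 watt coil needs to bring the water in the inner cylinder to the boil. The sketch below assumes the inner cylinder is completely filled with water, ignores heat losses and the thermal mass of the steel, and uses round values for the density and specific heat of water, so it is only an order-of-magnitude check.

import math

# Dimensions from the build (converted to SI)
DIAMETER_INNER = 0.115   # m
HEIGHT_INNER = 0.30      # m
POWER = 2000.0           # W

# Assumed water properties and temperatures (illustrative)
RHO = 1000.0             # kg/m^3
CP = 4186.0              # J/(kg K)
T_START, T_BOIL = 25.0, 100.0   # degC

volume = math.pi * (DIAMETER_INNER / 2) ** 2 * HEIGHT_INNER   # m^3
mass = RHO * volume                                           # kg
energy = mass * CP * (T_BOIL - T_START)                       # J
minutes = energy / POWER / 60                                 # ignoring losses

print(f"Water mass is about {mass:.2f} kg, time to reach boiling is about {minutes:.0f} min")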
1. The base plate is drilled on two places for fixing the heating element and also for the copper tube outlet.
2. The inner cylinder is welded on to the base plate with the heating element inside.
3. Another plate is fixed on the top of the inner cylinder, with two holes drilled.
(a) One for refilling the water inside the inner cylinder
(b) One for the steam to pass through the copper tube
4. The copper tube is fixed to one of the holes and wound around the cylinder to the exact winding needed for the steam to convert into water at the outlet
5. The outer cylinder is welded to the plate.
6. Cold water is poured in between the two cylinders for heating.
Pressure and temperature can be measured directly or can be found from the steam table. Specific volume can be obtained by measurement of the physical size of the container, and enthalpy by measurement of the amount of heat added at constant pressure. Internal energy can be calculated from the formula for the definition of enthalpy.
Here the pressure is 1.5 bar
and the temperature corresponding to it is 127.617 °C.
Amount of heat added: 130 °C.
Internal energy can be calculated from the formula for the definition of enthalpy:
Internal energy, h = 125.248
Entropy can be calculated from its formula in terms of temperature.
Temperature T0 is an arbitrary base temperature (273 K for water) and specific heat Cp may be obtained from the formula
Specific heat of water = 4.2539
Entropy, S = 4.2539 × ln (393/273)
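The hand calculation above can be reproduced programmatically. The sketch below simply re-evaluates the entropy expression with the same figures used in the text (cp = 4.2539, base temperature 273 K, final temperature 393 K), so it carries the same assumptions as the hand calculation.

import math

CP = 4.2539        # specific heat used in the text, kJ/(kg K)
T0 = 273.0         # arbitrary base temperature, K
T = 393.0          # final temperature used in the text, K (about 120 degC)

s = CP * math.log(T / T0)
print(f"Entropy s = {s:.4f} kJ/(kg K)")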
COMPARISON BETWEEN STEAM HEATING AND DIRECT HEATING
· Pressure and temperature in steam heating is higher than direct heating.
· Heating efficiency is high in steam heating.
· Steam heating includes convection.
· Saves time and is a faster process.
· Saves energy and gives high efficiency.
Prior to executing the test protocol on the steam water heater systems, individual components were tested, resulting in: (1) Identification of experimental biases from use of the steam water heater in discrete single stage mode, (2) Selection of dip copper tube of varying lengths and an optimum exterior piping configuration and, (3) The effect of the quality of hot water flows induced by the circulating pump.
Using the experimental results as a guideline, this water system has a good economic payoff and we can save a great deal of money by building our own system. Some of the steam water heating systems are simple, low cost and are manageable systems to install or build. | http://www.morldtechgossips.com/2012/05/steam-water-heater.html | 13 |
50 | Fundamentals of Ionospheric Propagation.
The ionosphere is that region of the earth's atmosphere in which free ions and electrons exist in sufficient abundance to affect the properties of electromagnetic waves that are propagated within and through it. Ions are produced in the atmosphere partly by cosmic rays but mostly by solar radiation.
The latter include ultraviolet light, x rays, and particle radiation (during storm periods). These radiations are selectively absorbed by the several gaseous constituents of the atmosphere, ion-electron pairs being produced in the process. For practical purposes, the ionosphere can usually be assumed to extend from about 50 to roughly 2000 km above the earth's surface. The structure of the ionosphere is highly variable, and this variability is imparted onto signals propagated via the ionosphere. The ionosphere is divided into three vertical regions - - D, E, and F -- which increase in altitude and in electron density.
The D region has an altitude range from 50 to 90 km. The electron density in the region has large diurnal variations highly dependent upon solar zenith angle. The electron density is maximum near local noon, is higher in summer than in winter, and is lowest at night.
The E region spans the altitude range from about 90 to 130 km. The maximum density occurs near 100 km, although this height varies with local time. The diurnal and seasonal variations of electron density are similar to those of the D region. Collisions between electrons and neutral particles, while important in the E region, are not as numerous as in the D region. The E region acts principally as a reflector of hf waves, particularly during daylight hours.
Embedded within the E region is the so-called sporadic-E layer. This layer is an anomalous ionization layer that assumes different forms -- irregular and patchy, smooth and disklike -- and has little direct relation to solar radiation. The causes of sporadic-E ionization are not fully known. The properties of the sporadic-E layer vary substantially with location and are markedly different at equatorial, temperate, and high latitudes. "Short-skip" openings, sometimes on an otherwise dead band, are often a result of one-hop sporadic-E ionization. When sporadic-E ionization is sufficiently widespread, multi-hop propagation is possible.
The highest ionospheric region is termed the F region. The lower part of the F region, from 130 to 200 km, is termed the F1 region, and the part above 200 km is termed the F2 region.
The F2 region is the highest ionospheric region, usually has the highest electron density, and is the region of greatest value in long- distance hf ionospheric propagation. The region exhibits large variability in both time and space in response to neutral winds and electrodynamic drifts in the presence of the earth's magnetic field. The maximum electron density generally occurs well after noon, sometimes in the evening hours. The height of the maximum ranges from 250 to 350 km at midlatitudes to 350 to 500 km at equatorial latitudes. At midlatitudes, the height of the maximum electron density is higher at night than in the daytime. At equatorial latitudes, the opposite behavior occurs.
The F1 region, like the E region, is under strong solar control. It reaches a maximum ionization level about one hour after local noon. At night and during the winter the F1 and F2 regions merge and are termed simply "F region".
Electromagnetic waves are refracted when passing through an ionized medium, the refraction increasing with increased electron density and decreasing with increase of frequency. If the refraction is large enough, a wave reaching the ionosphere is bent back toward earth as though it had been reflected, thereby permitting reception of the wave at a large distance from the transmitter. The F2 layer is the most important in this regard because of its height and its high electron density. The maximum earth distance traversed in one F2-layer "hop" is about 4000 km. Round-the-world communication can occur via multiple hops.
If the frequency is too high, the wave is not refracted sufficiently to return to earth. The maximum frequency for which a wave will propagate between two points is called the maximum usable frequency (MUF). Frequencies higher than the existing MUF at any given time are not supported, no matter how much power is used. However, because of the large variability that exists in the electron density of the F2 region, predicted MUFs are not absolute limits, but are statistical in nature. The actual MUF at any given time may be higher or lower than the predicted MUF. Predicted MUFs are intended to be median values; i.e., the actual MUF will exceed the predicted MUF 50 percent of the time, and will be less than the predicted MUF 50 percent of the time. The predicted frequency that will be supported only 10 percent of the time is a frequency higher than the predicted MUF called the highest probable frequency (HPF), but even higher frequencies are possible 10 percent of the time. The predicted frequency that will be supported 90 percent of the time is a frequency lower than the predicted MUF called the optimum traffic frequency (FOT). Curves of predicted HPF, MUF, and FOT for several paths appear each month in QST.
Signals on their way to or from the F2 layer must pass through the E region of the ionosphere. The E layer is also capable of "reflecting" hf signals, and if the E-layer MUF is too high, the signals to or from the F2 layer are blocked -- or cut off -- by the E layer. Signals at frequencies below the E-layer cutoff frequency (ECOF) will not pass through the E layer. Signals can propagate between two points on earth via the E layer in the same manner as they do via the F2 layer, but the maximum earth distance traversed in one E-layer hop is only about 2000 km, so a significantly greater number of hops is usually required on DX paths.
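A quick way to see why the reflecting layer matters on long paths is to estimate the number of hops each layer requires. The sketch below uses the approximate maximum hop lengths quoted above (about 4000 km via the F2 layer and 2000 km via the E layer); it ignores take-off angle and layer-height variation, so it is only a rough guide.

import math

MAX_HOP_KM = {"F2": 4000, "E": 2000}   # approximate one-hop ranges from the text

def hops_required(path_km, layer):
    """Minimum number of ionospheric hops to span a great-circle path."""
    return math.ceil(path_km / MAX_HOP_KM[layer])

for path in (3000, 7000, 12000):
    print(f"{path:6d} km:  F2 hops = {hops_required(path, 'F2')},  E hops = {hops_required(path, 'E')}")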
The D region of the ionosphere must be traversed by signals on their way to and from the F2 or E layers. Electron densities in the D region are not large enough to cause hf signals to be returned to earth, but the high collision frequency between the electrons and neutral particles in the D region gives rise to absorption of signals passing through it. The reduction of signal strength can be substantial, particularly in daytime on the lower hf frequencies. Antenna installations that provide low radiation (take-off) angles can minimize the number of hops required between two stations, thereby reducing the number of passes through the D region and the amount of signal absorption.
Electron density in the ionosphere increases with increased solar activity. Therefore, MUFs and signal absorption both increase as solar activity increases. The Zurich smoothed mean sunspot number has been used extensively as an index of solar activity and the one with which propagation data has been correlated over the years. Therefore, most propagation prediction models require that the user specify the sunspot number to be used in making a prediction. The 2800-MHz (10.7-cm) solar noise flux is generally considered a more accurate measure of solar activity, but with a smaller base of propagation observations. Since the two indices are highly correlated, either index may be used.
Ionospheric propagation is susceptible to several kinds of short- term disturbance that are usually associated with solar flares. Depending upon the nature of the disturbance, they are called sudden ionospheric disturbances, polar cap absorption events, or ionospheric storms. These disturbances upset the electron configuration in the ionosphere, and consequently affect propagation. Propagation is also affected by changes in the earth's magnetic field. The magnetic field is constantly fluctuating, but the fluctuation occurs over much wider limits during magnetic storms that accompany ionospheric storms. Ionospheric and magnetic storms are also often accompanied by visible aurora.
Except for the tendency of these disturbances to recur in synchronism with the 27-day rotation period of the sun, they are difficult for the amateur (as well as the professional) to predict and to quantify. The severity of magnetic disturbances is indicated by A and K indices that are broadcast by WWV at 18 minutes past each hour. The A index is a daily measure of geomagnetic field activity on a scale of 0 to 400. The K index is a measure, for a 3-hour period, of variation or disturbance in the geomagnetic field on a scale of 0 to 9. In general, MUFs decrease and signal absorption increases as geomagnetic field activity increases, although MUFs sometimes increase in equatorial regions.
--CONDITIONS FOR THE PAST 24 HOURS
--FORECAST FOR THE NEXT 24 HOURS
The sun's electromagnetic spectrum is a continuum of radiation spanning not only infrared, visible, and ultraviolet wavelengths, but the radio portions, x-rays and beyond. Sensors on the Earth and in space continuously observe specific portions of the sun's energy spectrum to monitor their levels and give scientists indications of when significant events occur.
Solar emissions in this category are all electromagnetic in nature, that is, they move at the speed of light. Events detected on the sun in these wavelengths begin to affect the Earth's environment around 8 minutes after they occur.
In addition to electromagnetic radiation, the sun constantly ejects matter in the form of atomic and subatomic particles. Consisting typically of electrons, protons, and helium nuclei, this tenuous gas is accelerated to speeds in excess of the sun's gravitational escape velocity and thus moves outward into the solar system. The collective term for the gas and the particles making them up is the Solar Wind. The sun's approximately 27-day rotation period results in the clouds being slung outward in an expanding spiral pattern which, at the earth-sun distance, overtakes the earth from behind as it moves along in its orbit. As the clouds encounter the earth, the geomagnetic field and the earth's atmosphere prevent the solar wind particles from striking the planet directly. Magnetic interactions between the clouds and the geomagnetic field cause the solar wind particles to flow around the field, forming a shell-like hollow with the earth at the center. The hollow, known as the earth's Magnetosphere, is actually distorted into a comet shape with the head of the comet always pointing directly into the solar wind and the tail directly away. In the absence of significant solar activity, the solar wind is uniform with a velocity of approximately 400 km/second. Under these conditions, the earth's magnetosphere maintains a fairly steady shape and orientation in space. When disturbances occur on the sun, some clouds of solar particles can be blasted away at tremendous velocities. As these higher speed solar particle clouds encounter the earth's magnetosphere, they perturb it, changing the intensity and direction of the earth's magnetic field. This is analogous to a weather vane in gusty wind; sudden higher speed gusts can strike it and cause it to move around. Moreover, sudden changes in solar wind density and velocity can cause abrupt changes in the geomagnetic field measured at the Earth's surface; these are referred to as a "sudden impulse" (SI). Geomagnetic activity, including solar particle-caused variations in the geomagnetic field, is carefully monitored by instruments both on the Earth and in space. High levels of geomagnetic activity act indirectly to degrade the ability of the ionosphere to propagate HF radio signals. So they are of interest to users of that portion of the radio frequency spectrum. Like the electromagnetic radiation portions of the sun's output, geomagnetic activity comprises another family of interactions observed and reported by groups such as IPS and SESC.
The Geophysical Alert Broadcasts consist of three primary sections to describe the Solar-terrestrial environment: The most current information, then a summary of activity for the past 24 hours, and finally a forecast for the next 24 hours. The actual wording of each section of the broadcast is explained below with a brief description of what is being reported. Similar wording is also used in other broadcasts, so the WWV example is relevant to other reports too.
"Solar-terrestrial indices for (UTC Date) follow: Solar flux (number) and (estimated) Boulder A index (number) . Repeat, solar flux (number) and (estimated) Boulder A index (number) . The Boulder K index at (UTC time) on (UTC Date) was (number) repeat (number) ."
Since the final A index is not available until 0000 UTC, the word "estimated" is used for the 1800 and 2100 UTC announcements.
Solar Flux is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a radio telescope located in Ottawa, Canada. Known also as the 10.7 cm flux (the wavelength of the radio signals at 2800 MHz), this solar radio emission has been shown to be proportional to sunspot activity. In addition, the level of the sun's ultraviolet and X-ray emissions is primarily responsible for causing ionization in the earth's upper atmosphere. It is these emissions which produce the ionized "layers" involved in propagating shortwave radio signals over long distances.
The solar flux number reported in the broadcast is in solar flux units (s. f. u.) and is recorded daily at Ottawa at 1700 UTC to be forwarded to the SESC. Solar flux readings range from a theoretical minimum of approximately 67 to actually-observed numbers greater than 300. Low solar flux numbers dominate during the lower portions of the 11-year sunspot cycle, rising as the cycle proceeds with the average solar flux a fairly reliable indicator of the cycle's long-term behavior. 1 s. f. u. = 10^-22 watts per square meter per hertz = 10^4 jansky.
The A index is an averaged quantitative measure of geomagnetic activity derived from a series of physical measurements. Magnetometers measure differences between the current orientation of the magnetosphere and compare it to what it would be under "quiet" geomagnetic conditions. But there is more to understanding the meaning of the Boulder A index reported in the Geophysical Alert Broadcasts. The Boulder A index in the announcement is the 24 hour A index derived from the eight 3-hour K indices recorded at Boulder. The first estimate of the Boulder A index is at 1800 UTC. This estimate is made using the six observed Boulder K indices available at that time (0000 to 1800 UTC) and the SESC forecaster's best prediction for the remaining two K indices. To make those predictions, SESC forecasters examine present trends and other geomagnetic indicators. At 2100 UTC, the next observed Boulder K index is measured and the estimated A index is reevaluated and updated if necessary. At 0000 UTC, the eighth and last Boulder K index is measured and the actual Boulder A index is produced. For the 0000 UTC announcement and all subsequent announcements the word "estimated" is dropped and the actual Boulder A index is used. The underlying concept of the A index is to provide a longer-term picture of geomagnetic activity using measurements averaged either over some time frame or from a range of stations over the globe (or both). Numbers presented as A indices are the result of a several-step process: first, a magnetometer reading is taken to produce a K index for that station (see K INDEX below); the K index is adjusted for the station's geographical location to produce an a index (no typographical error here, it is a small case "a") for that 3-hour period; and finally a collection of a indices is averaged to produce an overall A index for the timeframe or region of interest.
A and a indices range in value from 0 to 400 and are derived from K-indices based on the table of equivalents shown in the APPENDIX.
The K index is the result of a 3-hourly magnetometer measurement comparing the current geomagnetic field orientation and intensity to what it would have been under geomagnetically "quiet" conditions. K index measurements are made at sites throughout the globe and each is carefully adjusted for the geomagnetic characteristics of its locality. The scale used is quasi logarithmic, increasing as the geomagnetic field becomes more disturbed. K indices range in value from 0 to 9. In the Geophysical Alert Broadcasts, the K index used is usually derived from magnetometer measurements made at the Table Mountain Observatory located just north of Boulder, Colorado. Every 3 hours new K indices are determined and the broadcasts are updated.
THE PAST 24 HOURS
"Solar-terrestrial conditions for the last 24 hours follow: Solar activity was (Very low, Low, Moderate, High, or Very high) , the geomagnetic field was (Quiet, Unsettled, Active, Minor storm, Major storm, Severe storm) ."
Solar activity is a measure of energy releases in the solar atmosphere, generally observed by X-ray detectors on earth-orbiting satellites. Somewhat different from longer-term Solar Flux measurements, Solar Activity data provide an overview of X-ray emissions that exceed prevailing levels. The five standard terms listed correspond to the following levels of enhanced X-ray emissions observed or predicted within a 24-hour period: Very Low - X-ray events less than C-class. Low - C-class x-ray events. Moderate - Isolated (1 to 4) M-class x-ray events. High - Several (5 or more) M-class x-ray events, or isolated (1 to 4) M5 or greater x-ray events. Very High - Several (5 or more) M5 or greater x-ray events.
The x-ray event classes listed correspond to a standardized method of classification based on the peak flux of the x-ray emissions as measured by detectors. Solar x-rays occupy a wide range of wavelengths with the portion used for flare classification from 0.1 through 0.8 nm. The classification scheme ranges in increasing x-ray peak flux from B-class events, through C- and M-class, to X-class events at the highest end (see APPENDIX).
In the Geophysical Alert Broadcasts, solar activity data provides an overview of x-ray emissions which might have effects on the quality of shortwave radio propagation. Large solar x-ray outbursts can produce sudden and extensive ionization in the lower regions of the earth's ionosphere which can rapidly increase shortwave signal absorption there. Occurring on the sun-facing side of the Earth, these sudden ionospheric disturbances are known as "shortwave fadeouts" and can degrade short wave communications for periods ranging from minutes to hours. They are characterized by the initial disappearance of signals on lower frequencies with subsequent fading up the frequency spectrum over a short period (usually less than an hour). Daytime HF communication disruptions due to high solar activity are more common during the years surrounding the peak of the solar cycle. The sun rotates once approximately every 27 days, often carrying active regions on its surface to where they again face the Earth; periods of disruption can recur at about this interval as a result.
Rule of Thumb: The higher the solar activity, the better the conditions on the higher frequencies (i.e. 15, 17, 21, and 25 MHz). During a solar X-ray outburst, the lower frequencies are the first to suffer. Remember too that signals crossing daylight paths will be the most affected. If you hear announcements on broadcast radio stations (e.g. Radio Netherlands) or via WWV/WWVH of such a solar disturbance try tuning to a HIGHER frequency. Higher frequencies are also the first to recover after a storm. Note that this is the opposite to disturbances indirectly caused by geomagnetic storms.
As an overall assessment of natural variations in the geomagnetic field, six standard terms are used in reporting geomagnetic activity. The terminology is based on the estimated A index for the 24-hour period directly preceding the time the broadcast was last updated:
Category - Range of A-index
Quiet - 0-7
Unsettled - 8-15
Active - 16-29
Minor storm - 30-49
Major storm - 50-99
Severe storm - 100-400
These standardized terms correspond to the range of a and A indices previously explained in the A INDEX section. Increasing geomagnetic activity corresponds to more and greater perturbations of the geomagnetic field as a result of variations in the solar wind and more energetic solar particle emissions. Using the earlier analogy, imagine the geomagnetic field to be like a weather vane in an increasingly violent windstorm. As the winds increase, the weather vane is continually buffeted by gusts and oscillates about the direction of the prevailing wind. Essentially, the reported geomagnetic activity category corresponds to how violently the geomagnetic field is being knocked about.
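The mapping from a daily A index to the six descriptive terms can be written as a small lookup, sketched below using the ranges from the table above.

def geomagnetic_category(a_index):
    """Map a daily A index (0-400) to the standard descriptive term."""
    if not 0 <= a_index <= 400:
        raise ValueError("A index must be between 0 and 400")
    if a_index <= 7:
        return "Quiet"
    if a_index <= 15:
        return "Unsettled"
    if a_index <= 29:
        return "Active"
    if a_index <= 49:
        return "Minor storm"
    if a_index <= 99:
        return "Major storm"
    return "Severe storm"

for a in (4, 12, 35, 120):
    print(a, geomagnetic_category(a))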
For shortwave radio spectrum users, high geomagnetic activity tends to degrade the quality of communications because geomagnetic field disturbances also diminish the capabilities of the ionosphere to propagate radio signals. In and near the auroral zone, absorption of radio energy in the ionosphere's D region (about 80 km high) can increase dramatically, especially in the lower portions of the HF band. Signals passing through these regions can become unusable. Geomagnetic disturbances in the middle latitudes can decrease the density of electrons in the ionosphere and thus the maximum radio frequency the region will propagate. Extended periods of geomagnetic activity known as geomagnetic storms can last for days. The impact on radio propagation during the storm depends on the level of solar flux and the severity of the geomagnetic field disturbance. During some geomagnetic storms, worldwide disruptions of the ionosphere are possible. Called ionospheric storms, short wave propagation via the ionosphere's F region (about 300 km high) can be affected. Here, middle latitude propagation can be diminished while propagation at low latitudes is improved. Ionospheric storms may or may not accompany geomagnetic activity, depending on the severity of the activity, its recent history, and the level of the solar flux.
Rule of thumb: Oversimplification is dangerous in the complex field of propagation. We know much less about the "radio weather" than ordinary weather. In general though, for long distance medium-wave listening, the A index should be under 14, and the solar activity low-moderate. If the A-index drops under 7 for a few days in a row (usually during sunspot minimum conditions) look out for really excellent intercontinental conditions (e.g. trans Atlantic reception).
During minor geomagnetic storms, signals from the equatorial regions of the world are least affected. On the 60 and 90 metre tropical bands you can expect interference from utility stations in Europe/North America/Australia to be lower. Sometimes, this means that weaker signals from the tropics can get through, albeit they may suffer fluttery fading. Signals on the higher frequencies fade out first during a geomagnetic storm. Signals that travel anywhere near the North or South Pole may disappear or suffer chronic fading.
FORECAST FOR THE NEXT 24 HOURS
"The forecast for the next 24 hours follows: Solar activity will be (Very low, Low, Moderate, High, or Very high). The geomagnetic field will be (Quiet, Unsettled, Active, Minor storm, Major storm, Severe storm)."
The quantitative criteria for the solar activity forecast are identical to the "Conditions for the past 24 hours" portion of the broadcast as explained previously except that the forecaster is using all available measurement and trend information to make as informed a projection as possible.
Some of the key elements in making the forecast include the number and types of sunspots and other regions of interest on the sun's surface as well as what kinds of energetic events have occurred recently.
The same six standardized terms are used as in the "Conditions for the past 24 hours" portion of the broadcast with the forecast mainly based on current geomagnetic activity, recent events on the sun whose effects could influence geomagnetic conditions, and longer-term considerations such as the time of year and the state of the sunspot cycle.
a index. A 3-hourly "equivalent amplitude" of geomagnetic activity for a specific station or network of stations expressing the range of disturbance of the geomagnetic field. The a index is scaled from the 3-hourly K index according to the following table:
K   0   1   2   3   4   5   6   7    8    9
a   0   3   7   15  27  48  80  140  240  400
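Using this table, the eight 3-hourly K indices for a day can be converted to their equivalent amplitudes and averaged to give the daily A index, as in the sketch below. The example K values are invented for illustration.

# Equivalent amplitude a for each K index value 0-9, from the table above
K_TO_A = [0, 3, 7, 15, 27, 48, 80, 140, 240, 400]

def daily_a_index(k_values):
    """Average the equivalent amplitudes of eight 3-hourly K indices."""
    if len(k_values) != 8:
        raise ValueError("a full day needs eight 3-hourly K indices")
    return sum(K_TO_A[k] for k in k_values) / 8

# Invented example day: quiet morning, more disturbed evening
print(daily_a_index([1, 1, 2, 2, 3, 4, 5, 4]))   # -> 17.125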
X-Ray flares from the Sun
Solar flares are rated according to the extent to which they emit x-rays in the 1 to 8 Angstrom band. Solar x-rays are classified into one of 5 different categories. These classes are categorized as follows:
Class A: X-ray emissions that are less than 10^-7 watts per square meter (or Wm^-2).
Class B: X-ray emissions that range between 10^-7 and 10^-6 Wm^-2.
Class C: X-rays that range between 10^-6 and 10^-5 Wm^-2.
Class M: X-rays that range between 10^-5 and 10^-4 Wm^-2.
Class X: X-rays that reach or exceed 10^-4 Wm^-2.
You will notice that each of these classifications differs from each other neighboring class by a power of 10. In other words, x-rays behave according to a power-law. Class B flares are 10 times more powerful in x-rays than Class A flares. Similarly, class X-flares are 1,000 times more powerful than class B flares. Most solar flares never reach M-class levels. Those that do are considered minor energetic flares. Solar flares that reach or exceed 5.0 x 10^-5 watts per square meter (class M5.0 or larger) are considered major energetic M-class flares. These types of major flares are much less frequent than minor M-class flares, which can occur a handful of times each month during the more active years near the solar maximum. X-class solar flares are the rarest and often the most powerful of all types of solar flares. They can occur at any time during the solar cycle, but prefer periods near the solar maximum when sunspot regions are complex enough to generate the dynamics required to produce such powerful solar explosions.
Ranking of a flare based on its x-ray output. Flares are classified according to the order of magnitude of the peak burst intensity (I) measured at the Earth in the 0.1 to 0.8 nm wavelength band as follows:
Class   Peak intensity I in the 0.1 to 0.8 nm band (watts per square metre)
B       I < 10^-6
C       10^-6 ≤ I < 10^-5
M       10^-5 ≤ I < 10^-4
X       I ≥ 10^-4
A multiplier is used to indicate the level within each class.
M6 = 6 × 10^-5 watts per square metre
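The mapping from a measured peak flux to a class designation can be sketched as follows. This is an illustrative Python snippet; the function name and the handling of class A (multiplier relative to 10^-8) are my own assumptions, not part of the official definition quoted above.

def flare_class(peak_flux):
    # peak_flux: peak 0.1 - 0.8 nm X-ray flux at Earth, in watts per square metre.
    for letter, base in (("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7)):
        if peak_flux >= base:
            return "%s%.1f" % (letter, peak_flux / base)
    return "A%.1f" % (peak_flux / 1e-8)

print(flare_class(6e-5))    # "M6.0", i.e. 6 x 10^-5 W/m^2, as in the example above
print(flare_class(2.3e-4))  # "X2.3"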
Flares are sometimes associated with what is known as a Type II spectral radio burst (or a sweep frequency event). These radio emissions are produced when a shock wave from the solar flare excites the lower coronal plasma as it propagates outward. Type II sweeps are often an indication of a coronal mass ejection and such events are fairly common with large X-class solar flares.
The x-rays from flares can be intense enough to have a considerable impact on ionospheric radio communications. Ionospherically propagated radio signals that travel through the sunlit hemisphere of the Earth can experience heavy absorption caused by the intense x-rays from the solar flare. In some cases the absorption can be strong enough to completely black out radio communications between points more than 3,000 to 4,000 km apart, at frequencies as high as 10 MHz, for 15 to 30 minutes. Weaker absorption can keep signal strengths below normal for an additional 20 to 30 minutes. | http://www.qsl.net/co8tw/propagat.htm | 13
62 | In about 300 BC Euclid wrote The Elements, a book which was to become one of the most famous books ever written. Euclid stated five postulates on which he based all his theorems:
1. A straight line segment can be drawn joining any two points.
2. Any straight line segment can be extended indefinitely in a straight line.
3. Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as centre.
4. All right angles are equal to one another.
5. If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, then the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles.
Proclus (410-485) wrote a commentary on The Elements where he comments on attempted proofs to deduce the fifth postulate from the other four, in particular he notes that Ptolemy had produced a false 'proof'. Proclus then goes on to give a false proof of his own. However he did give the following postulate which is equivalent to the fifth postulate.
Playfair's Axiom:- Given a line and a point not on the line, it is possible to draw exactly one line through the given point parallel to the line.
Although known from the time of Proclus, this became known as Playfair's Axiom after John Playfair wrote a famous commentary on Euclid in 1795 in which he proposed replacing Euclid's fifth postulate by this axiom.
Many attempts were made to prove the fifth postulate from the other four, many of them being accepted as proofs for long periods of time until the mistake was found. Invariably the mistake was assuming some 'obvious' property which turned out to be equivalent to the fifth postulate. One such 'proof' was given by Wallis in 1663 when he thought he had deduced the fifth postulate, but he had actually shown it to be equivalent to:-
To each triangle, there exists a similar triangle of arbitrary magnitude.
One of the attempted proofs turned out to be more important than most others. It was produced in 1697 by Girolamo Saccheri. The importance of Saccheri's work was that he assumed the fifth postulate false and attempted to derive a contradiction.
Here is the Saccheri quadrilateral ABCD (figure not shown): the sides AD and BC are equal and both perpendicular to the base AB, and the angles at C and D are called the summit angles.
In this figure Saccheri proved that the summit angles at D and C were equal. The proof uses properties of congruent triangles which Euclid proved in Propositions 4 and 8, which are proved before the fifth postulate is used. For the summit angles there are then three possibilities:
a) The summit angles are > 90° (hypothesis of the obtuse angle).
b) The summit angles are < 90° (hypothesis of the acute angle).
c) The summit angles are = 90° (hypothesis of the right angle).
Euclid's fifth postulate is equivalent to c). Saccheri proved that the hypothesis of the obtuse angle implied the fifth postulate, so obtaining a contradiction. Saccheri then studied the hypothesis of the acute angle and derived many theorems of non-Euclidean geometry without realising what he was doing. However he eventually 'proved' that the hypothesis of the acute angle also led to a contradiction, by assuming that there is a 'point at infinity' which lies on a plane.
In 1766 Lambert followed a similar line to Saccheri. However he did not fall into the trap that Saccheri fell into and investigated the hypothesis of the acute angle without obtaining a contradiction. Lambert noticed that, in this new geometry, the angle sum of a triangle increased as the area of the triangle decreased.
Legendre spent 40 years of his life working on the parallel postulate and the work appears in appendices to various editions of his highly successful geometry book Eléments de Géométrie. Legendre proved that Euclid's fifth postulate is equivalent to:-
The sum of the angles of a triangle is equal to two right angles.
Legendre showed, as Saccheri had over 100 years earlier, that the sum of the angles of a triangle cannot be greater than two right angles. This, again like Saccheri, rested on the fact that straight lines were infinite. In trying to show that the angle sum cannot be less than 180° Legendre assumed that through any point in the interior of an angle it is always possible to draw a line which meets both sides of the angle. This turns out to be another equivalent form of the fifth postulate, but Legendre never realised his error himself.
Elementary geometry was by this time engulfed in the problems of the parallel postulate. D'Alembert, in 1767, called it the scandal of elementary geometry.
The first person to really come to understand the problem of the parallels was Gauss. He began work on the fifth postulate in 1792 while only 15 years old, at first attempting to prove the parallels postulate from the other four. By 1813 he had made little progress and wrote:
In the theory of parallels we are even now not further than Euclid. This is a shameful part of mathematics...
However by 1817 Gauss had become convinced that the fifth postulate was independent of the other four postulates. He began to work out the consequences of a geometry in which more than one line can be drawn through a given point parallel to a given line. Perhaps most surprisingly of all Gauss never published this work but kept it a secret. At this time thinking was dominated by Kant who had stated that Euclidean geometry is the inevitable necessity of thought and Gauss disliked controversy.
Gauss discussed the theory of parallels with his friend, the mathematician Farkas Bolyai, who made several false proofs of the parallel postulate. Farkas Bolyai taught his son, János Bolyai, mathematics but, despite advising his son not to waste one hour's time on the problem of the fifth postulate, János Bolyai did work on the problem.
In 1823 János Bolyai wrote to his father saying I have discovered things so wonderful that I was astounded ... out of nothing I have created a strange new world. However it took Bolyai a further two years before it was all written down and he published his strange new world as a 24 page appendix to his father's book, although just to confuse future generations the appendix was published before the book itself.
Gauss, after reading the 24 pages, described János Bolyai in these words while writing to a friend: I regard this young geometer Bolyai as a genius of the first order. However in some sense Bolyai only assumed that the new geometry was possible. He then followed the consequences in a not too dissimilar fashion from those who had chosen to assume the fifth postulate was false and seek a contradiction. However the real breakthrough was the belief that the new geometry was possible. Gauss, however impressed he sounded in the above quote with Bolyai, rather devastated Bolyai by telling him that he (Gauss) had discovered all this earlier but had not published. Although this must undoubtedly have been true, it detracts in no way from Bolyai's incredible breakthrough.
Nor is Bolyai's work diminished because Lobachevsky published a work on non-Euclidean geometry in 1829. Neither Bolyai nor Gauss knew of Lobachevsky's work, mainly because it was only published in Russian in the Kazan Messenger a local university publication. Lobachevsky's attempt to reach a wider audience had failed when his paper was rejected by Ostrogradski.
In fact Lobachevsky fared no better than Bolyai in gaining public recognition for his momentous work. He published Geometrical investigations on the theory of parallels in 1840 which, in its 61 pages, gives the clearest account of Lobachevsky's work. The publication of an account in French in Crelle's Journal in 1837 brought his work on non-Euclidean geometry to a wide audience but the mathematical community was not ready to accept ideas so revolutionary.
In Lobachevsky's 1840 booklet he explains clearly how his non-Euclidean geometry works.
All straight lines which in a plane go out from a point can, with reference to a given straight line in the same plane, be divided into two classes - into cutting and non-cutting. The boundary lines of the one and the other class of those lines will be called parallel to the given line.
Here is Lobachevsky's diagram (figure not shown).
Hence Lobachevsky has replaced the fifth postulate of Euclid by:-
Lobachevsky's Parallel Postulate. There exist two lines parallel to a given line through a given point not on the line.
Lobachevsky went on to develop many trigonometric identities for triangles which held in this geometry, showing that as the triangle became small the identities tended to the usual trigonometric identities.
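One standard example of such an identity, stated here in modern notation with a curvature parameter k rather than in Lobachevsky's own notation: for a triangle with a right angle, legs a, b and hypotenuse c,

\cosh\frac{c}{k} = \cosh\frac{a}{k}\,\cosh\frac{b}{k} .

Expanding each cosh as 1 + x^2/2 + ... shows that when the sides are much smaller than k this reduces to c^2 ≈ a^2 + b^2, the ordinary theorem of Pythagoras, which is exactly the limiting behaviour described above.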
Riemann, who wrote his doctoral dissertation under Gauss's supervision, gave an inaugural lecture on 10 June 1854 in which he reformulated the whole concept of geometry which he saw as a space with enough extra structure to be able to measure things like length. This lecture was not published until 1868, two years after Riemann's death but was to have a profound influence on the development of a wealth of different geometries. Riemann briefly discussed a 'spherical' geometry in which every line through a point P not on a line AB meets the line AB. In this geometry no parallels are possible.
It is important to realise that neither Bolyai's nor Lobachevsky's description of their new geometry had been proved to be consistent. In fact it was no different from Euclidean geometry in this respect although the many centuries of work with Euclidean geometry was sufficient to convince mathematicians that no contradiction would ever appear within it.
The first person to put the Bolyai - Lobachevsky non-Euclidean geometry on the same footing as Euclidean geometry was Eugenio Beltrami (1835-1900). In 1868 he wrote a paper Essay on the interpretation of non-Euclidean geometry which produced a model for 2-dimensional non-Euclidean geometry within 3-dimensional Euclidean geometry. The model was obtained on the surface of revolution of a tractrix about its asymptote. This is sometimes called a pseudo-sphere.
(The original article shows the graph of a tractrix and the top half of a pseudo-sphere.)
In fact Beltrami's model was incomplete but it certainly gave a final decision on the fifth postulate of Euclid since the model provided a setting in which Euclid's first four postulates held but the fifth did not hold. It reduced the problem of consistency of the axioms of non-Euclidean geometry to that of the consistency of the axioms of Euclidean geometry.
Beltrami's work on a model of Bolyai - Lobachevsky's non-Euclidean geometry was completed by Klein in 1871. Klein went further than this and gave models of other non-Euclidean geometries such as Riemann's spherical geometry. Klein's work was based on a notion of distance defined by Cayley in 1859 when he proposed a generalised definition for distance.
Klein showed that there are three basically different types of geometry. In the Bolyai - Lobachevsky type of geometry, straight lines have two infinitely distant points. In the Riemann type of spherical geometry, lines have no (or more precisely two imaginary) infinitely distant points. Euclidean geometry is a limiting case between the two where for each line there are two coincident infinitely distant points.
Article by: J J O'Connor and E F Robertson | http://turnbull.dcs.st-and.ac.uk/history/HistTopics/Non-Euclidean_geometry.html | 13
50 | Although many of the weather satellite systems (such as those described in the previous section) are also used for monitoring the Earth's surface, they are not optimized for detailed mapping of the land surface. Driven by the exciting views from, and great success of, the early meteorological satellites in the 1960s, as well as from images taken during manned spacecraft missions, the first satellite designed specifically to monitor the Earth's surface, Landsat-1, was launched by NASA in 1972. Initially referred to as ERTS-1 (Earth Resources Technology Satellite), Landsat was designed as an experiment to test the feasibility of collecting multi-spectral Earth observation data from an unmanned satellite platform. Since that time, this highly successful program has collected an abundance of data from around the world from several Landsat satellites. Originally managed by NASA, responsibility for the Landsat program was transferred to NOAA in 1983. In 1985, the program became commercialized, providing data to civilian and applications users.
Landsat's success is due to several factors, including: a combination of sensors with spectral bands tailored to Earth observation; functional spatial resolution; and good areal coverage (swath width and revisit period). The long lifespan of the program has provided a voluminous archive of Earth resource data facilitating long term monitoring and historical records and research. All Landsat satellites are placed in near-polar, sun-synchronous orbits. The first three satellites (Landsats 1-3) are at altitudes around 900 km and have revisit periods of 18 days while the later satellites are at around 700 km and have revisit periods of 16 days. All Landsat satellites have equator crossing times in the morning to optimize illumination conditions.
A number of sensors have been on board the Landsat series of satellites, including the Return Beam Vidicon (RBV) camera systems, the MultiSpectral Scanner (MSS) systems, and the Thematic Mapper (TM). The most popular instrument in the early days of Landsat was the MultiSpectral Scanner (MSS) and later the Thematic Mapper (TM). Each of these sensors collected data over a swath width of 185 km, with a full scene being defined as 185 km x 185 km.
The MSS senses the electromagnetic radiation from the Earth's surface in four spectral bands. Each band has a spatial resolution of approximately 60 x 80 metres and a radiometric resolution of 6 bits, or 64 digital numbers. Sensing is accomplished with a line scanning device using an oscillating mirror. Six scan lines are collected simultaneously with each west-to-east sweep of the scanning mirror. The accompanying table outlines the spectral wavelength ranges for the MSS.
|Channel (Landsat 1,2,3)||Channel (Landsat 4,5)||Wavelength Range (μm)|
|MSS 4||MSS 1||0.5 - 0.6 (green)|
|MSS 5||MSS 2||0.6 - 0.7 (red)|
|MSS 6||MSS 3||0.7 - 0.8 (near infrared)|
|MSS 7||MSS 4||0.8 - 1.1 (near infrared)|
Routine collection of MSS data ceased in 1992, as the use of TM data, starting on Landsat 4, superseded the MSS. The TM sensor provides several improvements over the MSS sensor including: higher spatial and radiometric resolution; finer spectral bands; seven as opposed to four spectral bands; and an increase in the number of detectors per band (16 for the non-thermal channels versus six for MSS). Sixteen scan lines are captured simultaneously for each non-thermal spectral band (four for thermal band), using an oscillating mirror which scans during both the forward (west-to-east) and reverse (east-to-west) sweeps of the scanning mirror. This difference from the MSS increases the dwell time (see section 2.8) and improves the geometric and radiometric integrity of the data. Spatial resolution of TM is 30 m for all but the thermal infrared band which is 120 m. All channels are recorded over a range of 256 digital numbers (8 bits). The accompanying table outlines the spectral resolution of the individual TM bands and some useful applications of each.
|Channel||Wavelength Range (µm)||Application|
|TM 1||0.45 - 0.52 (blue)||soil/vegetation discrimination; bathymetry/coastal mapping; cultural/urban feature identification|
|TM 2||0.52 - 0.60 (green)||green vegetation mapping (measures reflectance peak); cultural/urban feature identification|
|TM 3||0.63 - 0.69 (red)||vegetated vs. non-vegetated and plant species discrimination (plant chlorophyll absorption); cultural/urban feature identification|
|TM 4||0.76 - 0.90 (near IR)||identification of plant/vegetation types, health, and biomass content; water body delineation; soil moisture|
|TM 5||1.55 - 1.75 (short wave IR)||sensitive to moisture in soil and vegetation; discriminating snow and cloud-covered areas|
|TM 6||10.4 - 12.5 (thermal IR)||vegetation stress and soil moisture discrimination related to thermal radiation; thermal mapping (urban, water)|
|TM 7||2.08 - 2.35 (short wave IR)||discrimination of mineral and rock types; sensitive to vegetation moisture content|
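As a rough, back-of-envelope illustration of what these figures imply for data volume (the estimate below is my own, not from the tutorial):

# Approximate raw size of one full 185 km x 185 km TM scene.
pixels_30m = (185000 // 30) ** 2     # about 38 million pixels per 30 m band
pixels_120m = (185000 // 120) ** 2   # about 2.4 million pixels for the 120 m thermal band
bytes_total = (6 * pixels_30m + pixels_120m) * 1   # 8 bits = 1 byte per pixel
print(bytes_total / 1e6, "MB")       # roughly 230 MB of raw image data per scene

A figure of this order per scene helps explain why the archives mentioned below run to hundreds of thousands of scenes' worth of data.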
Data from both the TM and MSS sensors are used for a wide variety of applications, including resource management, mapping, environmental monitoring, and change detection (e.g. monitoring forest clearcutting). The archives of Canadian imagery include over 350,000 scenes of MSS and over 200,000 scenes of TM, managed by the licensed distributor in Canada: RSI Inc. Many more scenes are held by foreign facilities around the world.
SPOT (Système Pour l'Observation de la Terre) is a series of Earth observation imaging satellites designed and launched by CNES (Centre National d'Études Spatiales) of France, with support from Sweden and Belgium. SPOT-1 was launched in 1986, with successors following every three or four years. All satellites are in sun-synchronous, near-polar orbits at altitudes around 830 km above the Earth, which results in orbit repetition every 26 days. They have equator crossing times around 10:30 AM local solar time. SPOT was designed to be a commercial provider of Earth observation data, and was the first satellite to use along-track, or pushbroom scanning technology.
The SPOT satellites each have twin high resolution visible (HRV) imaging systems, which can be operated independently and simultaneously. Each HRV is capable of sensing either in a high spatial resolution single-channel panchromatic (PLA) mode, or a coarser spatial resolution three-channel multispectral (MLA) mode. Each along-track scanning HRV sensor consists of four linear arrays of detectors: one 6000 element array for the panchromatic mode recording at a spatial resolution of 10 m, and one 3000 element array for each of the three multispectral bands, recording at 20 m spatial resolution. The swath width for both modes is 60 km at nadir. The accompanying table illustrates the spectral characteristics of the two different modes.
|Mode/Band||Wavelength Range (μm)|
|Panchromatic (PLA)||0.51 - 0.73 (blue-green-red)|
|Band 1||0.50 - 0.59 (green)|
|Band 2||0.61 - 0.68 (red)|
|Band 3||0.79 - 0.89 (near infrared)|
The viewing angle of the sensors can be adjusted to look to either side of the satellite's vertical (nadir) track, allowing off-nadir viewing which increases the satellite's revisit capability. This ability to point the sensors up to 27° from nadir, allows SPOT to view within a 950 km swath and to revisit any location several times per week. As the sensors point away from nadir, the swath varies from 60 to 80 km in width. This not only improves the ability to monitor specific locations and increases the chances of obtaining cloud free scenes, but the off-nadir viewing also provides the capability of acquiring imagery for stereoscopic coverage. By recording the same area from two different angles, the imagery can be viewed and analyzed as a three dimensional model, a technique of tremendous value for terrain interpretation, mapping, and visual terrain simulations.
This oblique viewing capability increases the revisit frequency of equatorial regions to three days (seven times during the 26 day orbital cycle). Areas at a latitude of 45° can be imaged more frequently (11 times in 26 days) due to the convergence of orbit paths towards the poles. By pointing both HRV sensors to cover adjacent ground swaths at nadir, a swath of 117 km (3 km overlap between the two swaths) can be imaged. In this mode of operation, either panchromatic or multispectral data can be collected, but not both simultaneously.
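The 950 km accessible corridor quoted above can be checked with a little spherical geometry. The Python sketch below is illustrative; the altitude, Earth radius and the allowance of roughly one swath width at the extremes are my own approximations.

import math

R, h = 6371.0, 830.0            # mean Earth radius and approximate SPOT altitude, km
theta = math.radians(27.0)      # maximum off-nadir pointing angle
# Angle at the Earth's centre between the sub-satellite point and the viewed point
gamma = math.asin((R + h) / R * math.sin(theta)) - theta
offset = R * gamma              # ground offset of the swath centre from the nadir track, ~430 km
print(2 * offset + 80)          # ~940 km accessible corridor, close to the quoted 950 km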
SPOT has a number of benefits over other spaceborne optical sensors. Its fine spatial resolution and pointable sensors are the primary reasons for its popularity. The three-band multispectral data are well suited to displaying as false-colour images and the panchromatic band can also be used to "sharpen" the spatial detail in the multispectral data. SPOT allows applications requiring fine spatial detail (such as urban mapping) to be addressed while retaining the cost and timeliness advantage of satellite data. The potential applications of SPOT data are numerous. Applications requiring frequent monitoring (agriculture, forestry) are well served by the SPOT sensors. The acquisition of stereoscopic imagery from SPOT has played an important role in mapping applications and in the derivation of topographic information (Digital Elevation Models - DEMs) from satellite data.
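As a minimal sketch of what such "sharpening" can look like in practice, here is a simple Brovey-style ratio transform in Python/NumPy. The array names are illustrative, and it is assumed that the 10 m panchromatic band has already been co-registered and resampled onto the 20 m multispectral grid.

import numpy as np

def brovey_sharpen(ms, pan):
    # ms: (3, H, W) multispectral bands; pan: (H, W) panchromatic band on the same grid.
    intensity = ms.sum(axis=0) + 1e-6    # summed intensity; the small term avoids division by zero
    return ms * (pan / intensity)        # rescale each band by the pan-to-intensity ratio

Operational pan-sharpening methods are more sophisticated, but the idea is the same: the fine spatial detail of the panchromatic band modulates the coarser multispectral values.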
The Indian Remote Sensing (IRS) satellite series combines features from both the Landsat MSS/TM sensors and the SPOT HRV sensor. The third satellite in the series, IRS-1C, launched in December 1995, has three sensors: a single-channel panchromatic (PAN) high resolution camera, a medium resolution four-channel Linear Imaging Self-scanning Sensor (LISS-III), and a coarse resolution two-channel Wide Field Sensor (WiFS). The accompanying table outlines the specific characteristics of each sensor.
|Sensor||Band||Wavelength Range (μm)||Spatial Resolution (m)||Swath Width (km)||Revisit Period (days)|
|PAN||Panchromatic||0.5 - 0.75||5.8||70||24|
|LISS-III||Green||0.52 - 0.59||23||142||24|
|LISS-III||Red||0.62 - 0.68||23||142||24|
|LISS-III||Near IR||0.77 - 0.86||23||142||24|
|LISS-III||Shortwave IR||1.55 - 1.70||70||148||24|
|WiFS||Red||0.62 - 0.68||188||774||5|
|WiFS||Near IR||0.77 - 0.86||188||774||5|
In addition to its high spatial resolution, the panchromatic sensor can be steered up to 26° across-track, enabling stereoscopic imaging and increased revisit capabilities (as few as five days), similar to SPOT. This high resolution data is useful for urban planning and mapping applications. The four LISS-III multispectral bands are similar to Landsat's TM bands 1 to 4 and are excellent for vegetation discrimination, land-cover mapping, and natural resource planning. The WiFS sensor is similar to NOAA AVHRR bands and the spatial resolution and coverage is useful for regional scale vegetation monitoring.
MEIS-II and CASI
Although this tutorial concentrates on satellite-borne sensors, it is worth mentioning a couple of Canadian airborne sensors which have been used for various remote sensing applications, as these systems (and others like them) have influenced the design and development of satellite systems. The first is the MEIS-II (Multispectral Electro-optical Imaging Scanner) sensor developed for the Canada Centre for Remote Sensing. Although no longer active, MEIS was the first operational use of pushbroom, or along-track scanning technology in an airborne platform. The sensor collected 8-bit data (256 digital numbers) in eight spectral bands ranging from 0.39 to 1.1 μm, using linear arrays of 1728 detectors per band. The specific wavelength ranges were selectable, allowing different band combinations to be used for different applications. Stereo imaging from a single flight line was also possible, with channels aimed ahead of and behind nadir, supplementing the other nadir facing sensors. Both the stereo mapping and the selectable band capabilities were useful in research and development which was applied to development of other satellite (and airborne) sensor systems.
CASI, the Compact Airborne Spectrographic Imager, is a leader in airborne imaging, being the first commercial imaging spectrometer. This hyperspectral sensor detects a vast array of narrow spectral bands in the visible and infrared wavelengths, using along-track scanning. The spectral range covered by the 288 channels is between 0.4 and 0.9 μm. Each band covers a wavelength range of 0.018 μm. While spatial resolution depends on the altitude of the aircraft, the spectral bands measured and the bandwidths used are all programmable to meet the user's specifications and requirements. Hyperspectral sensors such as this can be important sources of diagnostic information about specific targets' absorption and reflection characteristics, in effect providing a spectral 'fingerprint'. Experimentation with CASI and other airborne imaging spectrometers has helped guide the development of hyperspectral sensor systems for advanced satellite systems.
Did you know?
"...Land, Ho, matey!..."
...the ERTS (Earth Resources Technology Satellite) program was renamed to Landsat just prior to the launch of the second satellite in the series. The Landsat title was used to distinguish the program from another satellite program in the planning stages, called Seasat, intended primarily for oceanographic applications. The first (and only) Seasat satellite was successfully launched in 1978, but unfortunately was only operational for 99 days. Even though the satellite was short-lived and the Seasat program was discontinued, it collected some of the first RADAR images from space which helped heighten the interest in satellite RADAR remote sensing. Today, several RADAR satellites are operational or planned. We will learn more about RADAR and these satellites in the next chapter.
...originally the MSS sensor numbering scheme (bands 4, 5, 6, and 7) came from their numerical sequence after the three bands of the RBV (Return Beam Vidicon) sensors. However, due to technical malfunctions with the RBV sensor and the fact that it was dropped from the satellite sensor payload with the launch of Landsat-4, the MSS bands were renumbered from 1 to 4. For the TM sensor, if we look at the wavelength ranges for each of the bands, we see that TM6 and TM7 are out of order in terms of increasing wavelength. This was because the TM7 channel was added as an afterthought late in the original system design process.
Explain why data from the Landsat TM sensor might be considered more useful than data from the original MSS sensor. Hint: Think about their spatial, spectral, and radiometric resolutions.
The answer is...
Whiz quiz - answer
There are several reasons why TM data may be considered more useful than MSS data. Although the areal coverage of a TM scene is virtually the same as a MSS scene, TM offers higher spatial, spectral, and radiometric resolution. The spatial resolution is 30 m compared to 80 m (except for the TM thermal channels, which are 120 m to 240 m). Thus, the level of spatial detail detectable in TM data is better. TM has more spectral channels which are narrower and better placed in the spectrum for certain applications, particularly vegetation discrimination. In addition, the increase from 6 bits to 8 bits for data recording represents a four-fold increase in the radiometric resolution of the data. (Remember, 6 bits = 2^6 = 64 and 8 bits = 2^8 = 256; therefore 256/64 = 4.) However, this does not mean that TM data are "better" than MSS data. Indeed, MSS data are still used to this day and provide an excellent data source for many applications. If the desired information cannot be extracted from MSS data, then perhaps the higher spatial, spectral, and radiometric resolution of TM data may be more useful. | http://www.nrcan.gc.ca/earth-sciences/geography-boundary/remote-sensing/fundamentals/2042 | 13