A CIRCLE is a plane figure bounded by one line called the circumference, such that all straight lines drawn from the center to the circumference are equal to one another.
A straight line from the center to the circumference is called a radius. A diameter is a straight line through the center and terminating in both directions on the circumference.
A radius, then, is half of a diameter; or, equivalently, a diameter is twice a radius:
D = 2r.
The definition of π
The student no doubt knows a value for the famous irrational number π -- approximately 3.14 -- but that is not its definition. What, in fact, is the meaning of the symbol "π"?
π symbolizes the ratio -- the relationship with respect to relative size -- of the circumference C of a circle to its diameter D: π = C/D.
So when we say that π is approximately 3.14, we mean that the circumference of a circle is a little more than three times the diameter.
It should be intuitively clear that π cannot be a rational number, because it indicates the ratio of a curved line to a straight. And to name such a ratio exactly is impossible. In the next Topic, we will see how to approximate π.
In any case, since C/D = π, then C = πD,
and we use that as a formula for calculating the circumference of a circle:
C = πD.
Or, since D = 2r,
C = π· 2r = 2πr.
Problem 1. Calculate the circumference of each circle. Take π = 3.14.
a) The diameter is 5 cm. 3.14 × 5 = 15.7 cm
b) The radius is 5 cm. 3.14 × 2 × 5 = 3.14 × 10 = 31.4 cm
Problem 2. The average distance of the earth from the sun is approximately 93 million miles; assuming that the earth's path around the sun is a circle, approximately how many miles does the earth travel in a year?
C = π × 2r = 3.14 × 2 × 93 million = 584.04 million miles.
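As a rough check of these answers, the same arithmetic can be run as a short Python sketch (using the rounded value π = 3.14, as in the problems):

```python
# Circumference from a diameter or a radius, using the rounded value pi = 3.14
PI = 3.14

def circumference_from_diameter(d):
    return PI * d          # C = pi * D

def circumference_from_radius(r):
    return PI * 2 * r      # C = 2 * pi * r

print(circumference_from_diameter(5))          # about 15.7 cm   (Problem 1a)
print(circumference_from_radius(5))            # about 31.4 cm   (Problem 1b)
print(circumference_from_radius(93_000_000))   # about 584,040,000 miles (Problem 2)
```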
How do we know that the circumference of every circle has the same ratio π to its diameter? The following theorem assures us.
Circles are to one another as their circumscribed squares.
Remarkably enough, as we will see, the theorem applies both to the boundaries and the areas:
First, statement 1): If C1, C2 are the circumferences of any two circles, and D1, D2 their diameters, then C1 : C2 = D1 : D2, and therefore, alternately, C1 : D1 = C2 : D2. That is, the ratio of the circumference to the diameter is the same number for every circle; that common ratio is what has
been called π, so that in every circle C = πD.
Now, the perimeter of the circumscribed square is 4D, so the circumference is π/4 of the perimeter of the circumscribed square. (And since π is a bit more than 3, we see that the circumference is a bit more than three fourths of that perimeter.)
Next, statement 2):
If A1, A2 are the areas of any two circles, and D1, D2 their diameters, then A1 : A2 = D1² : D2²,
and therefore alternately, A1 : D1² = A2 : D2². That is, every circle has the same ratio to the square on its diameter -- which is the circumscribed square.
For when we prove that the area of a circle is πr², which is equal to (π/4)D²
-- then we have one of the most remarkable theorems in all of geometry:
The circumference of a circle is to the perimeter of the circumscribed square as the area of the circle is to the area of that square: each ratio is π to 4.
Again, since π is approximately 3, then just as the circumference is a bit more than three fourths of the perimeter of the square, so the area of the circle will be a bit more than three fourths of the area of the square.
What is more, it is possible to prove the following:
π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − ...
If we took just the first two terms of that series, that would tell us that the circle is approximately two thirds of the square (1 − 1/3 = 2/3). If we took three terms, we would know that the circle is approximately thirteen fifteenths of the square (1 − 1/3 + 1/5 = 13/15). Each term brings us a little more and then a little less than the actual ratio that the circle has to the circumscribed square. But we can never write that ratio down exactly.
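A minimal Python sketch of those partial sums (assuming the series shown above) makes the alternating over- and under-estimates visible:

```python
# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ..., which the text compares to pi/4
import math

partial = 0.0
for n in range(6):
    partial += (-1) ** n / (2 * n + 1)
    print(n + 1, "terms:", partial)

print("pi/4 =", math.pi / 4)   # the partial sums straddle this value
```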
If there is anything in mathematics that deserves to be called beautiful, it is here. We find such beauty especially when geometry is reflected by simple arithmetic. The Pythagorean theorem is another example: 3² + 4² = 5². We discover those relationships in those archetypal forms. We do not invent them.
Source: http://themathpage.com/aTrig/circle.htm
Translations, Reflections and Rotations
Before we continue, you need to know what Translations, Reflections, and Rotations are. Let us start with the following image of a triangle.
- Translation means moving the image horizontally (along the x axis) or vertically (along the y axis).
The image above was translated a few units horizontally (along the x axis) and a few units vertically (along the y axis).
- Reflection means flipping the image either over the x axis (a horizontal line) or over the y axis (a vertical line).
The image above was flipped over the x axis (a horizontal line).
- Rotation means turning the image about a pivot point.
The image above was rotated 90 degrees clockwise from a certain pivot point.
Notice that in all of the operations performed on the triangle above, none of the operations changed the angles of the triangle, or the lengths of any of the line segments. In all of the operations shown above, the only things that change are the location of the three points that make up the triangle.
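One way to check this numerically is to apply a translation, a reflection over the x axis, and a rotation to the vertices of an example triangle and compare side lengths; the short Python sketch below is illustrative only (the triangle and the amount of translation are arbitrary choices):

```python
import math

def side_lengths(pts):
    """Lengths of the three sides of a triangle given as (x, y) vertices."""
    return sorted(math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3))

triangle = [(0, 0), (4, 0), (0, 3)]

translated = [(x + 2, y + 5) for x, y in triangle]       # translation
reflected  = [(x, -y) for x, y in triangle]              # reflection over the x axis
rotated    = [(-y, x) for x, y in triangle]              # 90-degree rotation about the origin

for image in (translated, reflected, rotated):
    print(side_lengths(image) == side_lengths(triangle)) # True, True, True
```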
Translations, reflections and rotations do not fundamentally change a shape. Terms for shapes that undergo any of these transformations are covered in the next section.
Congruence and Similarity
Intuitively, congruent shapes are shapes that are exactly the same. Technically speaking, two shapes are congruent if you can translate, rotate and/or reflect one of them in such a way that it coincides exactly with the other shape. Hence, a shape may be translated, reflected or rotated and remain congruent to its counterpart.
These two triangles above are congruent, even though they are rotations of each other.
Similar shapes are shapes that, when scaled, are exactly the same. A shape may be translated, reflected or rotated and remain similar to its counterpart. In a sense, similar shapes are scale models of each other, that is, they are proportional.
These triangles are similar because when one is scaled down, reflected, and rotated, it becomes congruent with the other.
- Congruent Shapes - Shapes that coincide exactly when translated, reflected, and/or rotated.
- Similar Shapes - Shapes that coincide exactly when translated, reflected, rotated and/or scaled.
1) On graph paper, draw a triangle with the points (0, 0), (0, 15) and (15, 0). Draw the triangle reflected over the Y axis (the vertical axis). Draw the triangle translated up 25 units.
2) On graph paper, draw a square with the points (0, 0), (0, 15), (15, 15) and (15, 0). When you scale this shape by any amount, what happens to the point at (0, 0)?
3) On graph paper, draw a square with the points (0, 0), (0, 6), (6, 6) and (6, 0). When you scale this shape by 2, what happens to the point at (0, 6)?
Source: http://en.wikibooks.org/wiki/Geometry/Chapter_4
The mass density or density of a material is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho). Mathematically, density is defined as mass divided by volume:
ρ = m / V,
where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is also defined as its weight per unit volume, although this quantity is more properly called specific weight.
Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure but certain chemical compounds may be denser.
Less dense fluids float on more dense fluids if they do not mix. This concept can be extended, with some care, to less dense solids floating on more dense fluids. If the average density (including any air below the waterline) of an object is less than that of water, it will float in water, and if it is more than that of water, it will sink in water.
Density is sometimes expressed by the dimensionless quantity "specific gravity" or "relative density", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a specific gravity less than one means that the substance floats in water.
The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid. This causes it to rise relative to more dense unheated material.
The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass.
In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" (Εύρηκα! Greek "I have found it"). As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment.
The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time.
From the equation for density (ρ = m / V), mass density has units of mass divided by volume. As there are many units of mass and volume covering many different magnitudes there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density. (The cubic centimeter can be alternately called a millilitre or a cc.) 1,000 kg/m3 equals one g/cm3. In industry, other larger or smaller units of mass and/or volume are often more practical and US customary units may be used. See below for a list of some of the most common units of density.
Measurement of density
The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer or dasymeter may be used, respectively. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object.
If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume the density of an inhomogeneous object at a point becomes: ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body then can be expressed as
m = ∫ ρ(r) dV, where the integral is taken over the volume of the body.
The density of granular material can be ambiguous, depending on exactly how its volume is defined, and this may cause confusion in measurement. A common example is sand: if it is gently poured into a container, the density will be low; if the same sand is then compacted, it will occupy less volume and consequently exhibit a greater density. This is because sand, like all powders and granular solids, contains a lot of air space in between individual grains. The density of the material including the air spaces is the bulk density, which differs significantly from the density of an individual grain of sand with no air included.
Changes of density
In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures.
The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10⁻⁶ bar⁻¹ (1 bar = 0.1 MPa) and a typical thermal expansivity is 10⁻⁵ K⁻¹. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius.
In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is
ρ = M P / (R T),
where M is the molar mass, P is the pressure, R is the universal gas constant, and T is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature.
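As an illustration of that formula, the short Python sketch below estimates the density of dry air near sea level; the molar mass M = 0.0289 kg/mol and the temperature are assumed values, and the result comes out close to the 1.2 kg/m³ listed for air further down.

```python
# Density of an ideal gas: rho = M * P / (R * T)
M = 0.0289   # molar mass of dry air, kg/mol (assumed value)
P = 101325   # pressure, Pa (1 atm)
R = 8.314    # universal gas constant, J/(mol*K)
T = 288.15   # temperature, K (15 degrees C, assumed)

rho = M * P / (R * T)
print(round(rho, 3), "kg/m^3")   # roughly 1.2 kg/m^3
```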
In the case of volumic thermal expansion at constant pressure and small intervals of temperature, the temperature dependence of density is:
ρ = ρ0 / (1 + α (T − T0)),
where ρ0 is the density at a reference temperature T0, and α is the thermal expansion coefficient of the material at temperatures close to T0.
Density of solutions
The mass (massic) concentration of each given component ρi in a solution sums to the density of the solution:
ρ = Σi ρi.
Expressed as a function of the densities ρi* of the pure components of the mixture and their volume participation Vi/V, it reads:
ρ = Σi ρi* (Vi / V),
provided that there is no interaction between the components.
Density of water at 1 atm pressure: tabulated by temperature (°C) in kg/m³; the values below 0 °C refer to supercooled water.

Density of air at 1 atm pressure: tabulated by temperature T (°C) in kg/m³.

Densities of various materials:

| Material | ρ (kg/m³) | Notes |
|---|---|---|
| Air | 1.2 | At sea level |
| Liquid hydrogen | 70 | At ~ -255 °C |
| Ice | 916.7 | At temperatures < 0 °C |
| Plastics | 1,175 | Approx.; for polypropylene and PETE/PVC |
| Diiodomethane | 3,325 | Liquid at room temperature |
| Interstellar medium | 1×10⁻¹⁹ | Assuming 90% H, 10% He; variable T |
| The Earth | 5,515 | Mean density |
| The inner core of the Earth | 13,000 | Approx., as listed in Earth |
| The core of the Sun | 33,000–160,000 | Approx. |
| Super-massive black hole | 9×10⁵ | Density of a 4.5-million-solar-mass black hole; event horizon radius is 13.5 million km |
| White dwarf star | 2.1×10⁹ | Approx. |
| Atomic nuclei | 2.3×10¹⁷ | Does not depend strongly on size of nucleus |
| Stellar-mass black hole | 1×10¹⁸ | Density of a 4-solar-mass black hole; event horizon radius is 12 km |

*Air excluded when calculating density.
Other common units
The SI unit for density is the kilogram per cubic metre (kg/m³).
Litres and metric tons are not part of the SI, but are acceptable for use with it, leading to the following units: kilograms per litre (kg/L), grams per millilitre (g/mL), and metric tons per cubic metre (t/m³).
Densities using the following metric units all have exactly the same numerical value, one thousandth of the value in (kg/m3). Liquid water has a density of about 1 kg/dm3, making any of these SI units numerically convenient to use as most solids and liquids have densities between 0.1 and 20 kg/dm3.
- kilograms per cubic decimetre (kg/dm3)
- grams per cubic centimetre (g/cc, gm/cc or g/cm3)
- 1 gram/cm3 = 1000 kg/m3
- megagrams (metric tons) per cubic metre (Mg/m3)
In US customary units density can be stated in:
- Avoirdupois ounces per cubic inch (oz/cu in)
- Avoirdupois pounds per cubic inch (lb/cu in)
- pounds per cubic foot (lb/cu ft)
- pounds per cubic yard (lb/cu yd)
- pounds per US liquid gallon (lb/gal)
- pounds per US bushel (lb/bu)
- slugs per cubic foot
Imperial units differing from the above (as the Imperial gallon and bushel differ from the US units) in practice are rarely used, though found in older documents. The density of precious metals could conceivably be based on Troy ounces and pounds, a possible cause of confusion.
Source: http://en.wikipedia.org/wiki/Density
Before talking about limits in calculus, one must be familiar with a few basic topics of calculus, such as functions, range, and domain. These are important because they are the basic requirements for studying limits. In calculus, the limit of a function describes the behavior of that function as its input approaches a particular value.
Now, let’s see what Mathematical Expression of limit is. General notation of Limits in Calculus is given as:
lim x→c f(x) =L,
where L is the limit of f(x) as x approaches c: f(x) becomes close to L when x is close to c, and there is no other value of L satisfying the same condition. This is written
lim x→c f(x) = L,
or equivalently
f(x) → L as x → c.
The definition of a limit is not concerned with the value of f(x) when x = c. We care only about the values of f(x) when x is close to c, on either the left side or the right side.
Now, we move on to rules regarding limits. Limits have some rules which are useful when we solve different limit problems.
First Rule: The first rule is called the constant rule. It states: if we have f(x) = b (where f is constant for all x), then the limit as x approaches c must be equal to b.
It means limit of function appears as shown below:
If b and c are constant then, our limit is lim x→c b =b.
Second Rule: Second rule is called the identity rule.
If f(x) =x,
Then, the limit of f as x approaches c is equal to c.
According to this, the function looks like:
If c is a constant then lim x→c x =c.
Here are some Operational Identities for limits, which are given below:
Suppose we have two functions whose limits exist,
lim x→c f(x) = L and lim x→c g(x) = M, and let k be a constant.
Then the following limits hold:
lim x→c kf(x) = k. lim x→c f(x) =kL
lim x→c [f(x) +g(x)] = lim x→c f(x) + lim x→c g(x) =L+M
lim x→c [f(x) -g(x)] = lim x→c f(x) - lim x→c g(x) =L-M
lim x→c [f(x)g(x)] = lim x→c f(x) lim x→c g(x) =LM
lim x→c [f(x) / g(x)] = lim x→c f(x) / lim x→c g(x) = L/M (where M is not equal to zero).
Let’s see the examples based on formulas.
Example: Find the limit of this given function
lim x→2 3x⁴?
Solution: First, note that no single rule applies directly to this expression. We know from the identity rule above that lim x→2 x = 2.
By this rule,
lim x→2 x⁴ = (lim x→2 x)⁴ = 2⁴ = 16.
By the scalar multiplication rule we then get:
lim x→2 3x⁴ = 3 lim x→2 x⁴ = 3 × 16 = 48.
Example: Solve the limit function where,
lim x→2 x³ + x² + 1?
Solution: To solve this limit we follow the steps given below.
Step 1: In the first step we write the given limit function,
lim x→2 (x³ + x² + 1).
Step 2: In the second step we evaluate the limit of each term,
lim x→2 x³ + lim x→2 x² + lim x→2 1
= 2³ + 2² + 1
= 8 + 4 + 1 = 13.
So the value of the limit of the function is 13.
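As a quick numerical illustration (not a proof), a small Python sketch can evaluate both example functions at points approaching x = 2:

```python
def f(x):
    return 3 * x ** 4           # first example: limit 48 as x -> 2

def g(x):
    return x ** 3 + x ** 2 + 1  # second example: limit 13 as x -> 2

for h in (0.1, 0.01, 0.001):
    x = 2 - h                   # approach 2 from the left
    print(x, f(x), g(x))        # values approach 48 and 13
```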
Let’s see example of limit in simple approach to infinity,
As, x=1, y=3,
And so on,
Now, x and y both approaches to infinity.
Let's have one more example in order to understand this,
2x2 will always tend towards infinity and -5x always tends towards minus infinity if, 'x' will increase where will the function tends?
It will always depend on the value of if x2 will grow more rapidly with respect to x as x increases then the function will surely tend towards the positive infinity
Now, let’s talk about the degree of the function, it can be defined as the highest power of variable for example:
In this x2 has highest power as two so degree of the function will be 2. The degree of the function can be negative or positive. If degree of the function, is greater than 0 then, limit will always be positive. If degree of the function is less than 0 then the limit will be 0.
Source: http://www.tutorcircle.com/limits-in-calculus-txOp.html
The complex numbers are a set of numbers which have important applications in the analysis of periodic, oscillatory, or wavelike phenomena. They are best known for providing square roots of negative numbers, such as √−1. Mathematicians denote the set of complex numbers with an ornate capital letter: ℂ. They are the 5th item in this hierarchy of types of numbers:
- The "natural numbers"—, 0, 1, 2, 3, ... (There is controversy about whether zero should be included. It doesn't matter.)
- The "integers"—, positive, negative, and zero
- The "rational numbers"—, or fractions, like 355/113
- The "real numbers"—, including irrational numbers
- The "complex numbers"—, which give solutions to polynomial equations
A complex number is composed of two parts, or components—a real component and an imaginary component. Each of these components is an ordinary (that is, real) number. The complex numbers form an "extension" of the real numbers: If the imaginary component of a complex number is zero, that number is exactly equal to the real number that is its real component.
The complex numbers are defined as a 2-dimensional vector space over the real numbers. That is, a complex number is an ordered pair of numbers: (a, b). The familiar real numbers constitute the complex numbers with second component zero. That is, x corresponds to (x, 0).
The second component is called the imaginary part. Its unit basis vector is called i. The first component is called the real part. Its unit basis vector is just 1. Thus, the complex number (a, b) can also be written a + bi. Numbers with real part of zero are sometimes called "pure imaginary", with the term "complex" sometimes reserved for numbers with both components nonzero.
While the "invisible" nature of the imaginary component may be disconcerting at first (and the word "imaginary" may be an unfortunate term for it), the complex numbers are just as genuine as the Dedekind cuts and Cauchy sequences that are used in the definition of the "real" numbers.
The complex numbers form a field, with the mathematical operations defined as shown below.
Arithmetic operations
The field operations are defined as follows. These operations are completely compatible with the corresponding operations on real numbers, when the complex numbers involved happen to be real.
Addition is just the standard addition on the 2-dimensional vector space. That is, add the real parts (first components), and add the imaginary parts (second components).
Letting the complex numbers w and z be defined by their respective components, w = (a, b) = a + bi and z = (c, d) = c + di, we have
w + z = (a + c, b + d) = (a + c) + (b + d)i.
Multiplication has a special definition. It is this definition that gives the complex numbers their important properties:
(a, b) · (c, d) = (ac − bd, ad + bc).
- When we multiply i by itself, that is, (0, 1) · (0, 1), this definition gives (−1, 0), so that i² = −1.
Writing out the product (a + bi)(c + di) in the obvious way, we get the same answer:
(a + bi)(c + di) = ac + adi + bci + bdi² = (ac − bd) + (ad + bc)i.
This means that, using i² = −1, one can perform arithmetic operations in a completely natural way.
Division requires a special trick. We have:
w / z = (a + bi) / (c + di).
To get the individual components, the denominator needs to be real. This can be accomplished by multiplying both numerator and denominator by c − di. We get:
(a + bi)(c − di) / ((c + di)(c − di)) = ((ac + bd) + (bc − ad)i) / (c² + d²).
Due to the miracle of the multiplication by c − di, all of the manipulations in the preceding line involve familiar ("real") arithmetic. The trick involved making the real number c² + d² from c + di. This operation is known as the complex conjugate, discussed in more detail below. The act of multiplying both numerator and denominator by the conjugate of the denominator is analogous to the operation of "rationalizing the denominator" in ordinary algebra.
This division will fail if and only if c² + d² = 0, that is, c and d are both zero, that is, the complex denominator is exactly zero (both components zero). This is exactly analogous to the rule that real division fails if the denominator is exactly zero.
"Scalar Multiplication"
When we think of the complex numbers as a 2-dimensional vector space over the reals, the question arises: what is the meaning of "scalar multiplication" on this vector space? That is, what happens if we multiply a complex number, represented as a 2-dimensional vector, by a real number. It isn't hard to see that this is nothing but complex multiplication, in which the given "scalar" has been turned into a complex number by setting its imaginary part to zero.
However, after one has developed a familiarity with complex numbers and their operations, it is best not to think of the set of complex numbers as a 2-dimensional vector space. This is because mathematicians often use vector spaces in which the underlying "scalar field", (normally the reals) is in fact the complex numbers. That is, one can have a 3-dimensional complex vector space, so that the standard "scalar multiplication" operation on that space actually involves 3 complex multiplications, one by each component of the vector. The theory of Hermitian and unitary operators applies to complex vector spaces.
How to Perform Operations on Complex Numbers
Performing complex operations is quite easy. Just do ordinary algebraic operations, treating "i" as though it were any other variable. For example, we know from elementary algebra that we can work out the addition:
(2 + 3x) + (4 + 5x) = 6 + 8x.
- This works for "i" as well:
(2 + 3i) + (4 + 5i) = 6 + 8i.
- Similarly for multiplication:
(2 + 3i)(4 + 5i) = 8 + 10i + 12i + 15i² = 8 + 22i + 15i².
- When we use the rule i² = −1, we can remove powers of i greater than 1, so
8 + 22i + 15i² = 8 + 22i − 15 = −7 + 22i.
- For division, multiply numerator and denominator by the conjugate of the denominator:
(2 + 3i) / (4 + 5i) = (2 + 3i)(4 − 5i) / ((4 + 5i)(4 − 5i)) = (23 + 2i) / 41.
Any power of i higher than 1 can be reduced by using the fact that i² = −1. For example,
i³ = i² · i = −i, and i⁴ = (i²)² = 1.
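Python's built-in complex type (which writes i as j) can be used to check arithmetic of this kind; the numbers below are just the examples used above:

```python
w = 2 + 3j
z = 4 + 5j

print(w + z)          # (6+8j)
print(w * z)          # (-7+22j)
print(w / z)          # approximately 0.561 + 0.049j, i.e. (23 + 2i)/41
print((1j) ** 2)      # (-1+0j)
print(w.conjugate())  # (2-3j)
```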
The Fundamental theorem of algebra
The complex numbers form an algebraically closed field. This means that any polynomial of degree n can be factored into n first degree (linear) polynomials. Equivalently, such a polynomial has n roots (though one has to count all multiple occurrences of repeated roots.) This statement is the Fundamental Theorem of Algebra, first proved by Carl Friedrich Gauss around 1800. The theorem is not true if the roots are required to be real. (This failure is what led to the development of complex numbers in the first place.) But when the roots are allowed to be complex, the theorem applies even to polynomials with complex coefficients.
The simplest polynomial with no real roots is x² + 1, since −1 has no real square root. But if we look for roots of the form a + bi, we have:
(a + bi)² + 1 = (a² − b² + 1) + 2abi = 0.
For the imaginary part to be zero, one of a or b must be zero. If a is not zero, then b must be zero, and then the real part a² + 1 cannot vanish, so there is no solution. So a must be zero and b² must be 1. This means that b = ±1, so the roots are (0, 1) and (0, −1), that is, i and −i.
These two numbers, i and −i, are the square roots of −1.
Similar analysis shows that, for example, 1 has three cube roots:
1, −1/2 + (√3/2)i, and −1/2 − (√3/2)i.
One can verify that
(−1/2 + (√3/2)i)³ = 1, and likewise for the other non-real root.
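A short Python sketch confirms this numerically by cubing each listed root (up to floating-point error):

```python
roots = [1 + 0j, -0.5 + (3 ** 0.5) / 2 * 1j, -0.5 - (3 ** 0.5) / 2 * 1j]

for r in roots:
    print(r, r ** 3)   # each cube equals 1, up to tiny floating-point error
```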
For a given field, the field containing it (strictly speaking, the smallest such field) that is algebraically closed is called its algebraic closure. The field of real numbers is not algebraically closed; its closure is the field of complex numbers.
Polar coordinates, modulus, and phase
Complex numbers are often depicted in 2-dimensional Cartesian analytic geometry; this is called the complex plane. The real part is the x-coordinate, and the imaginary part is the y-coordinate. When these points are analyzed in polar coordinates, some very interesting properties become apparent. The representation of complex numbers in this way is called an Argand diagram.
If a complex number is represented as a + bi, its polar coordinates are given by:
r = √(a² + b²),  tan θ = b / a (with θ taken in the correct quadrant).
Transforming the other way, we have:
a = r cos θ,  b = r sin θ.
The radial distance from the origin, r, is called the modulus. It is the complex equivalent of the absolute value for real numbers. It is zero if and only if the complex number is zero.
The angle, θ, is called the phase. (Some older books refer to it as the argument.) It is zero for positive real numbers, and π radians (180 degrees) for negative ones.
The multiplication of complex numbers takes a particularly interesting and useful form when represented this way. If two complex numbers z1 = r1(cos θ1 + i sin θ1) and z2 = r2(cos θ2 + i sin θ2) are represented in modulus/phase form, we have:
z1 z2 = r1 r2 [(cos θ1 cos θ2 − sin θ1 sin θ2) + i (sin θ1 cos θ2 + cos θ1 sin θ2)] = r1 r2 [cos(θ1 + θ2) + i sin(θ1 + θ2)].
But that is just the addition rule for sines and cosines!
So the rule for multiplying complex numbers on the Argand diagram is just:
- Multiply the moduli.
- Add the phases.
One can use this property to find all of the nth roots of a number geometrically. For example, by using these trigonometric formulas:
cos 120° = −1/2, sin 120° = √3/2, cos 240° = −1/2, sin 240° = −√3/2.
The three cube roots of 1, listed above, can all be seen to have moduli of 1 and phases of 0 degrees, 120 degrees, and 240 degrees respectively. When raised to the third power, the phases are tripled, obtaining 0, 360, and 720 degrees. But they are all the same angle—zero. So the cubes of these numbers are all just one. Similarly, and have phases of 90 degrees and 270 degrees. When those numbers are squared, the phases are 180 and 540, both of which are the same angle—180 degrees. So their squares are both -1.
One can apply this property to the nth roots of any complex number. They lie equally spaced on a circle. This is DeMoivre's theorem.
Euler's formula
A careful analysis of the power series for the exponential, sine, and cosine functions reveals the marvelous Euler formula:
e^(iθ) = cos θ + i sin θ,
of which there is the famous case (for θ = π):
e^(iπ) = −1.
The complex conjugate, or just conjugate, of a complex number is the result of negating its imaginary part: the conjugate of a + bi is a − bi. The conjugate is written with a bar over the quantity: z̄. All real numbers are their own conjugates.
All arithmetic operations work naturally with conjugates—the sum of the conjugates is the conjugate of the sum, and so on.
It follows that, if P is a polynomial with real coefficients (so that its coefficients are their own conjugates), then P(z̄) is the conjugate of P(z).
If z is a root of a real polynomial, then, since zero is its own conjugate, z̄ is also a root. This is often expressed as "Non-real roots of real polynomials come in conjugate pairs." We saw that above for the cube roots of 1—two of the roots are complex and are conjugates of each other. The third root is its own conjugate.
Transcendental functions
The higher mathematical functions (often called "transcendental functions"), like exponential, log, sine, cosine, etc., can be defined in terms of power series (Taylor series). They can be extended to handle complex arguments in the completely natural way, so these functions are defined over the complex plane. They are in fact "complex analytic functions". Just about any normal function one can think of can be extended to the complex numbers, and is complex analytic. Since the power series coefficients of the common functions are real, they work naturally with conjugates. For example: the conjugate of exp(z) is exp(z̄), and similarly for sin and cos.
Complex functions
The general study of functions that take a complex argument and return a complex result is an extremely rich and useful area of mathematics, known as complex analysis. When such a function is differentiable, it is called a complex analytic function, or just analytic function.
What Does "Algebraically Closed" Mean?
The complex numbers were invented in order to solve the problem of finding roots of polynomials that have real coefficients. But how about polynomials with coefficients that are themselves complex? Is the set of complex numbers big enough to contain the roots of those? One could imagine a situation in which the answer would be "no", and we might have to invent some new set, say "hypercomplex numbers". And, to get roots of polynomials that have hypercomplex coefficients, we might have to invent "superdupercomplex numbers", and so on.
That unpleasant scenario didn't happen. The complex numbers are sufficient for roots of all polynomials, including those with complex coefficients. We say that the complex numbers are closed under the operation of finding roots of polynomials, or just algebraically closed.
Mathematicians call a set "closed" under some operation if that operation, applied to elements of the set, yields a result also in the set. So we can say that:
- The natural numbers are closed under addition and multiplication.
- The integers are also closed under subtraction.
- The rationals are also closed under division (by a nonzero divisor.)
- The reals are also closed under the operation of finding least upper bounds, or the operation of finding limits of convergent Cauchy sequences.
- The complex numbers are also closed under the operation of finding roots of polynomials.
It may seem improbable that an artificial construction like this, which does not arise in ordinary measurements, has extensive applications. But complex numbers in fact are extremely important in many areas of pure and applied mathematics and physics. Basically, any description of oscillatory phenomena can benefit from a formulation in terms of complex numbers.
The applications include these:
- Theoretical mathematics:
- algebra (including finding roots of polynomials)
- linear algebra—vector spaces, inner products, Hermitian and unitary operators, Hilbert spaces, etc.
- eigenvalue/eigenvector problems
- number theory—This seems improbable, but it is true. The Riemann zeta function, and the Riemann hypothesis, are very important in number theory. For a long time, the only proofs of the prime number theorem used complex numbers. The Wiles/Taylor proof of Fermat's Last Theorem uses modular forms, which use complex numbers.
- Applied mathematics
- eigenvalue/eigenvector problems
- Fourier and Laplace transforms
- linear differential equations
- stability theory
- Electrical engineering
- alternating current circuit analysis
- filter design
- antenna design
- Theoretical physics
- quantum mechanics
- fundamental particle physics
- electromagnetic radiation
The quadratic formula
The need for complex numbers arises from attempts to solve quadratic equations. Some of these equations, such as x² + 1 = 0, are "impossible". (The solutions, in this case, are the "impossible" square roots of −1.)
The solutions of the quadratic equation ax² + bx + c = 0 are given by the quadratic formula:
x = (−b ± √(b² − 4ac)) / (2a).
The case b² < 4ac is the troublesome case. But with complex numbers, the two square roots of the negative number b² − 4ac exist, and the formula gives two complex roots, which are conjugates of each other.
Rectangular Form
If we let the complex number z = x + iy, then the following information holds true:
If y = 0 then z = x. That is, z is a real number, and can be called 'pure real'. Conversely, if x = 0 then z = iy, and z is 'pure imaginary'. We can refer specifically to the real and imaginary parts (x and y) of z respectively as follows: Re(z) = x and Im(z) = y.
Whilst the real numbers are readily visualised by considering a straight "number line", complex numbers are best seen as being positions on a plane. This plane would have one axis considered to be the real axis, the equivalent to the x axis on the cartesian plane (where y = 0), and the other an imaginary axis, the equivalent to the y axis on the cartesian plane (where x = 0).
Any number that is a multiple of i is an imaginary number.
But, if x is a real number and iy is an imaginary number, what does their sum equal?
- So we say z = x + iy,
so z is a complex number, that is, a number that has a real part and an imaginary part.
Polar Notation
We have already seen the cartesian representation of complex numbers, expressed in terms of real and imaginary parts. Alternatively, we can represent a complex number by a magnitude and a direction, r and θ. In this representation a complex number z can be represented as,
z = r(cos θ + i sin θ).
This also allows one to write,
z = r e^(iθ), using Euler's formula.
Note: To represent a complex number graphically, simply draw a vector on the X–Y plane with component r cos θ along the X axis and component r sin θ along the Y axis.
Complex Plane
It is useful to be able to represent complex numbers geometrically, just as we are able to represent real numbers geometrically on the number line. Moreover, we don’t need any genuinely new ideas to do this.
We know that each pair of real numbers (x, y) corresponds uniquely to a point P(x, y) in the cartesian plane.
To represent complex numbers geometrically, we use the same representation: a complex number z = x + iy is represented by the point P(x, y) in the plane.
When the plane is used to represent complex numbers in this way, it is called the Complex Plane or the Argand Diagram.
Source: http://en.wikiversity.org/wiki/Complex_numbers
Algorithms and Data Structures
What is an Algorithm?
An Algorithm itself can be described as a solution to a problem by following a set of steps. Other algorithm-based solutions have such names as:
- Recipes, e.g. how to bake a cake.
- Directions for traveling from point A to point B.
- Tutorial, how to do something.
A computer system is itself running a giant algorithm, executing instructions one after another at such a fast rate that it is not experienced as an algorithm.
Pseudocode can (and should) be used to describe a particular solution. Pseudocode is simply a human-readable description of the algorithm's steps.
Growth of Functions
When we have an asymptotic upper bound of a function we use O-notation. The definition is: f(n) = O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
Thus, to prove that f(n) = O(g(n)), one must find constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
The relation between the sizes of asymptotic functions is
c < log(n) < log(n)^c < n^k < n < n^c < c^n < n!
where c is a constant greater than 1, k is a constant smaller than 1, and n is the input size.
log(8n) < n^3
n^7 < 2^n
In any asymptotic function, only the most important part of the function is examined. The constants and less important parts are completely disregarded.
For instance, the functions 3n^2 + 100n and n^2 grow at the same asymptotic rate: both are Θ(n^2).
It is important to note that the bounding functions are just that - bounding. Unless the function is a specifically tight bound, it will generally not matter how tight the bound is.
For instance, this means that n is O(n!), as well as O(n). It is generally preferable to use bounds that are as tight as possible.
Broadly speaking, the asymptotic notations correspond to the following comparison:
|f(n) = O(g(n))||f(n) ≤ g(n)|
|f(n) = Ω(g(n))||f(n) ≥ g(n)|
|f(n) = Θ(g(n))||f(n) = g(n)|
|f(n) = o(g(n))||f(n) < g(n)|
|f(n) = ω(g(n))||f(n) > g(n)|
Asymptotic Notations For Loops
Heapsort is a sorting algorithm that relies on the heap structure. The idea is that it is possible to easily store a binary tree structure in a one-dimensional array.
A binary tree is a tree where each node has 0, 1 or 2 children. The node with no parent is called the root of the tree, and the nodes with no children are called the leaves of the tree.
Navigating the tree:
If the first element of the array is referred to as element 0, then, as a general rule, the children c1 and c2 of a parent node p contained within the array can be found by the formulae:
element#(of-Child-1)=(element#(of-parent))*2 + 1
element#(of-Child-2)=(element#(of-parent))*2 + 2
Where element#(x) is the element number of the item x contained in the array.
It is crucial to understand how a computer navigates a binary tree if one is to write tree-based algorithms. Heapsort is a tree based algorithm.
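A small Python sketch of these index formulas, using a list as the one-dimensional array (the letters stored in it are arbitrary):

```python
def children(parent_index):
    """Indices of the two children of the node stored at parent_index."""
    return 2 * parent_index + 1, 2 * parent_index + 2

def parent(child_index):
    """Index of the parent of the node stored at child_index (the root has no parent)."""
    return (child_index - 1) // 2

tree = ['A', 'B', 'C', 'D', 'E', 'F', 'G']   # a complete binary tree stored in an array

print(children(0))   # (1, 2): 'B' and 'C' are the children of the root 'A'
print(parent(5))     # 2: 'F' is a child of 'C'
```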
The concept at work behind heaps is that a clear relationship exists between a parent node and the children nodes that belong to it. Heaps are either maxheaps or minheaps - that is, either the value the children are sorted by is smaller than that of their parent (maxheap), or it is larger than that of their parent (minheap). A minheap has the lowest value at its root; a maxheap has the largest value at its root.
The heap data structure must be built and maintained when you are given a data set, if you are to run heap-structure-based algorithms on it. The most interesting algorithm, to us, is the heapsort algorithm, as most of the other algorithms only serve to give us access to basic functions such as insertion and deletion, and to make sure the heap is maintained upon completing insertion or deletion.
There are two maintenance operations: bubble up and bubble down.
Bubble up is a recursive procedure that is run on a child node which could violate the heap property with respect to its parent node. It examines the child and the parent, and if the property is wrong (e.g., for a maxheap, the child is larger than the parent; for a minheap, the child is smaller than the parent), it swaps the child and parent and then calls the same procedure on the parent, and otherwise does nothing.
Bubble down is the opposite, but with one additional thing to take into account; bubble up only has to compare with the one parent, but bubble down has to compare with both children, to ensure that the child that becomes the new parent does not break the property between itself and its former sibling. This is taken care of, in practice, by always swapping the parent with the largest child if the algorithm is implemented in a maxheap, and always with the smallest child if it is implemented in a minheap.
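Here is a rough Python sketch of the two maintenance operations for a maxheap stored in a list; it is a simplified illustration of the description above, not a complete heap implementation:

```python
def bubble_up(heap, i):
    """Move heap[i] up while it is larger than its parent (maxheap property)."""
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] <= heap[parent]:
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

def bubble_down(heap, i):
    """Move heap[i] down, always swapping with the larger child."""
    n = len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            break
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

h = [2, 9, 8, 1, 7, 3]      # violates the maxheap property at the root
bubble_down(h, 0)
print(h)                    # [9, 7, 8, 1, 2, 3]
```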
The principal advantages of heapsort are that it is not expensive to build storage-wise, and that it always sorts in O(n log(n)) time. It typically remains slower than quicksort and mergesort in the average case, though. While quicksort is often faster, some inputs can make quicksort O(n²), so heapsort is often used in real-time systems where having it take nearly the exact same time every time for n inputs is more important than decreasing the average case sort time.
Expected time is the average-case time.
Sorting in Linear Time
|Counting sort||Θ(k + n)||?||?||Θ(k + n) (because k = O(n))||No||Yes||No|
|Radix sort||Θ(d(k + n))||?||?||?||No*||Yes||No*|
k is the maximum domain for n
d is the number of digits
* means that it depends on the internal sorting algorithm used
See Hash Table for an expanded article.
Hash tables work by indexing the given value by computing a "key" or hash value, and then addressing it at that position. Most often, you take the value modulo the size of the hash table (which is denoted "m") to figure out where the key should be positioned, as this will never allow the index to exceed the table. Example:
If the length of the hash table is 17, the positions vary from 0 to 16. The function to determine the position would normally be h(k) = k mod 17. This example only works for integers, of course.
Functions for finding a hash adress based upon a specific key vary a lot. The basic idea is that you want a function which provides a good spread of hashes across the set of keys you want to adress.
A good spread is generally attained by using a modulus function, because the modulus function limits the maximum value of the output hash number, but several other things may factor into deciding which hash function to use. If you know that the set of keys you are operating with conform to certain general tendencies, you may want to design a hash function which suits your needs. If, for instance, all your keys are numerically even numbers, it is easy to accept that the hash function ((n / 2) mod p) (where p is a prime number of a suitable size) gives a spread where the set is distributed tighter than the function (n mod p).
Similarly, the prime number you choose to mod against can be chosen differently depending upon how efficient you want the hash table to be in various areas, primarily the time of the insertions, deletes, and lookups, contra the size of the data structure.
Designing excellent hashing functions is hard to do for an inexperienced or even moderately seasoned programmer without some degree of practical testing, but designing an acceptable hashing function is luckily far easier, and can, in practice, be undertaken if one is aware of some generally applicable guidelines. Going into detail with such guidelines seems to be beyond the scope of this exam, however.
The idea of linear probing is simply to move to the next slot in the table: if slot 5 is taken, we add i (which has increased to 1); if that slot is taken too, we increase i to 2, and so on. The probe sequence is
h(k, i) = (h'(k) + i) mod m,
where i = 0, 1, ..., m - 1, k is the key, and h'(k) is an ordinary hash function such as k mod m.
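A bare-bones Python sketch of insertion with linear probing (illustrative only; the table size 17 matches the earlier example, and lookups and deletions are omitted):

```python
m = 17
table = [None] * m

def h(k, i):
    return (k % m + i) % m      # linear probing with h'(k) = k mod m

def insert(k):
    for i in range(m):
        slot = h(k, i)
        if table[slot] is None:
            table[slot] = k
            return slot
    raise RuntimeError("hash table is full")

print(insert(5))    # 5
print(insert(22))   # 22 mod 17 = 5 is taken, so it goes into slot 6
```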
Quadratic probing follows the same principle as linear probing, but this time we incorporate a quadratic function to avoid a long run of occupied slots around one point. For instance, if 6 is taken, as well as 7, 8, 9 and 10, linear probing would take quite long in the long run for more values, whereas a quadratic function makes a bigger "hole", so to speak. The probe sequence is
h(k, i) = (h'(k) + c1·i + c2·i²) mod m,
where i = 0, 1, ..., m - 1, k is the key, and c1 and c2 are auxiliary constants.
Double hashing uses a second hash function to determine the probe step:
h(k, i) = (h1(k) + i·h2(k)) mod m,
where i = 0, 1, ..., m - 1, k is the key, and h1 and h2 are auxiliary hash functions, e.g. h1(k) = k mod m and h2(k) = 1 + (k mod m'), with m' slightly less than m.
Binary Search Trees
- For each node, y, in the left subtree of each node, x, it holds that key[y] ≤ key[x].
- For each node, y, in the right subtree of each node, x, it holds that key[y] ≥ key[x].
Pre-order (prefix) traversal
pre-order(node)
    print node.value
    if node.left != null then pre-order(node.left)
    if node.right != null then pre-order(node.right)
This recursive algorithm prints the values in the tree in pre-order. In pre-order, each node is visited before any of its children. Similarly, if the print statement were last, each node would be visited after all of its children, and the values would be printed in post-order. In both cases, values in the left subtree are printed before values in the right subtree.
Post-order (postfix) traversal
post-order(node)
    if node.left != null then post-order(node.left)
    if node.right != null then post-order(node.right)
    print node.value
In-order (infix) traversal
in-order(node)
    if node.left != null then in-order(node.left)
    print node.value
    if node.right != null then in-order(node.right)
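The same three traversals as a runnable Python sketch on a small example tree (this tree is an arbitrary illustration, not the larger tree whose traversal orders are listed below):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def pre_order(node):
    if node is None:
        return []
    return [node.value] + pre_order(node.left) + pre_order(node.right)

def in_order(node):
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

def post_order(node):
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.value]

root = Node('A', Node('B', Node('D'), Node('E')), Node('C'))

print(pre_order(root))    # ['A', 'B', 'D', 'E', 'C']
print(in_order(root))     # ['D', 'B', 'E', 'A', 'C']
print(post_order(root))   # ['D', 'E', 'B', 'C', 'A']
```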
- Preorder (NLR) traversal yields: A, H, G, I, F, E, B, C, D
- Postorder (LRN) traversal yields: G, F, E, I, H, D, C, B, A
- In-order (LNR) traversal yields: G, H, F, I, E, A, B, D, C
- Level-order traversal yields: A, H, B, G, I, C, F, E, D
- A node is either red or black.
- The root is black.
- All leaves are black.
- Both children of every red node are black.
- All paths from any given node to its leaf nodes contain the same number of black nodes.
All operations on red-black trees takes O(lg(n)) time.
The maximum height of a tree in an n-element union-find structure built with union-by-size is O(lg(n)).
A loop invariant must hold true at:
- Initialization: before the first iteration of the loop.
- Maintenance: before and after each loop iteration.
- Termination: when the loop terminates.
Source: http://content.gpwiki.org/index.php/Algorithms_and_Data_Structures
UBC Calculus Online Course Notes
Let's begin by making a few observations about the functions sin(t) and cos(t). From now on, you will hopefully think of these functions as the y and x coordinates of a point moving around and around a circle.
For your consideration:
- What is the radius of the circle if its x and y coordinates are given by x = cos(t) and y = sin(t)?
- How many revolutions are made around the circle per unit time? (Note: a revolution occurs every multiple of 2π.)
- Which of the two functions has a "flat" portion of its graph at t = 0 (remember that flat means slope zero)?
- Where does each of the functions have maximum values? What are those maximum values? (For example, cos(t) has a maximum value of 1 at t = 0 and where else?)
- Carefully observe the graph of sin(t). How does the slope of the curve change as t increases? Sketch a plot which would show how the slope of the curve changes as t increases. (Use only the information in the graphs.)
As we have done previously, we can try to sketch the graphs of the derivatives of these two functions just using information about where the functions are increasing and decreasing. If we do that, we find pictures like this:
What do you notice about these graphs? Hopefully, the remarkable fact that the derivative of the sine function resembles the cosine function. Also the derivative of the cosine function seems related to the sine function. We will use a calculation to verify this relationship below. Later, we will see that some interesting phenomena arise because of the fact that the derivative of a trigonometric function is another trigonometric function.
The derivative of the sine function
We will begin by determining the derivative at just one point, t = 0. Notice that the derivative of sin(t) at t = 0 is
lim h→0 (sin(0 + h) − sin(0)) / h = lim h→0 sin(h) / h.
Now the problem is that this is a difficult limit to evaluate. The sine function is complicated to compute and here we are asking for a delicate limit: it does not work to simply substitute h = 0 into the expression because it would then be undefined.
We can, however, try to understand this limit another way. This limit represents the derivative of the sine function at t = 0. We could try to zoom in on the graph of the sine function and find out what its slope is at that point.
This shows us that the derivative of the sine function is 1 at t = 0. In other words,
lim h→0 sin(h) / h = 1.
While we're at it, we can also compute the same quantity for the cosine function.
To determine this derivative, we can again zoom in on the graph:
In other words,
lim h→0 (cos(h) − 1) / h = 0.
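Both limits can be checked numerically; a small Python sketch (with h in radians) prints values approaching 1 and 0 respectively:

```python
import math

for h in (0.1, 0.01, 0.001):
    print(h, math.sin(h) / h, (math.cos(h) - 1) / h)
# sin(h)/h tends to 1, and (cos(h) - 1)/h tends to 0, as h tends to 0
```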
Now we are in good position to compute the derivative of the sine function. The crucial ingredient is the angle addition formula, sin(t + h) = sin(t) cos(h) + cos(t) sin(h):
d/dt sin(t) = lim h→0 (sin(t + h) − sin(t)) / h
= lim h→0 (sin(t) cos(h) + cos(t) sin(h) − sin(t)) / h
= sin(t) · lim h→0 (cos(h) − 1) / h + cos(t) · lim h→0 sin(h) / h
= sin(t) · 0 + cos(t) · 1
= cos(t).
Here we have verified our observation that the derivative of the sine function is related to the cosine function. In fact, we have found that it is exactly equal to the cosine function.
To determine the derivative of the cosine function, we can remember that
cos(t) = sin(t + π/2).
Then it follows that
d/dt cos(t) = d/dt sin(t + π/2) = cos(t + π/2) = −sin(t),
since cos(t + π/2) = −sin(t).
This again verifies the intuition we gained from roughly sketching the derivative above. We conclude that
d/dt sin(t) = cos(t) and d/dt cos(t) = −sin(t).
Other derivatives
From these two derivatives, we can compute the derivatives for the other trigonometric functions using our now standard tools.
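For instance, writing tan(t) = sin(t)/cos(t) and applying the quotient rule gives the following short computation (a sketch, assuming the quotient rule):

```latex
\frac{d}{dt}\tan t
  = \frac{\cos t \cdot \frac{d}{dt}\sin t \;-\; \sin t \cdot \frac{d}{dt}\cos t}{\cos^2 t}
  = \frac{\cos^2 t + \sin^2 t}{\cos^2 t}
  = \sec^2 t .
```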
The importance of radians
It is important to note that our use of radians here is crucial. If we were to measure the argument of the sine function in, say, degrees, we would find a different result. The following graphs make this clear. On the left is the graph of the sine function in radians while on the right it is graphed in degrees. Notice that the rate of change of the two graphs is much different. In calculus, we use radians because they give such a nice result for the derivative of the sine and cosine function.
Source: http://www.ugrad.math.ubc.ca/coursedoc/math100/notes/derivative/trig2.html
Topics covered: Sketching Solutions of 2x2 Homogeneous Linear System with Constant Coefficients
Instructor/speaker: Prof. Arthur Mattuck
As a matter of fact, it plots them very accurately. But it is something you also need to learn to do yourself, as you will see when we study nonlinear equations. It is a skill. And since a couple of important mathematical ideas are involved in it, I think it is a very good thing to spend just a little time on, one lecture in fact, plus a little more on the problem set that I will give out. The last problem set that I will give out on Friday. I thought it might be a little more fun to, again, have a simple-minded model. No romance this time. We are going to have a little model of war, but I have made it sort of sublimated war. Let's take as the system, I am going to let two of those be parameters, you know, be variable, in other words.
And the other two I will keep fixed, so that you can concentrate on them better. I will take a and d to be negative 1 and negative 3. And the other ones we will leave open, so let's call this one b times y, and this other one will be c times x. I am going to model this as a fight between two states, both of which are trying to attract tourists. Let's say this is Massachusetts and this will be New Hampshire, its enemy to the North.
Both are busy advertising these days on television. People are making their summer plans. Come to New Hampshire, you know, New Hampshire has mountains and Massachusetts has quaint little fishing villages and stuff like that. So what are these numbers? Well, first of all, what do x and y represent? x and y basically are the advertising budgets for tourism, you know, the amount each state plans to spend during the year. However, I do not want zero value to mean they are not spending anything. It represents departure from the normal equilibrium. x and y represent departures --
-- from the normal amount of money they spend advertising for tourists. The normal tourist advertising budget. If they are both zero, it means that both states are spending what they normally spend in that year. If x is positive, it means that Massachusetts has decided to spend more in the hope of attracting more tourists and if negative spending less. What is the significance of these two coefficients? Those are the normal things which return you to equilibrium. In other words, if x gets bigger than normal, if Massachusetts spends more there is a certain pull to spend less because we are wasting all this money on the tourists that are not going to come when we could be spending it on education or something like that.
If x gets to be negative, the governor tries to spend less. Then all the local city Chamber of Commerce rise up and start screaming that our economy is going to go bankrupt because we won't get enough tourists and that is because you are not spending enough money. There is a push to always return it to the normal, and that is what this negative sign means. The same thing for New Hampshire. What does it mean that this is negative three and that is negative one?
It just means that the Chamber of Commerce yells three times as loudly in New Hampshire. It is more sensitive, in other words, to changes in the budget. Now, how about the other? Well, these represent the war-like features of the situation. Normally these will be positive numbers. Because when Massachusetts sees that New Hampshire has budgeted this year more than its normal amount, the natural instinct is we are fighting. This is war. This is a positive number. We have to budget more, too. And the same thing for New Hampshire. The size of these coefficients gives you the magnitude of the reaction. If they are small Massachusetts say, well, they are spending more but we don't have to follow them.
We will budge a little bit. If it is a big number, then oh, my God, heads will roll. We have to triple them and put them out of business. This is a model, in fact, for all sorts of competition. It was used for many years to model, in simpler times, armaments races between countries. It is certainly a simple-minded model for any two companies in competition with each other if certain conditions are met.
Well, what I would like to do now is try different values of those numbers. And, in each case, show you how to sketch the solutions at different cases. And then, for each different case, we will try to interpret if it makes sense or not. My first set of numbers is, the first case is -- -- x' = -x + 2y. And y' = -3y.
Now, what does this mean? Well, this means that Massachusetts is behaving normally, but New Hampshire is a very placid state, and the governor is busy doing other things. And people say Massachusetts is spending more this year, and the Governor says, so what. The zero is the so what factor. In other words, we are not going to respond to them. We will do our own thing. What is the result of this? Is Massachusetts going to win out? What is going to be the ultimate effect on the budget? Well, what we have to do is, so the program is first let's quickly solve the equations using a standard technique. I am just going to make marks on the board and trust to the fact that you have done enough of this yourself by now that you know what the marks mean.
I am not going to label what everything is. I am just going to trust to luck. The matrix A = [-1, 2; 0, -3]. The characteristic equation is lambda^2 + 4 lambda + 3 = 0: the second coefficient is the trace, which is minus 4, but you have to change its sign, so that makes it plus 4. And the constant term is the determinant, which is 3 minus 0, so that is plus 3. This factors into (lambda + 3)(lambda + 1). And it means the roots therefore are, one root is lambda equals negative 3 and the other root is lambda equals negative 1. These are the eigenvalues. With each eigenvalue goes an eigenvector. The eigenvector is found by solving an equation for the coefficients of the eigenvector, the components of the eigenvector.
Here I used negative 1 minus negative 3, which makes 2. The first equation is 2a1 plus 2a2 is equal to zero. The second one will be, in fact, in this case simply 0a1 plus 0a2 so it won't give me any information at all. That is not what usually happens, but it is what happens in this case. What is the solution? The solution is the vector alpha equals, well, (1; -1) would be a good thing to use. That is the eigenvector, so this is the e-vector. How about lambda equals negative 1? Let's give it a little more room. If lambda is negative 1 then here I put negative 1 minus negative 1. That makes zero.
I will write in the zero because this is confusing. It is zero times a1. And the next coefficient is 2 a2, is zero. People sometimes go bananas over this, in spite of the fact that this is the easiest possible case you can get. I guess if they go bananas over it, it proves it is not all that easy, but it is easy. What now is the eigenvector that goes with this? Well, this term isn't there. It is zero.
The equation says that a2 has to be zero. And it doesn't say anything about a1, so let's make it 1. Now, out of this data, the final step is to make the general solution. What is it? (x, y) equals, well, a constant times the first normal mode. The solution constructed from the eigenvalue and the eigenvector. That is going to be (1, -1) e^(-3t). And then the other normal mode times an arbitrary constant will be (1, 0) e^(-t). The lambda is this factor which produces that, of course. Now, one way of looking at it is, first of all, get clearly in your head this is a pair of parametric equations just like what you studied in 18.02.
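A quick machine check of that hand computation, assuming NumPy is available. Eigenvectors are only determined up to a scalar multiple, so NumPy's unit-length columns are rescaled below to compare with (1, -1) and (1, 0).

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # -1 and -3 (possibly in a different order)

# rescale each eigenvector so its largest entry is 1, for easy comparison
for lam, vec in zip(eigenvalues, eigenvectors.T):
    print(lam, vec / vec[np.argmax(np.abs(vec))])   # (1, 0) goes with -1, (1, -1) with -3
```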
Let's write them out explicitly just this once. x = c1 e^(-3t) + c2 e^(-t). And what is y? y = -c1 e^(-3t). I can stop there. In some sense, all I am asking you to do is plot that curve. In the x,y-plane, plot the curve given by this pair of parametric equations. And you can choose your own values of c1, c2. For different values of c1 and c2 there will be different curves. Give me a feeling for what they all look like. Well, I think most of you will recognize you didn't have stuff like this. These weren't the kind of curves you plotted.
When you did parametric equations in 18.02, you did stuff like x = cos(t), y = sin(t). Everybody knows how to do that. A few other curves which made lines or nice things, but nothing that ever looked like that. And so the computer will plot it by actually calculating values but, of course, we will not. That is the significance of the word sketch. I am not asking you to plot carefully, but to give me some general geometric picture of what all these curves look like without doing any work.
Without doing any work. Well, that sounds promising. Okay, let's try to do it without doing any work. Where shall I begin? Hidden in this formula are four solutions that are extremely easy to plot. So begin with the four easy solutions, and then fill in the rest. Now, which are the easy solutions? The easy solutions are c1 equals plus or minus 1, c2 equals zero; or c1 equals zero, c2 equals plus or minus 1.
By choosing those four values of c1 and c2, I get simple solutions corresponding to the normal modes. If c1 is one and c2 is zero, I am talking about (1, -1) e^(-3t), and that is very easy to plot. Let's start plotting them. What I am going to do is color-code them so you will be able to recognize what it is I am plotting. Let's see. What colors should we use? We will use pink and orange. This will be our pink solution and our orange solution will be this one. Let's plot the pink solution first. The pink solution corresponds to c1 = 1 and c2 = 0. Now, that solution looks like --
Let's write it in pink. No, let's not write it in pink. What is the solution? It looks like x = e^(-3t), y = -e^(-3t). Well, that's not a good way to look at it, actually. The best way to look at it is to say at t equals zero, where is it? It is at the point (1, -1). And what is it doing as t increases? Well, it keeps the direction, but travels. The amplitude, the distance from the origin keeps shrinking. As t increases, this factor, so it is the tip of this vector, except the vector is shrinking. It is still in the direction of (1, -1), but it is shrinking in length because its amplitude is shrinking according to the law e^(-3t).
In other words, this curve looks like this. At t equals zero it is over here, and it goes along this diagonal line until, as t goes to infinity, it reaches the origin. Of course, it never gets there. It goes slower and slower and slower in order that it may never reach the origin. What was it doing for values of t less than zero? The same thing, except it was further away. It comes in from infinity along that straight line. In other words, the eigenvector determines the line on which it travels and the eigenvalue determines which way it goes. If the eigenvalue is negative, it is approaching the origin as t increases. How about the other one? Well, if c1 = -1, then everything is the same except it is the mirror image of this one.
If c1 is negative 1, then at t equals zero it is at this point. And, once again, the same reasoning shows that it is coming into the origin as t increases. I have now two solutions, this one corresponding to c1 = 1, and the other one c2 = 0. This one corresponds to c1 equals negative 1. How about the other guy, the orange guy? Well, now c1 is zero, c2 is one, let's say.
It is the vector (1, 0), but otherwise everything is the same. I start now at the point (1, 0) at time zero. And, as t increases, I come into the origin always along that direction. And before that I came in from infinity. And, again, that was c2 = 1; if c2 = -1, I do the same thing but on the other side. That wasn't very hard. I plotted four solutions. And now I roll up my sleeves and wave my hands to try to get others. The general philosophy is the following. The general philosophy is the differential equation looks like this. It is a system of differential equations. These are continuous functions.
That means when I draw the velocity field corresponding to that system of differential equations, because the functions are continuous, as I move from one (x, y) point to another the direction of the velocity vectors changes continuously. It never suddenly reverses or anything like that. Now, if that changes continuously then the trajectories must change continuously, too. In other words, nearby trajectories should be doing approximately the same thing. Well, that means all the other trajectories, the ones which come in like that, must also be going toward the origin. If I start here, probably I have to follow this one. They are all coming to the origin, but that is a little too vague. How do they come to the origin? In other words, are they coming in straight like that? Probably not. Then what are they doing?
Now we are coming to the only point in the lecture which you might find a little difficult. Try to follow what I am doing now. If you don't follow, it is not well done in the textbook, but it is very well done in the notes because I wrote them myself. Please, it is done very carefully in the notes, patiently follow through the explanation. It takes about that much space. It is one of the important ideas that your engineering professors will expect you to understand.
Anyway, I know this only from the negative one because they say to me at lunch, ruin my lunch by saying I said it to my students and got nothing but blank looks. What do you guys teach them over there? Blah, blah, blah. Maybe we ought to start teaching it ourselves. Sure. Why don't they start cutting their own hair, too? Here is the idea. Let me recopy that solution. The solution looks like c1 (1, -1) e^(-3t) + c2 (1, 0) e^(-t).
What I ask is as t goes to infinity, I feel sure that the trajectories must be coming into the origin because these guys are doing that. And, in fact, that is confirmed. As t goes to infinity, this goes to zero and that goes to zero regardless of what the c1 and c2 are. That makes it clear that this goes to zero no matter what the c1 and c2 are as t goes to infinity, but I would like to analyze it a little more carefully.
As t goes to infinity, I have the sum of two terms. And what I ask is, which term is dominant? Of these two terms, are they of equal importance, or is one more important than the other? When t is 10, for example, that is not very far on the way to infinity, but it is certainly far enough to illustrate. Well, e^(-10) is an extremely small number. The only thing smaller is e^(-30). The term that dominates, they are both small, but relatively-speaking this one is much larger because this one only has the factor e^(-10), whereas, this has the factor e^(-30), which is vanishingly small. In other words, as t goes to infinity --
Well, let's write it the other way. This is the dominant term, as t goes to infinity. Now, just the opposite is true as t goes to minus infinity. t going to minus infinity means I am backing up along these curves. As t goes to minus infinity, let's say t gets to be -100, this is e^(100), but this is e^(300), which is much, much bigger. So this is the dominant term as t goes to negative infinity.
Now what I have is the sum of two vectors. Let's first look at what happens as t goes to infinity. As t goes to infinity, I have the sum of two vectors. This one is completely negligible compared with the one on the right-hand side. In other words, for all intents and purposes, as t goes to infinity, it is this thing that takes over. Therefore, what does the solution look like as t goes to infinity? The answer is it follows the orange line. Now, what does it look like as it backs up? As it came in from negative infinity, what does it look like? Now, this one is a little harder to see.
This is big, but this is infinitely bigger. I mean very, very much bigger, when t is a large negative number. Therefore, what I have is the sum of a very big vector. You're standing on the moon looking at the blackboard, so this is really big. This is a very big vector. This is one million meters long, and this is only 20,000 meters long. That is this guy, and that is this guy. I want the sum of those two. What does the sum look like? The answer is the sum is approximately parallel to the long guy because this is negligible. This does not mean they are next to each other. They are slightly tilted over, but not very much. In other words, as t goes to negative infinity it doesn't coincide with this vector.
The solution doesn't, but it is parallel to it. It has the same direction. I am done. It means far away from the origin, it should be parallel to the pink line. Near the origin it should turn and become more or less coincident with the orange line. And those were the solutions. That's how they look. How about down here? The same thing, like that, but then after a while they turn and join. Here, they have to turn around to join up, but they join. And that is, in a simple way, the sketches of those functions. That is how they must look. What does this say about our state?
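Before answering that, here is a machine-drawn version of the sketch just described, assuming NumPy and Matplotlib are available; the values of c1 and c2 below are arbitrary illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-1.5, 4, 400)
for c1 in (-1, -0.5, 0.5, 1):
    for c2 in (-1, -0.5, 0.5, 1):
        x = c1 * np.exp(-3 * t) + c2 * np.exp(-t)
        y = -c1 * np.exp(-3 * t)
        plt.plot(x, y, lw=0.8)

# the two normal modes: the "pink" line in direction (1, -1) and the "orange" line in direction (1, 0)
plt.plot([-3, 3], [3, -3], 'k--')
plt.plot([-3, 3], [0, 0], 'k--')
plt.xlim(-3, 3); plt.ylim(-3, 3)
plt.title("Nodal sink: trajectories come into the origin along the slow (orange) direction")
plt.show()
```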
Well, it says that the fact that the governor of New Hampshire is indifferent to what Massachusetts is doing produces ultimately harmony. Both states ultimately revert to their normal advertising budgets in spite of the fact that Massachusetts is keeping an eye peeled out for the slightest misbehavior on the part of New Hampshire. Peace reigns, in other words. Now you should know some names. Let's see. I will write names in purple. There are two words that are used to describe this situation. First is the word that describes the general pattern of the way these lines look. The word for that is a node. And the fact that all the trajectories end up at the origin for that one uses the word sink. This could be modified to nodal sink. That would be better. Nodal sink, let's say.
Nodal sink or, if you like to write them in the opposite order, sink node. In the same way there would be something called a source node if I reversed all the arrows. I am not going to calculate an example. Why don't I simply do it by giving you -- For example, if the matrix A produced a solution instead of that one. Suppose it looked like (1, -1) e^(3t). The eigenvalues were reversed, were now positive. And I will make the other one positive, too. c2 (1, 0) e^(t).
What would that change in the picture? The answer is essentially nothing, except the direction of the arrows. In other words, the first thing would still be (1, -1). The only difference is that now as t increases we go the other way. And here the same thing, we have still the same basic vector, the same basic orange vector, orange line, but the solution is now traversed in the opposite direction.
Now, let's do the same thing about dominance, as we did before. Which term dominates as t goes to infinity? This is the dominant term. Because, as t goes to infinity, 3t is much bigger than t. This one, on the other hand, dominates as t goes to negative infinity. How now will the solutions look like? Well, as t goes to infinity, they follow the pink curve. Whereas, as t starts out from negative infinity, they follow the orange curve.
As t goes to infinity, they become parallel to the pink curve, and as t goes to negative infinity, they are very close to the origin and are following the orange curve. This is pink and this is orange. They look like this. Notice the picture basically is the same. It is the picture of a node. All that has happened is the arrows are reversed. And, therefore, this would be called a nodal source. The words source and sink correspond to what you learned in 18.02 and 8.02, I hope, also, or you could call it a source node. Both phrases are used, depending on how you want to use it in a sentence.
And another word for this, this would be called unstable because all of the solutions starting out from near the origin ultimately end up infinitely far away from the origin. This would be called stable. In fact, it would be called asymptotically stable. I don't like the word asymptotically, but it has become standard in the literature. And, more important, it is standard in your textbook. And I don't like to fight with a textbook. It just ends up confusing everybody, including me. That is enough for nodes. I would like to talk now about some of the other cases that can occur because they lead to completely different pictures that you should understand. Let's look at the case where our governors behave a little more badly, a little more combatively.
It is x' = -x + 3y: the -x as before, but this time a firm response by Massachusetts to any sign of increased activity, of stockpiling of advertising budgets, by New Hampshire. Here let's say New Hampshire now is even worse: y' = 5x - 3y. Five times, quintuple or whatever increase Massachusetts makes, of course they don't have an income tax, but they will manage. The minus 3y is as before. Let's again calculate quickly what the characteristic equation is.
Our matrix is now A = [-1, 3; 5, -3]. The characteristic equation now is lambda^2 + 4 lambda - 12 = 0. And this, because I prepared very carefully, all eigenvalues are integers. And so this factors into (lambda + 6)(lambda - 2), does it not?
Yes: 6 lambda minus 2 lambda is 4 lambda. Good. What do we have? Well, first of all we have our eigenvalue lambda equals negative 6. And for the eigenvector that goes with that: this is negative 1 minus negative 6, which makes, shut your eyes, 5. We have 5a1 + 3a2 = 0. And the other equation, I hope it comes out to be something similar. I didn't check. I am hoping this is right. The eigenvector is, okay, you have been taught to always make one of them 1, forget about that. Just pick numbers that make it come out right. I am going to make this one 3, and then I will make this one negative 5.
As I say, I have a policy of integers only. I am a number theorist at heart. That is how I started out life anyway. There we have data from which we can make one solution. How about the other one? The other one will correspond to the eigenvalue lambda equals 2. This time the equation is negative 1 minus 2 is negative 3. It is -3a1 + 3a2 = 0. And now the eigenvector is (1, 1). Now we are ready to draw pictures. We are going to make a similar analysis, but it will go faster now because you have already had the experience of that. First of all, what is our general solution? It is going to be c1 (3, -5) e^(-6t) --
-- plus the other normal mode times an arbitrary constant, c2 (1, 1) e^(2t). I am going to use the same strategy. We have our two normal modes here, eigenvalue-eigenvector solutions from which, by adjusting these constants, we can get our four basic solutions. Those are going to look like, let's draw a picture here. Again, I will color-code them. Let's use pink again. The pink solution now starts at (3, -5). That is where it is when t is zero. And, because of the coefficient minus 6 up there, it is coming into the origin and looks like that.
And its mirror image, of course, does the same thing. That is when c1 is negative one. How about the orange guy? Well, when t is equal to zero, it is at (1, 1). But what is it doing after that? As t increases, it is getting further away from the origin because the sign here is positive. e^(2t) is increasing, it is not decreasing anymore, so this guy is going out. And its mirror image on the other side is doing the same thing. Now all we have to do is fill in the picture. Well, you fill it in by continuity. Your nearby trajectories must be doing what similar thing?
If I start out very near the pink guy, I should stay near the pink guy. But as I get near the origin, I am also approaching the orange guy. Well, there is no other possibility other than that. If you are further away you start turning a little sooner. I am just using an argument from continuity to say the picture must be roughly filled out this way. Maybe not exactly. In fact, there are fine points.
And I am going to ask you to do one of them on Friday for the new problem set, even before the exam, God forbid. But I want you to get a little more experience working with that linear phase portrait visual because it is, I think, one of the best ones this semester. You can learn a lot from it. Anyway, you are not done with it, but I hope you have at least looked at it by now. That is what the picture looks like. First of all, what are we going to name this?
In other words, forget about the arrows. If you just look at the general way those lines go, where have you seen this before? You saw this in 18.02. What was the topic? You were plotting contour curves of functions, were you not? What did you call contour curves that formed that pattern? A saddle point. You called this a saddle point because it was like the center of a saddle. It is like a mountain pass. Here you are going up the mountain, say, and here you are going down, the way the contour line is going down. And this is sort of a min and max point. A maximum if you go in that direction and a minimum if you go in that direction, say. Without the arrows on it, it is like a saddle point. And so the same word is used here. It is called the saddle. You don't say point in the same way you don't say a nodal point.
It is the whole picture, as it were, that is the saddle. It is a saddle. There is the saddle. This is where you sit. Now, should I call it a source or a sink? I cannot call it either because it is a sink along these lines, it is a source along those lines and along the others, it starts out looking like a sink and then turns around and starts acting like a source. The word source and sink are not used for saddle. The only word that is used is unstable because definitely it is unstable. If you start off exactly on the pink lines you do end up at the origin, but if you start anywhere else ever so close to a pink line you think you are going to the origin, but then at the last minute you are zooming off out to infinity again. This is a typical example of instability.
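The continuity argument can also be handed to a machine, assuming NumPy and Matplotlib are available: draw the velocity field of x' = -x + 3y, y' = 5x - 3y and let the streamlines fill in the saddle between the two eigenvector lines.

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-4, 4, 30), np.linspace(-4, 4, 30))
u = -x + 3 * y        # x' component of the velocity field
v = 5 * x - 3 * y     # y' component of the velocity field

plt.streamplot(x, y, u, v, density=1.2, linewidth=0.7)
plt.plot([-3, 3], [5, -5], 'k--')   # eigenvector (3, -5), eigenvalue -6: the incoming line
plt.plot([-4, 4], [-4, 4], 'k--')   # eigenvector (1, 1),  eigenvalue  2: the outgoing line
plt.xlim(-4, 4); plt.ylim(-4, 4)
plt.title("Saddle for A = [-1, 3; 5, -3]")
plt.show()
```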
Only if you do the mathematically possible, but physically impossible thing of starting out exactly on the pink line, only then will you get to the origin. If you start out anywhere else, make the slightest error in measurement and get off the pink line, you end up at infinity. What is the effect with our war-like governors fighting for the tourist trade, willing to spend any amounts of money to match and overmatch what their competitor in the nearby state is spending?
The answer is, they all lose. Since it is mostly this section of the diagram that makes sense, what happens is they end up all spending an infinity of dollars and nobody gets any more tourists than anybody else. So this is a model of what not to do. I have one more model to show you. Maybe we better start over at this board here. Massachusetts on top. New Hampshire on the bottom. x' is going to be, that is Massachusetts, I guess as before. Let me get the numbers right.
Leave that out for a moment. y' = 2x - 3y. New Hampshire behaves normally. It is ready to respond to anything Massachusetts can put out. But by itself, it really wants to bring its budget to normal. Now, Massachusetts, we have a Mormon governor now, I guess. Imagine instead we have a Buddhist governor. A Buddhist governor reacts as follows: minus y, so that x' = -x - y. What does that mean? It means that when he sees New Hampshire increasing the budget, his reaction is, we will lower ours.
We will show them love. It looks suicidal, but what actually happens? Well, our little program is over. Our matrix A = [-1, -1; 2, -3]. The characteristic equation is lambda^2 + 4 lambda plus the other term. And what is the other term? 3 minus negative 2 makes 5, so it is lambda^2 + 4 lambda + 5 = 0. This is not going to factor because I tried it out and I know it is not going to factor. We are going to get lambda equals, we will just use the quadratic formula, negative 4 plus or minus the square root of 16 minus 4 times 5, that is 16 minus 20 or negative 4, all divided by 2, which makes minus 2, pull out the 4, that makes it a 2, cancels this 2, minus 1 inside.
It is -2 +/- i. Complex solutions. What are we going to do about that? Well, you should rejoice when you get this case and are asked to sketch it because, even if you calculate the complex eigenvector and from that take the real and imaginary parts of the complex solution, in fact, you will not be able easily to sketch the answer anyway. But let me show you what sort of thing you can get and then I am going to wave my hands and argue a little bit to try to indicate what it is that the solution actually looks like. You are going to get something that looks like --
A typical real solution is going to look like this. This is going to produce e^(-2t)*e^(it). e^((-2 + i)t). This will be our exponential factor which is shrinking in amplitude. This is going to give me sines and cosines. When I separate out the eigenvector into its real and imaginary parts, it is going to look something like this.
(a1, a2) cos(t), that is from the e^(it) part. Then there will be a sine term. And all that is going to be multiplied by the exponential factor e^(-2t). That is just one normal mode. It is going to be c1 times this plus c2 times something similar. It doesn't matter exactly what it is because they are all going to look the same. Namely, this is a shrinking amplitude. I am not going to worry about that. My real question is, what does this look like? In other words, as a pair of parametric equations, if x = a1 cos(t) + b1 sin(t) and y = a2 cos(t) + b2 sin(t), what does it look like?
Well, what are its characteristics? In the first place, as a curve this part of it is bounded. It stays within some large box because cosine and sine never get bigger than one and never get smaller than minus one. It is periodic. As t --> t + 2pi, it comes back to exactly the same point it was at before. We have a curve that is repeating itself periodically, it does not go off to infinity. And here is where I am waving my hands. It satisfies an equation. Those of you who like to fool around with mathematics a little bit, it is not difficult to show this, but it satisfies an equation of the form Ax^2 + By^2 + Cxy = D.
All you have to do is figure out what the coefficients A, B, C and D should be. And the way to do it is, if you calculate the square of x you are going to get cosine squared, sine squared and a cosine sine term. You are going to get those same three terms here and the same three terms here. You just use undetermined coefficients, set up a system of simultaneous equations and you will be able to find the A, B, C and D that work.
I am looking for a curve that is bounded, keeps repeating its values and that satisfies a quadratic equation which looks like this. Well, an earlier generation would know from high school, these curves are all conic sections. The only curves that satisfy equations like that are hyperbolas, parabolas and ellipses, the conic sections in other words. Circles are a special kind of ellipse. There is a degenerate case.
A pair of lines which can be considered a degenerate hyperbola, if you want. It is as much a hyperbola as a circle, as an ellipse say. Which of these is it? Well, it must be those guys. Those are the only guys that stay bounded and repeat themselves periodically. The other guys don't do that. These are ellipses. And, therefore, what do they look like? Well, they must look like an ellipse that is trying to be an ellipse, but each time it goes around the point is pulled a little closer to the origin. It must be doing this, in other words. And such a point is called a spiral sink. Again sink because, no matter where you start, you will get a curve that spirals into the origin. Spiral is self-explanatory. And the one thing I haven't told you that you must read is how do you know that it goes around counterclockwise and not clockwise?
Read clockwise or counterclockwise. I will give you the answer in 30 seconds, not for this particular curve. That you will have to calculate. All you have to do is put in somewhere. Let's say at the point (1, 0), a single vector from the velocity field. In other words, at the point (1, 0), when x is 1 and y is 0 our vector is (-1, 2), which is the vector minus 1, 2, it goes like this. Therefore, the motion must be counterclockwise. And, by the way, what is the effect of having a Buddhist governor? Peace.
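A numerical check of that whole story, assuming NumPy, SciPy and Matplotlib are available: the eigenvalues come out -2 +/- i, the velocity field at (1, 0) is (-1, 2), and one integrated trajectory spirals counterclockwise into the origin.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

A = np.array([[-1.0, -1.0],
              [2.0, -3.0]])
print(np.linalg.eigvals(A))       # [-2.+1.j -2.-1.j]
print(A @ np.array([1.0, 0.0]))   # [-1.  2.] : at (1, 0) the motion is up and to the left, so counterclockwise

# integrate x' = Ax starting at (1, 0) and plot the resulting spiral
sol = solve_ivp(lambda t, z: A @ z, (0.0, 6.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 6.0, 600)
x, y = sol.sol(t)
plt.plot(x, y)
plt.title("Spiral sink: eigenvalues -2 +/- i")
plt.show()
```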
Everything spirals into the origin and everybody is left with the same advertising budget they always had. Thanks. | http://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/video-lectures/lecture-27-sketching-solutions-of-2x2-homogeneous-linear-system-with-constant-coefficients/ | 13 |
60 | A viscometer (also called viscosimeter) is an instrument used to measure the viscosity of a fluid. For liquids with viscosities which vary with flow conditions, an instrument called a rheometer is used. Viscometers only measure under one flow condition.
In general, either the fluid remains stationary and an object moves through it, or the object is stationary and the fluid moves past it. The drag caused by relative motion of the fluid and a surface is a measure of the viscosity. The flow conditions must have a sufficiently small value of Reynolds number for there to be laminar flow.
At 20.00 degrees Celsius the viscosity of water is 1.002 mPa·s and its kinematic viscosity (ratio of viscosity to density) is 1.0038 mm2/s. These values are used for calibrating certain types of viscometer.
Standard laboratory viscometers for liquids
U-tube viscometers
These devices also are known as glass capillary viscometers or Ostwald viscometers, named after Wilhelm Ostwald. Another version is the Ubbelohde viscometer, which consists of a U-shaped glass tube held vertically in a controlled temperature bath. In one arm of the U is a vertical section of precise narrow bore (the capillary). Above this is a bulb; there is another bulb lower down on the other arm. In use, liquid is drawn into the upper bulb by suction, then allowed to flow down through the capillary into the lower bulb. Two marks (one above and one below the upper bulb) indicate a known volume. The time taken for the level of the liquid to pass between these marks is proportional to the kinematic viscosity. Most commercial units are provided with a conversion factor, or can be calibrated by a fluid of known properties. The time required for the test liquid to flow through a capillary of known diameter between two marked points is measured. By multiplying the time taken by the factor of the viscometer, the kinematic viscosity is obtained.
Such viscometers are also classified as direct flow or reverse flow. Reverse flow viscometers have the reservoir above the markings, and direct flow viscometers have the reservoir below the markings. Such classifications exist so that the level can be determined even when opaque or staining liquids are measured; otherwise the liquid would cover the markings and make it impossible to gauge the time at which the level passes a mark. This also allows the viscometer to have more than one set of marks, permitting an immediate timing of the flow to a third mark, therefore yielding two timings and allowing subsequent calculation of determinability to ensure accurate results.
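As a minimal illustration of the time-times-factor conversion described above (the tube factor below is made up for the example; a real instrument's factor comes from its calibration):

```python
def kinematic_viscosity_mm2_per_s(efflux_time_s: float, tube_factor: float) -> float:
    """Kinematic viscosity (mm^2/s, i.e. cSt) = calibration factor x efflux time."""
    return tube_factor * efflux_time_s

# e.g. a tube with factor 0.004 (mm^2/s per second) and a 250 s efflux time
print(kinematic_viscosity_mm2_per_s(250.0, 0.004))   # 1.0 mm^2/s, about that of water at 20 C
```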
Falling sphere viscometers
Stokes' law is the basis of the falling sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes' law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters is normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes, including many different oils and polymer liquids such as solutions.
In 1851, George Gabriel Stokes derived an expression for the frictional force (also called drag force) exerted on spherical objects with very small Reynolds numbers (e.g., very small particles) in a continuous viscous fluid by solving the small fluid-mass limit of the generally unsolvable Navier-Stokes equations. The force is Fd = 6 π μ R v, where:
- Fd is the frictional force,
- R is the radius of the spherical object,
- μ is the fluid viscosity, and
- v is the particle's velocity.
If the particles are falling in the viscous fluid by their own weight, then a terminal velocity, also known as the settling velocity, is reached when this frictional force combined with the buoyant force exactly balances the gravitational force. The resulting settling velocity (or terminal velocity) is given by Vs = (2/9) (ρp - ρf) g R^2 / μ (a short numerical example follows the list of symbols below), where:
- Vs is the particles' settling velocity (m/s) (vertically downwards if ρp > ρf, upwards if ρp < ρf),
- R is the Stokes radius of the particle (m),
- g is the gravitational acceleration (m/s2),
- ρp is the density of the particles (kg/m3),
- ρf is the density of the fluid (kg/m3), and
- μ is the (dynamic) fluid viscosity (Pa·s).
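A numerical sketch of the settling-velocity formula above, for a small steel sphere falling through glycerine; the material properties are typical textbook values, used here only for illustration.

```python
import math

def settling_velocity(radius_m, rho_particle, rho_fluid, viscosity_pa_s, g=9.81):
    """Stokes terminal velocity in m/s; valid only at small Reynolds number."""
    return 2.0 / 9.0 * (rho_particle - rho_fluid) * g * radius_m ** 2 / viscosity_pa_s

v = settling_velocity(radius_m=1e-3,        # 1 mm radius ball
                      rho_particle=7800.0,  # steel, kg/m^3
                      rho_fluid=1260.0,     # glycerine, kg/m^3
                      viscosity_pa_s=1.4)   # glycerine near 20 C, Pa*s
print(v)   # roughly 0.01 m/s

# check: at this speed the Stokes drag balances the weight minus the buoyancy
drag = 6 * math.pi * 1.4 * 1e-3 * v
net_weight = (7800.0 - 1260.0) * 9.81 * (4.0 / 3.0) * math.pi * (1e-3) ** 3
print(drag, net_weight)   # the two agree
```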
A limiting factor on the validity of this result is the roughness of the sphere being used.
A modification of the straight falling sphere viscometer is a rolling ball viscometer, which times a ball rolling down a slope whilst immersed in the test fluid. This can be further improved by using a patented V plate, which increases the number of rotations for a given distance travelled, allowing smaller, more portable devices. This type of device is also suitable for shipboard use. More recently, an instrument called a survismeter has been developed; it measures not only viscosity but also surface tension, interfacial tension and wetting coefficient with high accuracy and precision. The survismeter also measures a newer parameter termed friccohesity, which characterizes the interplay between the cohesive forces and the frictional forces among similar or dissimilar molecules dispersed in a given medium. Because friccohesity reflects the cohesive (potential) and frictional (kinetic) forces together, it is closely tied to how the dispersed particles are distributed as they gain kinetic energy: the weaker the cohesive forces, the greater the pressure exerted by the kinetically moving molecules, much as in the melting of ice or other solids. When the motion is instead confined to a fixed track, as in capillary flow within rigid walls, the particle distribution is restricted and the corresponding quantity is called restricted friccohesity.
Falling Piston Viscometer
Also known as the Norcross viscometer after its inventor, Austin Norcross. The principle of viscosity measurement in this rugged and sensitive industrial device is based on a piston and cylinder assembly. The piston is periodically raised by an air lifting mechanism, drawing the material being measured down through the clearance (gap) between the piston and the wall of the cylinder into the space which is formed below the piston as it is raised. The assembly is then typically held up for a few seconds, then allowed to fall by gravity, expelling the sample out through the same path that it entered, creating a shearing effect on the measured liquid, which makes this viscometer particularly sensitive and good for measuring certain thixotropic liquids. The time of fall is a measure of viscosity, with the clearance between the piston and inside of the cylinder forming the measuring orifice. The viscosity controller measures the time of fall (time-of-fall seconds being the measure of viscosity) and displays the resulting viscosity value. The controller can calibrate the time-of-fall value to cup seconds (known as efflux cup), Saybolt universal second (SUS) or centipoise.
Industrial use is popular due to simplicity, repeatability, low maintenance and longevity. This type of measurement is not affected by flow rate or external vibrations. The principle of operation can be adapted for many different conditions, making it ideal for process control environments.
Oscillating Piston Viscometer
Sometimes referred to as an electromagnetic viscometer or EMV viscometer, this design was invented at Cambridge Viscosity (formerly Cambridge Applied Systems) in 1986. The sensor comprises a measurement chamber and a magnetically influenced piston. Measurements are taken whereby a sample is first introduced into the thermally controlled measurement chamber where the piston resides. Electronics drive the piston into oscillatory motion within the measurement chamber with a controlled magnetic field. A shear stress is imposed on the liquid (or gas) due to the piston travel and the viscosity is determined by measuring the travel time of the piston. The construction parameters for the annular spacing between the piston and measurement chamber, the strength of the electromagnetic field, and the travel distance of the piston are used to calculate the viscosity according to Newton’s Law of Viscosity.
The oscillating piston viscometer technology has been adapted for small sample viscosity and micro-sample viscosity testing in laboratory applications. It has also been adapted to measure viscosity at high pressure and high temperature in both laboratory and process environments. The viscosity sensors have been scaled for a wide range of industrial applications such as small size viscometers for use in compressors and engines, flow-through viscometers for dip coating processes, in-line viscometers for use in refineries, and hundreds of other applications. Improvements in sensitivity from modern electronics are stimulating growth in the popularity of oscillating piston viscometers among academic laboratories exploring gas viscosity.
Vibrational viscometers
Vibrational viscometers date back to the 1950s Bendix instrument, which is of a class that operates by measuring the damping of an oscillating electromechanical resonator immersed in a fluid whose viscosity is to be determined. The resonator generally oscillates in torsion or transversely (as a cantilever beam or tuning fork). The higher the viscosity, the larger the damping imposed on the resonator. The resonator's damping may be measured by one of several methods:
- Measuring the power input necessary to keep the oscillator vibrating at a constant amplitude. The higher the viscosity, the more power is needed to maintain the amplitude of oscillation.
- Measuring the decay time of the oscillation once the excitation is switched off. The higher the viscosity, the faster the signal decays.
- Measuring the frequency of the resonator as a function of phase angle between excitation and response waveforms. The higher the viscosity, the larger the frequency change for a given phase change.
The vibrational instrument also suffers from a lack of a defined shear field, which makes it unsuited to measuring the viscosity of a fluid whose flow behaviour is not known beforehand.
Vibrating viscometers are rugged industrial systems used to measure viscosity in the process condition. The active part of the sensor is a vibrating rod. The vibration amplitude varies according to the viscosity of the fluid in which the rod is immersed. These viscosity meters are suitable for measuring clogging fluid and high-viscosity fluids, including those with fibers (up to 1,000 Pa·s). Currently, many industries around the world consider these viscometers to be the most efficient system with which to measure the viscosities of a wide range of fluids; by contrast, rotational viscometers require more maintenance, are unable to measure clogging fluid, and require frequent calibration after intensive use. Vibrating viscometers have no moving parts, no weak parts and the sensitive part is very small. Even very basic or acidic fluids can be measured by adding a protective coating such as enamel, or by changing the material of the sensor to a material such as 316L stainless steel.
Rotational viscometers
Rotational viscometers use the idea that the torque required to turn an object in a fluid is a function of the viscosity of that fluid. They measure the torque required to rotate a disk or bob in a fluid at a known speed.
'Cup and bob' viscometers work by defining the exact volume of a sample which is to be sheared within a test cell; the torque required to achieve a certain rotational speed is measured and plotted. There are two classical geometries in "cup and bob" viscometers, known as either the "Couette" or "Searle" systems - distinguished by whether the cup or bob rotates. The rotating cup is preferred in some cases because it reduces the onset of Taylor vortices, but is more difficult to measure accurately.
'Cone and Plate' viscometers use a cone of very shallow angle in bare contact with a flat plate. With this system the shear rate beneath the plate is constant to a modest degree of precision and deconvolution of a flow curve; a graph of shear stress (torque) against shear rate (angular velocity) yields the viscosity in a straightforward manner.
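As an illustrative sketch only: the small-angle relations used below (shear rate = Ω/α, shear stress = 3M/(2πR^3)) are standard textbook formulas for the cone-and-plate geometry, not taken from this article, and the numbers are made up.

```python
import math

def cone_plate_viscosity(torque_nm, omega_rad_s, radius_m, cone_angle_rad):
    """Newtonian viscosity (Pa*s) from a cone-and-plate torque reading, small-angle approximation."""
    shear_stress = 3.0 * torque_nm / (2.0 * math.pi * radius_m ** 3)   # Pa
    shear_rate = omega_rad_s / cone_angle_rad                          # 1/s
    return shear_stress / shear_rate

# e.g. a 30 mm radius, 1 degree cone spinning at 10 rad/s with 0.1 mN*m of torque
print(cone_plate_viscosity(1e-4, 10.0, 0.03, math.radians(1.0)))   # about 3 mPa*s
```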
Electromagnetically Spinning Sphere Viscometer (EMS Viscometer)
The EMS Viscometer measures the viscosity of liquids through observation of the rotation of a sphere which is driven by electromagnetic interaction: Two magnets attached to a rotor create a rotating magnetic field. The sample (3) to be measured is in a small test tube (2). Inside the tube is an aluminium sphere (4). The tube is located in a temperature controlled chamber (1) and set such that the sphere is situated in the centre of the two magnets. The rotating magnetic field induces eddy currents in the sphere. The resulting Lorentz interaction between the magnetic field and these eddy currents generate torque that rotates the sphere. The rotational speed of the sphere depends on the rotational velocity of the magnetic field, the magnitude of the magnetic field and the viscosity of the sample around the sphere. The motion of the sphere is monitored by a video camera (5) located below the cell. The torque applied to the sphere is proportional to the difference in the angular velocity of the magnetic field ΩB and the one of the sphere ΩS. There is thus a linear relationship between (ΩB−ΩS)/ΩS and the viscosity of the liquid.
This new measuring principle was developed by Sakai et al. at the University of Tokyo. The EMS viscometer distinguishes itself from other rotational viscometers by three main characteristics:
- All parts of the viscometer which come in direct contact with the sample are disposable and inexpensive.
- The measurements are performed in a sealed sample vessel.
- The EMS Viscometer requires only very small sample quantities (0.3 mL).
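A minimal sketch of how the linear relationship stated above would be used in practice; the calibration constant k below is hypothetical and would in reality be determined with a standard of known viscosity.

```python
def ems_viscosity(omega_field, omega_sphere, k):
    """Viscosity from the rotational speeds of the magnetic field and the sphere, given a calibration constant k."""
    return k * (omega_field - omega_sphere) / omega_sphere

print(ems_viscosity(omega_field=100.0, omega_sphere=80.0, k=5.0))   # 1.25, in whatever units k carries
```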
Stabinger viscometer
By modifying the classic Couette rotational viscometer, an accuracy comparable to that of kinematic viscosity determination is achieved. The internal cylinder in the Stabinger Viscometer is hollow and specifically lighter than the sample, thus floats freely in the sample, centered by centrifugal forces. The formerly inevitable bearing friction is thus fully avoided. The speed and torque measurement is implemented without direct contact by a rotating magnetic field and an eddy current brake. This allows for a previously unprecedented torque resolution of 50 pN·m and an exceedingly large measuring range from 0.2 to 20,000 mPa·s with a single measuring system. A built-in density measurement based on the oscillating U-tube principle allows the determination of kinematic viscosity from the measured dynamic viscosity employing the relation ν = η / ρ, where ν is the kinematic viscosity, η the dynamic viscosity and ρ the density.
The Stabinger Viscometer was presented for the first time by Anton Paar GmbH at the ACHEMA in the year 2000. The measuring principle is named after its inventor Dr. Hans Stabinger.
Bubble viscometer
Bubble viscometers are used to quickly determine kinematic viscosity of known liquids such as resins and varnishes. The time required for an air bubble to rise is directly proportional to the viscosity of the liquid, so the faster the bubble rises, the lower the viscosity. The Alphabetical Comparison Method uses 4 sets of lettered reference tubes, A5 through Z10, of known viscosity to cover a viscosity range from 0.005 to 1,000 stokes. The Direct Time Method uses a single 3-line times tube for determining the "bubble seconds", which may then be converted to stokes.
This method is considerably accurate, but the measurements can vary due to variances in buoyancy because of the change in shape of the bubble in the tube. However, this does not cause any sort of serious miscalculation.
Micro-Slit Viscometers
Viscosity measurement using flow through a slit dates back to 1838, when Jean Léonard Marie Poiseuille conducted experiments to characterize liquid flow through a pipe. He found that a viscous flow through a circular pipe requires pressure to overcome the wall shear stress. That was the birth of the Hagen-Poiseuille flow equation. The slit viscometer geometry has flows analogous to the cylindrical pipe but has the additional advantage that no entrance or exit pressure drop corrections are needed. Detailed information regarding the implementation of this principle with modern MEMS and microfluidic science is further explained in a paper by RheoSense, Inc.
Generally, the slit viscosity technology offers the following advantages:
- Measures true (absolute) viscosity for both Newtonian and non-Newtonian fluids
- Enclosed system eliminates air interface and sample evaporation effects
- Measurements can be made using very small sample volumes
- Laminar flow even at high shear rates due to low Reynolds number
- Slit flow simulates real application flow conditions like drug injection or inkjetting.
Miscellaneous viscometer types
In the I.C.I "Oscar" viscometer, a sealed can of fluid was oscillated torsionally, and by clever measurement techniques it was possible to measure both viscosity and elasticity in the sample.
The Marsh funnel viscometer measures viscosity from the time (efflux time) it takes a known volume of liquid to flow from the base of a cone through a short tube. This is similar in principle to the flow cups (efflux cups) like the Ford, Zahn and Shell cups which use different shapes to the cone and various nozzle sizes. The measurements can be done according to ISO 2431, ASTM D1200 - 10 or DIN 53411.
See also
- ASTM Paint and Coatings Manual 0-8031-2060-5
- Macosko, C. W. (1994). RHEOLOGY Principles, Measurement and Applications. New Jersey: Wiley-VCH. ISBN 1-56081-579-5.
- Sharma, V.; Jaishankar, A., Wang, Y. C. and McKinley, G.H. (July 2011). "Rheology of Globular Proteins, Apparent Yield Stress, High Shear Rate Viscosity and Interfacial Viscoelasticity of Bovine Serum Albumin Solutions". Soft Matter (11): 5150–5160. Bibcode:2011SMat....7.5150S. doi:10.1039/C0SM01312A.
- Pipe, C.J.; Majmudar, T.S. and McKinley, G.H. (2008). "High Shear Rate Viscometry". Rheologica Acta 47: 621–642. doi:10.1007/s00397-008-0268-1.
- British Standards Institute BS ISO/TR 3666:1998 Viscosity of water
- British Standards Institute BS 188:1977 Methods for Determination of the viscosity of liquids | http://en.wikipedia.org/wiki/Viscometry | 13 |
218 | In physics, the Coriolis effect is an apparent deflection of moving objects when they are viewed from a rotating frame of reference. It is named after Gaspard-Gustave Coriolis, a French scientist who described it in 1835, though the mathematics appeared in the tidal equations of Pierre-Simon Laplace in 1778.
This effect is caused by the Coriolis force, which appears in the equation of motion of an object in a rotating frame of reference. It is an example of a fictitious force (or pseudo force), because it does not appear when the motion is expressed in an inertial frame of reference, in which the motion of an object is explained by the real impressed forces, together with inertia. In a rotating frame, the Coriolis force, which depends on the velocity of the moving object, and centrifugal force, which does not depend on the velocity of the moving object, are needed in the equation to correctly describe the motion.
Perhaps the most commonly encountered rotating reference frame is the Earth. Freely moving objects on the surface of the Earth experience a Coriolis force, and appear to veer to the right in the northern hemisphere, and to the left in the southern. Movements of air in the atmosphere and water in the ocean are notable examples of this behavior: rather than flowing directly from areas of high pressure to low pressure, as they would on a non-rotating planet, winds and currents tend to flow to the right of this direction north of the equator, and to the left of this direction south of the equator. This effect is responsible for the rotation of large cyclones and tornados.
In non-vector terms: at a given rate of rotation of the observer, the magnitude of the Coriolis acceleration of the object is proportional to the velocity of the object and also to the sine of the angle between the direction of movement of the object and the axis of rotation.
The vector formula for the magnitude and direction of the Coriolis acceleration is a_C = -2 Ω × v,
where (here and below) v is the velocity of the particle in the rotating system, and Ω is the angular velocity vector which has magnitude equal to the rotation rate ω and is directed along the axis of rotation of the rotating reference frame, and the × symbol represents the cross product operator.
The equation may be multiplied by the mass of the relevant object to produce the Coriolis force: F_C = -2 m Ω × v.
See fictitious force for a derivation.
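A direct transcription of the two formulas into code, assuming NumPy is available; the rotation rate, velocity and mass below are illustrative numbers, chosen so the rightward deflection is easy to see.

```python
import numpy as np

omega = 0.5                              # rotation rate of the frame, rad/s (illustrative)
Omega = np.array([0.0, 0.0, omega])      # rotation axis along +z (counterclockwise rotation)
v = np.array([10.0, 0.0, 0.0])           # object moving along +x in the rotating frame, m/s
m = 2.0                                  # mass, kg (illustrative)

a_C = -2.0 * np.cross(Omega, v)          # Coriolis acceleration
F_C = m * a_C                            # Coriolis force
print(a_C)   # [  0. -10.   0.] : the deflection is to the right of the motion
print(F_C)   # [  0. -20.   0.]
```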
The Coriolis effect is the behavior added by the Coriolis acceleration. The formula implies that the Coriolis acceleration is perpendicular both to the direction of the velocity of the moving mass and to the frame's rotation axis. So in particular:
- if the velocity is parallel to the rotation axis, the Coriolis acceleration is zero
- if the velocity is straight inward to the axis, the acceleration is in the direction of local rotation
- if the velocity is straight outward from the axis, the acceleration is against the direction of local rotation
- if the velocity is in the direction of local rotation, the acceleration is outward from the axis
- if the velocity is against the direction of local rotation, the acceleration is inward to the axis
The vector cross product can be evaluated as the determinant of a matrix whose first row contains the unit vectors, whose second row contains the components of Ω and whose third row contains the components of v: Ω × v = (Ω_y v_z - Ω_z v_y) i + (Ω_z v_x - Ω_x v_z) j + (Ω_x v_y - Ω_y v_x) k,
where the vectors i, j, k are unit vectors in the x, y and z directions.
Consider a location with latitude φ on a sphere that is rotating around the north-south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system (listing components in the order East (e), North (n) and Upward (u)) are: Ω = ω (0, cos φ, sin φ), v = (ve, vn, vu), and a_C = -2 Ω × v = 2 ω (vn sin φ - vu cos φ, -ve sin φ, ve cos φ).
When considering atmospheric or oceanic dynamics, the vertical velocity is small and the vertical component of the Coriolis acceleration is small compared to gravity. For such cases, only the horizontal (East and North) components matter. The restriction of the above to the horizontal plane is (setting vu=0): (a_e, a_n) = f (vn, -ve),
where f = 2 ω sin φ is called the Coriolis parameter.
By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south. Similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east — that is, standing on the horizontal plane, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right. That is:
On a merry-go-round in the night
Coriolis was shaken with fright
Despite how he walked
'Twas like he was stalked
By some fiend always pushing him right
– David Morin, Eric Zaslow, E'beth Haley, John Golden, and Nathan Salwen
As a different case, consider equatorial motion setting φ = 0°. In this case, Ω is parallel to the North or n-axis, and: a_e = -2 ω vu, a_n = 0, a_u = 2 ω ve.
Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west.
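A short numerical sketch of the local-frame components worked out above, assuming NumPy is available; Earth's rotation rate is used and the velocities are illustrative.

```python
import numpy as np

def coriolis_acceleration(lat_deg, v_enu, omega=7.2921e-5):
    """Coriolis acceleration in the local (east, north, up) frame, m/s^2."""
    lat = np.radians(lat_deg)
    Omega = omega * np.array([0.0, np.cos(lat), np.sin(lat)])
    return -2.0 * np.cross(Omega, np.asarray(v_enu, dtype=float))

print(coriolis_acceleration(45, [100, 0, 0]))   # eastward motion at 45 N: pushed south (and slightly up)
print(coriolis_acceleration(45, [0, 100, 0]))   # northward motion at 45 N: pushed east
print(coriolis_acceleration(0,  [100, 0, 0]))   # eastward motion at the equator: pushed straight up (Eotvos effect)
```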
For additional examples, see rotating spheres and dropping ball in the article on centrifugal force, and carousel in fictitious force.
The Coriolis effect exists only when using a rotating reference frame. It is mathematically deduced from the law of inertia. Hence it does not correspond to any actual acceleration or force, but only the appearance thereof from the point of view of a rotating system.
That said, a denizen of a rotating frame, such as an astronaut in a rotating space station, very probably will find the interpretation of everyday life in terms of the Coriolis force accords more simply with intuition and experience than a cerebral reinterpretation of events from an inertial standpoint. For example, nausea due to an experienced push may be more instinctively explained by Coriolis force than by the law of inertia. See also Coriolis effect (perception).
The Coriolis effect exhibited by a moving object can be interpreted as being the sum of the effects of two different causes of equal magnitude. For a mathematical formulation see fictitious force.
The first cause is the change of the velocity of an object in time. The same velocity (in an inertial frame of reference where the normal laws of physics apply) will be seen as different velocities at different times in a rotating frame of reference. The apparent acceleration is proportional to the angular velocity of the reference frame (the rate at which the coordinate axes change direction), and to the velocity of the object. This gives a term -Ω × v. The minus sign arises from the traditional definition of the cross product (right hand rule), and from the sign convention for angular velocity vectors.
The second cause is change of velocity in space. Different points in a rotating frame of reference have different velocities (as seen from an inertial frame of reference). In order for an object to move in a straight line it must therefore be accelerated so that its velocity changes from point to point by the same amount as the velocities of the frame of reference. The effect is proportional to the angular velocity (which determines the relative speed of two different points in the rotating frame of reference), and the velocity of the object perpendicular to the axis of rotation (which determines how quickly it moves between those points). This also gives a term -Ω × v.
Corrections to common misconceptions about the Coriolis effect
- The Coriolis effect does not have a significant impact on the swirl of the flushing water of a toilet. Indeed, the direction of the swirl is mainly defined by the direction with which the water is introduced in the toilet, which has a much higher impact than the Coriolis effect
- In theory, in a perfect sink, the Coriolis effect would define the direction of the swirl, as has been proved by Ascher Shapiro in 1962. Nevertheless, any imperfection of the sink, or initial rotation of the water, can compensate for the Coriolis effect, due to its very low amplitude.
- The Coriolis effect is not a result of the curvature of the Earth, only of its rotation. (However, the value of the Coriolis parameter, f, does vary with latitude, and that dependence is due to the Earth's shape.)
- Ballistic missiles and satellites appear to follow curved paths when plotted on common world maps mainly because the earth is spherical and the shortest distance between two points on the earth's surface (called a great circle) is usually not a straight line on those maps. Every two-dimensional (flat) map necessarily distorts the earth's curved (three-dimensional) surface in some way. Typically (as in the commonly used Mercator projection, for example), this distortion increases with proximity to the poles. In the northern hemisphere, for example, a ballistic missile fired toward a distant target using the shortest possible route (a great circle) will appear on such maps to follow a path north of the straight line from firing point to target, and then curve back toward the equator. This occurs because the latitudes, which are projected as straight horizontal lines on most world maps, are in fact circles on the surface of a sphere, which get smaller as they get closer to the pole. Being simply a consequence of the sphericity of the Earth, this would be true even if the Earth didn't rotate. The Coriolis effect is of course also present, but its effect on the plotted path is much smaller.
- The Coriolis force should not be confused with the centrifugal force, given (per unit mass) by −Ω × (Ω × r). A rotating frame of reference will always cause a centrifugal force no matter what the object is doing (unless that body is particle-like and lies on the axis of rotation), whereas the Coriolis force requires the object to be in motion relative to the rotating frame with a velocity that is not parallel to the rotation axis. Because the centrifugal force always exists, it can be easy to confuse the two, making simple explanations of the effect of Coriolis in isolation difficult. In particular, when the velocity v is tangential to a circle centered on and perpendicular to the axis of rotation, the Coriolis force is parallel to the centrifugal force. In a rotating reference frame with a rotational speed equal to that of the object, the apparent velocity of the object is zero, and there is no Coriolis force.
Cannon on turntable
Figure 1 is an animation of the classic illustration of Coriolis force. Another visualization of the Coriolis and centrifugal forces is this animation clip. Figure 3 is a graphical version.
Here is a question: given the radius of the turntable R, the rate of angular rotation ω, and the speed of the cannonball (assumed constant) v, what is the correct angle θ to aim so as to hit the target at the edge of the turntable?
The inertial frame of reference provides one way to handle the question: calculate the time to interception, which is tf = R / v. Then, the turntable revolves through an angle ω tf in this time. If the cannon is pointed at an angle θ = ω tf = ω R / v, then the cannonball arrives at the periphery at position number 3 at the same time as the target.
No discussion of Coriolis force can arrive at this solution as simply, so the reason to treat this problem is to demonstrate Coriolis formalism in an easily visualized situation.
The trajectory in the inertial frame (denoted A) is a straight-line radial path at angle θ. The position of the cannonball in ( x, y ) coordinates at time t is rA(t) = v t ( cos θ, sin θ ).
In the turntable frame (denoted B), the x-y axes rotate at angular rate ω, so the trajectory becomes rB(t) = v t ( cos(θ − ωt), sin(θ − ωt) ),
and three examples of this result are plotted in Figure 4.
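A short Python sketch of this setup (an illustrative reconstruction, not code from the article; the function name and parameter values are ours) computes the aim angle and both trajectories:

```python
import numpy as np

def cannon_on_turntable(R=1.0, omega=1.0, v=2.0, n=100):
    """Aim angle and trajectories for a cannonball fired from the center
    of a turntable of radius R (rotation rate omega, ball speed v)."""
    t_f = R / v                 # time to reach the rim
    theta = omega * t_f         # lead angle so the ball meets the target
    t = np.linspace(0.0, t_f, n)

    # Inertial frame A: straight radial path at angle theta.
    x_a = v * t * np.cos(theta)
    y_a = v * t * np.sin(theta)

    # Turntable frame B: the same path expressed in the rotating axes.
    alpha = theta - omega * t
    x_b = v * t * np.cos(alpha)
    y_b = v * t * np.sin(alpha)
    return theta, (x_a, y_a), (x_b, y_b)

theta, path_a, path_b = cannon_on_turntable()
print("aim angle (radians):", theta)
print("final position in the rotating frame:", path_b[0][-1], path_b[1][-1])
```

At t = tf the rotating-frame position comes out at (R, 0), i.e., at the target on the rim, confirming the aiming rule θ = ωR/v.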
To determine the components of acceleration, a general expression is used from the article fictitious force: aB = aA − 2 Ω × vB − Ω × (Ω × rB) − (dΩ/dt) × rB,
in which the term −2 Ω × vB is the Coriolis acceleration and the term −Ω × (Ω × rB) is the centrifugal acceleration. Here Ω is constant and the cannonball is unaccelerated in the inertial frame, so aA = 0. The results are (let α = θ − ωt): the position in the turntable frame is rB(t) = v t ( cos α, sin α ),
producing a centrifugal acceleration:
−Ω × (Ω × rB) = ω² v t ( cos α, sin α ).
Differentiating rB(t) gives the velocity in the turntable frame, vB(t) = v ( cos α, sin α ) + ω v t ( sin α, −cos α ),
producing a Coriolis acceleration:
−2 Ω × vB = 2 ω v ( sin α, −cos α ) − 2 ω² v t ( cos α, sin α ).
Figure 5 and Figure 6 show these accelerations for a particular example.
It is seen that the Coriolis acceleration not only cancels the centrifugal acceleration, but together they provide a net "centripetal," radially inward component of acceleration (that is, directed toward the center of rotation): −ω² rB(t) = −ω² v t ( cos α, sin α ),
and an additional component of acceleration perpendicular to rB(t): 2 ω v ( sin α, −cos α ), of magnitude 2ωv.
The "centripetal" component of acceleration resembles that for circular motion at radius rB, while the perpendicular component is velocity dependent, increasing with the radial velocity v and directed to the right of the velocity. The situation could be described as a circular motion combined with an "apparent Coriolis acceleration" of 2ωv. However, this is a rough labeling: a careful designation of the true centripetal force refers to a local reference frame that employs the directions normal and tangential to the path, not coordinates referred to the axis of rotation.
These results also can be obtained directly by two time differentiations of rB (t). Agreement of the two approaches demonstrates that one could start from the general expression for fictitious acceleration above and derive the trajectories of Figure 4. However, working from the acceleration to the trajectory is more complicated than the reverse procedure used here, which, of course, is made possible in this example by knowing the answer in advance.
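The check mentioned above (recovering the accelerations by differentiating rB(t) twice) is easy to carry out numerically. The following Python sketch is our own illustration, with arbitrary parameter values; it compares a finite-difference second derivative of the trajectory with the centripetal-plus-perpendicular decomposition just described:

```python
import numpy as np

omega, v, theta = 1.0, 2.0, 0.6            # illustrative parameter values
t = np.linspace(0.01, 0.5, 2000)
alpha = theta - omega * t

# Trajectory in the turntable frame, rB(t) = v t (cos(alpha), sin(alpha))
x_b = v * t * np.cos(alpha)
y_b = v * t * np.sin(alpha)

# Acceleration by differentiating the trajectory twice (finite differences).
dt = t[1] - t[0]
ax = np.gradient(np.gradient(x_b, dt), dt)
ay = np.gradient(np.gradient(y_b, dt), dt)

# Analytic decomposition: centripetal part -omega^2 * rB plus a component
# of magnitude 2*omega*v perpendicular to rB, to the right of the velocity.
ax_pred = -omega**2 * x_b + 2.0 * omega * v * np.sin(alpha)
ay_pred = -omega**2 * y_b - 2.0 * omega * v * np.cos(alpha)

# Compare away from the endpoints, where finite differences are less accurate.
sl = slice(2, -2)
print("max deviation:", max(np.max(np.abs(ax - ax_pred)[sl]),
                            np.max(np.abs(ay - ay_pred)[sl])))
```

The deviation is at the level of the finite-difference truncation error, consistent with the two approaches agreeing.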
As a result of this analysis an important point appears: all the fictitious accelerations must be included to obtain the correct trajectory. In particular, besides the Coriolis acceleration, the centrifugal force plays an essential role. It is easy to get the impression from verbal discussions of the cannonball problem, which are focused on displaying the Coriolis effect particularly, that the Coriolis force is the only factor that must be considered; emphatically, that is not so. A turntable for which the Coriolis force is the only factor is the parabolic turntable. A somewhat more complex situation is the idealized example of flight routes over long distances, where the centrifugal force of the path and aeronautical lift are countered by gravitational attraction.
Tossed ball on a rotating carousel
Figure 7 illustrates a ball tossed from 12:00 o'clock toward the center of a counterclockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the center, while the ball-thrower rotates counterclockwise with the carousel. On the right the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed.
On the left, two arrows locate the ball relative to the ball-thrower. One of these arrows is from the thrower to the center of the carousel (providing the ball-thrower's line of sight), and the other points from the center of the carousel to the ball. (This arrow gets shorter as the ball approaches the center.) A shifted version of the two arrows is shown dotted.
On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the center of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the center of the carousel, providing the position of the ball as seen by the rotating observer. By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel.
The ball travels in the air, and there is no net force upon it. To the stationary observer the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counterclockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory.
Figure 8 describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. Figure 8 shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight).
On the carousel, instead of tossing the ball straight at a rail to bounce back, the ball must be thrown toward the center of the carousel and then seems to the camera to bear left from the direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left of the direction of travel on both the inward and return trajectories. The curved path requires this observer to recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection.
The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position one (1). From the inertial viewer's standpoint, positions one (1), two (2), three (3) are occupied in sequence. At position 2 the ball strikes the rail, and at position 3 the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied.
Video clips of the tossed ball and other experiments can be found on YouTube and at the University of Illinois WW2010 Project (some clips repeat only a fraction of a full rotation).
Some mathematical details
What follows are some details of the calculation of the trajectories. The path from the camera's viewpoint is related to that of the stationary observer by taking into account the rotation at an angular rate ω. If we let the path in inertial coordinates be (x, y) and in rotating coordinates (x', y'), then the path from the camera's viewpoint is (see matrix multiplication): x' = x cos(ωt) − y sin(ωt), y' = x sin(ωt) + y cos(ωt),
assuming that at t = 0 s the two coordinate systems are aligned. A quarter rotation later, cos(ωt) = cos(π/2) = 0, and sin(ωt) = sin(π/2) = 1, and the transformation shows the x' -axis lies along the negative y-axis, while the y' -axis lies along the positive x-axis, as expected for clockwise rotation.
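A brief Python sketch of this transformation (our own illustration; the toss geometry and rotation rate are assumed values, not taken from the figures) maps the straight inertial-frame path of the ball into the rotating camera frame:

```python
import numpy as np

omega = 0.5                    # carousel rotation rate (rad/s), assumed value
R, u = 2.0, 1.0                # rim radius and toss speed, assumed values
t = np.linspace(0.0, R / u, 50)

# Straight-line path seen by the stationary observer: ball released at the
# 12:00 position (0, R) and moving toward the center at speed u.
x = np.zeros_like(t)
y = R - u * t

# Rotating (camera-frame) coordinates; the axes coincide at t = 0.
c, s = np.cos(omega * t), np.sin(omega * t)
x_rot = x * c - y * s
y_rot = x * s + y * c

# Print a few points of the curved path seen in the rotating frame.
for xr, yr in list(zip(np.round(x_rot, 2), np.round(y_rot, 2)))[::10]:
    print(xr, yr)
```

Plotting (x_rot, y_rot) reproduces the kind of curved trajectory shown in the right-hand panels of the figures, even though the inertial-frame path is a straight line.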
Visualization of the Coriolis effect
To demonstrate the Coriolis effect, a parabolic turntable can be used. On a flat turntable, the inertia of a co-rotating object would force it off the edge. But if the surface of the turntable has the correct parabolic bowl shape and is rotated at the correct rate, the force components shown in Figure 10 are arranged so the component of gravity tangential to the bowl surface will exactly equal the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation.
Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in Figure 11. In the left panel of Figure 11, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom ) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared to analysis or observation of elliptical motion in the inertial frame.
Because this reference frame rotates several times a minute, rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger, and so easier to observe on small time and spatial scales, than is the Coriolis acceleration caused by the rotation of the Earth.
In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle into a spheroidal shape such that the normal force, the gravitational force, and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.)
The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum.
Length scales and the Rossby number
The time, space and velocity scales are important in determining the importance of the Coriolis effect. Whether rotation is important in a system can be determined by its Rossby number, which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, f, and the length scale, L, of the motion: Ro = U / (f L).
The Rossby number is the ratio of centrifugal to Coriolis accelerations. A small Rossby number signifies a system which is strongly affected by Coriolis forces, and a large Rossby number signifies a system in which centrifugal forces dominate. For example, in tornadoes, the Rossby number is large, in low-pressure systems it is low and in oceanic systems it is of the order of unity. As a result, in tornadoes the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems, centrifugal force is negligible and balance is between Coriolis and pressure forces. In the oceans all three forces are comparable.
An atmospheric system moving at U = 10 m/s and occupying a spatial distance of L = 1000 km has a Rossby number of approximately 0.1. A man playing catch may throw the ball at U = 30 m/s in a garden of length L = 50 m. The Rossby number in this case would be about 30 / (10^−4 × 50) = 6000. Needless to say, one does not worry about which hemisphere one is in when playing catch in the garden. However, an unguided missile obeys exactly the same physics as a baseball, but may travel far enough and be in the air long enough to notice the effect of Coriolis. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the southern hemisphere landed to the left.) In fact, it was this effect that first got the attention of Coriolis himself.
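These estimates are simple enough to script. A minimal Python helper (our own sketch; the value f = 10^−4 s^−1 is the typical mid-latitude figure used above) reproduces the two cases:

```python
def rossby_number(U, L, f=1e-4):
    """Rossby number Ro = U / (f * L), with f the Coriolis parameter in 1/s."""
    return U / (f * L)

# Synoptic weather system: U = 10 m/s over L = 1000 km
print("weather system:", rossby_number(10.0, 1.0e6))    # ~0.1
# Game of catch in the garden: U = 30 m/s over L = 50 m
print("game of catch :", rossby_number(30.0, 50.0))     # ~6000
```

Small values (much less than 1) indicate rotation-dominated flow; large values indicate that the Coriolis force can be neglected.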
Draining in bathtubs and toilets
A misconception in popular culture is that water in bathtubs or toilets always drains in one direction in the Northern Hemisphere, and in the other direction in the Southern Hemisphere as a consequence of the Coriolis effect. This idea has been perpetuated by several television programs, including an episode of The Simpsons and one of The X-Files. In addition, several science broadcasts and publications (including at least one college-level physics textbook) have made this incorrect statement.
The Rossby number can also tell us about the bathtub. If the length scale of the tub is about L = 1 m, and the water moves towards the drain at about U = 60 cm/s, then the Rossby number is about 6000. Thus, the bathtub is, in terms of scales, much like a game of catch, and rotation is unlikely to be important. However, if the experiment is very carefully controlled to remove all other forces from the system, rotation can play a role in bathtub dynamics. An article in the British Journal of Fluid Mechanics in the 1930s describes this. The key is to put a few drops of ink into the bathtub water and observe when the ink stops swirling, meaning that viscosity has dissipated the water's initial vorticity (its curl, ∇ × v). Then, if the plug is extracted ever so slowly, so as not to introduce any additional vorticity, the tub will empty with a counterclockwise swirl in England.
Some sources that incorrectly attribute draining direction to the Coriolis force also get the direction wrong. If the Coriolis force were the dominant factor, drain vortices would spin counterclockwise in the northern hemisphere and clockwise in the southern.
In reality the Coriolis effect is a few orders of magnitude smaller than various random influences on drain direction, such as the geometry of the container and the direction in which water was initially added to it. Most toilets flush in only one direction, because the toilet water flows into the bowl at an angle. If water shot into the basin from the opposite direction, the water would spin in the opposite direction.
When the water is being drawn towards the drain, the radius of its rotation around the drain decreases, so its rate of rotation increases from the low background level to a noticeable spin in order to conserve its angular momentum (the same effect as ice skaters bringing their arms in to spin faster). As shown by Ascher Shapiro in a 1961 educational video (Vorticity, Part 1), this effect can indeed reveal the influence of the Coriolis force on drain direction, but only under carefully controlled laboratory conditions. In a large, circular, symmetrical container (ideally over 1 m in diameter and conical), still water (whose residual motion is so small that, over the course of a day, displacements are small compared to the size of the container) escaping through a very small hole will drain in a cyclonic fashion: counterclockwise in the Northern hemisphere and clockwise in the Southern hemisphere (the same direction as the Earth rotates with respect to the corresponding poles).
Coriolis effects in meteorology
Perhaps the most important instance of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and ocean science, it is convenient to use a rotating frame of reference where the Earth is stationary. The fictitious centrifugal and Coriolis forces must then be introduced. Their relative importance is determined by the Rossby number. Tornadoes have a high Rossby number, so Coriolis forces are unimportant, and are not discussed here. As discussed next, low-pressure areas are phenomena where Coriolis forces are significant.
Flow around a low-pressure area
If a low-pressure area forms in the atmosphere, air will tend to flow in towards it, but will be deflected perpendicular to its velocity by the Coriolis acceleration. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure.
Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet fluid would flow along the straightest possible line, quickly eliminating pressure gradients. Note that the geostrophic balance is thus very different from the case of "inertial motions" (see below) which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.
This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is counterclockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region.
An air or water mass moving with speed v subject only to the Coriolis force travels in a circular trajectory called an 'inertial circle'. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed and performs a complete circle with frequency f. The magnitude of the Coriolis force also determines the radius of this circle: R = v / f.
On the Earth, a typical mid-latitude value for f is 10^−4 s^−1; hence for a typical atmospheric speed of 10 m/s the radius is 100 km, with a period (2π/f) of about 17 hours. In the ocean, where a typical speed is closer to 10 cm/s, the radius of an inertial circle is 1 km. These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anti-clockwise in the southern hemisphere.
If the rotating system is a parabolic turntable, then f is constant and the trajectories are exact circles. On a rotating planet, f varies with latitude and the paths of particles do not form exact circles. Since the parameter f varies as the sine of the latitude, the radius of the oscillations associated with a given speed is smallest at the poles (latitude = ±90°) and increases toward the equator.
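A small Python sketch (an illustration of the relations above, not code from the article; the latitude is chosen so that f is close to 10^−4 s^−1) computes the inertial-circle radius R = v/f and the period 2π/f:

```python
import math

OMEGA_EARTH = 7.292e-5   # Earth's rotation rate (rad/s)

def inertial_circle(speed, latitude_deg):
    """Radius (m) and period (s) of an inertial circle at a given latitude."""
    f = 2.0 * OMEGA_EARTH * math.sin(math.radians(latitude_deg))
    radius = speed / f
    period = 2.0 * math.pi / f
    return radius, period

r, T = inertial_circle(10.0, 43.3)     # f is about 1e-4 1/s at this latitude
print("radius: %.0f km, period: %.1f h" % (r / 1e3, T / 3600.0))
```

For a 10 cm/s ocean current at the same latitude the radius drops to about 1 km, matching the figure quoted above.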
Other terrestrial effects
The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance.
Other aspects of the Coriolis effect
The practical impact of the Coriolis effect is mostly caused by the horizontal acceleration component produced by horizontal motion.
There are other components of the Coriolis effect. Eastward-traveling objects will be deflected upwards (feel lighter), while westward-traveling objects will be deflected downwards (feel heavier). This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by this effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure mean that it is generally unimportant dynamically.
In addition, objects traveling upwards or downwards will be deflected to the west or east respectively. This effect is also the greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect.
Coriolis effects in other areas
Coriolis flow meter
A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle, introduced in 1977 by Micro Motion Inc., involves inducing a vibration of the tube through which the fluid passes. The vibration, though it is not completely circular, provides the rotating reference frame which gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid.
In polyatomic molecules, the molecule motion can be described by a rigid body rotation and internal vibration of atoms about their equilibrium position. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects will therefore be present and will cause the atoms to move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels.
The Coriolis effects became important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km (75 mi).
Flies (Diptera) and moths (Lepidoptera) utilize the Coriolis effect when flying: their halteres, or antennae in the case of moths, oscillate rapidly and are used as vibrational gyroscopes. See Coriolis effect in insect stability. In this context, the Coriolis effect has nothing to do with the rotation of the Earth.
- ↑ William Menke and Dallas Abbott. 1990. Geophysical Theory. (New York, NY: Columbia University Press. ISBN 0231067925)
- ↑ Newsletter, Department of Physics and Astronomy. University of Canterbury. Retrieved December 15, 2008.
- ↑ David Morin. 2008. Introduction to classical mechanics: with problems and solutions. (Cambridge, UK: Cambridge University Press. ISBN 0521876222).
- ↑ Sheldon M. Ebenholtz, 2001. Oculomotor Systems and Perception. (Cambridge, UK: Cambridge University Press. ISBN 0521804590).
- ↑ George Mather. 2006. Foundations of perception. (Hove, UK: Taylor & Francis. ISBN 0863778356).
- ↑ Here the description "radially inward" means "toward the axis of rotation." That direction is not toward the center of curvature of the path, however, which is the direction of the true centripetal force. Hence, the quotation marks on "centripetal".
- ↑ George E. Owen. (1964) 2003. Fundamentals of Scientific Mathematics. (New York, NY: Harper & Row; Mineola, NY: Dover Publications. ISBN 0486428087).
- ↑ Morton Tavel. 2002. Contemporary Physics and the Limits of Knowledge. (New Brunswick, NJ: Rutgers University Press. ISBN 0813530776).
- ↑ James R. Odgen and M. Fogiel. 1995. High School Earth Science Tutor. (Piscataway, NJ: Research & Education Assoc. ISBN 0878919759).
- ↑ James Greig McCully. 2006. Beyond the moon: A Conversational, Common Sense Guide to Understanding the Tides. (Hackensack, NJ: World Scientific. ISBN 9812566430).
- ↑ Jerry H. Ginsberg, 2007. Engineering Dynamics. (Cambridge, UK: Cambridge University Press. ISBN 0521883032).
- ↑ When a container of fluid is rotating on a turntable, the surface of the fluid naturally assumes the correct parabolic shape. This fact may be exploited to make a parabolic turntable by using a fluid that sets after several hours, such as a synthetic resin.
- ↑ For a video of the Coriolis effect on such a parabolic surface, see Brian Fiedler School of Meteorology at the University of Oklahoma. Retrieved December 15, 2008.
- ↑ 14.0 14.1 John Marshall and R. Alan Plumb. 2007. Atmosphere, Ocean, and Climate Dynamics: An Introductory Text. (London, UK: Academic Press. ISBN 0125586914).
- ↑ Lakshmi H. Kantha and Carol Anne Clayson. 2000. Numerical Models of Oceans and Oceanic Processes. (London, UK: Academic Press. ISBN 0124340687).
- ↑ Stephen D. Butz, 2002. Science of Earth Systems. (Clifton Park, NY: Thomson Delmar Learning. ISBN 0766833917).
- ↑ 17.0 17.1 James R. Holton, 2004. An Introduction to Dynamic Meteorology. (Burlington, MA: Elsevier Academic Press. ISBN 0123540151).
- ↑ Donald E. Carlucci and Sidney S. Jacobson. 2007. Ballistics: Theory and Design of Guns and Ammunition. (Boca Raton, FL: CRC Press. ISBN 1420066188).
- ↑ Bad Coriolis. Penn State College of Earth and Mineral Sciences. Retrieved December 15, 2008.
- ↑ Who Knew? The No-Spin Zone. Berkeley Science Review. Retrieved December 15, 2008.
- ↑ Flush Bosh. Snopes. Retrieved December 15, 2008.
- ↑ Roger Graham Barry and Richard J. Chorley. 2003. Atmosphere, Weather and Climate. (London, UK: Routledge. ISBN 0415271711).
- ↑ Cloud Spirals and Outflow in Tropical Storm Katrina. Earth Observatory, NASA. Retrieved December 15, 2008.
- ↑ Antennae as Gyroscopes. Science 315(9):771.
- ↑ W.C. Wu, R.J. Wood, and R.S. Fearing. 2002. Halteres for the micromechanical flying insect. Robotics and Automation, 2002. Proceedings. ICRA '02. IEEE International Conference. 1:60-65. (ISBN 0780372727). Retrieved December 15, 2008.
Physics and meteorology
- Barry, Roger Graham, and Richard J. Chorley. 2003. Atmosphere, Weather and Climate. London, UK: Routledge. ISBN 0415271711.
- Butz, Stephen D. 2002. Science of Earth Systems. Clifton Park, NY: Thomson Delmar Learning. ISBN 0766833917.
- Carlucci, Donald E., and Sidney S. Jacobson. 2007. Ballistics: Theory and Design of Guns and Ammunition. Boca Raton, FL: CRC Press. ISBN 1420066188.
- Coriolis, G.G., 1832. Mémoire sur le principe des forces vives dans les mouvements relatifs des machines. Journal de l'école Polytechnique 13:268–302.
- Coriolis, G.G., 1835. Mémoire sur les équations du mouvement relatif des systèmes de corps. Journal de l'école Polytechnique 15:142–154. Retrieved December 15, 2008.
- Durran, D.R. 1993. Is the Coriolis force really responsible for the inertial oscillation? Bulletin of the American Meteorological Society 74:2179–2184.
- Durran, D.R., and S.K. Domonkos. 1996. An apparatus for demonstrating the inertial oscillation. Bulletin of the American Meteorological Society 77:557–559.
- Ebenholtz, Sheldon M. 2001. Oculomotor Systems and Perception. Cambridge, UK: Cambridge University Press. ISBN 0521804590.
- Ehrlich, Robert. 1990. Turning the World Inside Out and 174 Other Simple Physics Demonstrations. Princeton, NJ: Princeton University Press. ISBN 0691023956.
- Gill, A.E. 1982. Atmosphere-Ocean dynamics. New York, NY: Academic Press. ISBN 9780122835223.
- Holton, James R. 2004. An Introduction to Dynamic Meteorology. Burlington, MA: Elsevier Academic Press. ISBN 0123540151.
- Kageyama, Akira, and Mamoru Hyodo. Eulerian derivation of the Coriolis force. arxiv.org. Retrieved December 15, 2008.
- Kantha, Lakshmi H., and Carol Anne Clayson. 2000. Numerical Models of Oceans and Oceanic Processes. London, UK: Academic Press. ISBN 0124340687.
- Marion, Jerry B. 1970. Classical Dynamics of Particles and Systems. New York, NY: Academic Press. ISBN 9780124722521.
- Marshall, John, and R. Alan Plumb. 2007. Atmosphere, Ocean, and Climate Dynamics: An Introductory Text. London, UK: Academic Press. ISBN 0125586914.
- Mather, George. 2006. Foundations of perception. Hove, UK: Taylor & Francis. ISBN 0863778356.
- McCully, James Greig. 2006. Beyond the moon: A Conversational, Common Sense Guide to Understanding the Tides. Hackensack, NJ: World Scientific. ISBN 9812566430.
- Menke, William, and Dallas Abbott. 1990. Geophysical Theory. New York, NY: Columbia University Press. ISBN 0231067925.
- Morin, David. 2008. Introduction to classical mechanics: with problems and solutions. Cambridge, UK: Cambridge University Press. ISBN 0521876222.
- Owen, George E. (1964) 2003. Fundamentals of Scientific Mathematics. New York, NY: Harper & Row; Mineola, NY: Dover Publications. ISBN 0486428087.
- Persson, A. 1998. How do we Understand the Coriolis Force? Bulletin of the American Meteorological Society 79:1373–1385.
- Phillips, Norman A. 2000. An Explication of the Coriolis Effect. Bulletin of the American Meteorological Society 81(2):299–303.
- Price, James F. A Coriolis tutorial. whoi.edu. Retrieved December 15, 2008.
- Symon, Keith. 1971. Mechanics. Reading, MA: Addison-Wesley.
- Tavel, Morton. 2002. Contemporary Physics and the Limits of Knowledge. New Brunswick, NJ: Rutgers University Press. ISBN 0813530776.
- Grattan-Guinness, I., ed. 1994. Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, Vols. I and II. London, UK; New York, NY: Routledge.
- Khrgian, A. 1970. Meteorology—A Historical Survey, Vol. 1. Jerusalem, IL: Israel Program for Scientific Translations.
- Kuhn, T.S. 1977. The Essential Tension, Selected Studies in Scientific Tradition and Change. Chicago, IL: University of Chicago Press. ISBN 9780226458052.
- Kutzbach, G. 1979. The Thermal Theory of Cyclones. A History of Meteorological Thought in the Nineteenth Century. Boston, MA: Amer. Meteor. Soc. ISBN 9780933876484.
- The definition of the Coriolis effect from the Glossary of Meteorology. Retrieved December 15, 2008.
- The Coriolis Effect PDF-file. 17 pages. A general discussion by Anders Persson of various aspects of the coriolis effect, including Foucault's Pendulum and Taylor columns. Retrieved December 15, 2008.
- Anders Persson The Coriolis Effect: Four centuries of conflict between common sense and mathematics, Part I: A history to 1885 History of Meteorology 2 (2005). Retrieved December 15, 2008.
- 10 Coriolis Effect Videos and Games- from the About.com Weather Page. Retrieved December 15, 2008.
- Coriolis Force - from ScienceWorld. Retrieved December 15, 2008.
- The Coriolis Effect: An Introduction. Details of the causes of prevailing wind patterns. Targeted towards ages 5 to 18. Retrieved December 15, 2008.
- Coriolis Effect and Drains An article from the NEWTON web site hosted by the Argonne National Laboratory. Retrieved December 15, 2008.
- Do bathtubs drain counterclockwise in the Northern Hemisphere? by Cecil Adams. Retrieved December 15, 2008.
- Bad Coriolis. An article uncovering misinformation about the Coriolis effect. By Alistair B. Fraser, Emeritus Professor of Meteorology at Pennsylvania State University. Retrieved December 15, 2008.
- Getting Around The Coriolis Force, an intuitive explanation. Retrieved December 15, 2008.
- Observe an animation of the Coriolis effect over Earth's surface. Retrieved December 15, 2008.
- Vincent Mallette The Coriolis Force @ INWIT. Retrieved December 15, 2008.
- Catalog of Coriolis videos. Retrieved December 15, 2008.
- NASA notes. Retrieved December 15, 2008.
Effects of Nuclear Explosions: Mechanisms of Damage and Injury
Thermal Damage and Incendiary Effects
Thermal damage from nuclear explosions arises from the intense thermal (heat) radiation produced by the fireball. The thermal radiation (visible and infrared light) falls on exposed surfaces and is wholly or partly absorbed. The radiation lasts from about a tenth of a second, to several seconds depending on bomb yield (it is longer for larger bombs). During that time its intensity can exceed 1000 watts/cm^2 (the maximum intensity of direct sunlight is 0.14 watts/cm^2). For a rough comparison, the effect produced is similar to direct exposure to the flame of an acetylene torch.
The heat is absorbed by the opaque surface layer of the material on which it falls, which is usually a fraction of a millimeter thick. Naturally dark materials absorb more heat than light colored or reflective ones. The heat is absorbed much faster than it can be carried down into the material through conduction, or removed by reradiation or convection, so very high temperatures are produced in this layer almost instantly. Surface temperatures can exceed 1000 degrees C close to the fireball. Such temperatures can cause dramatic changes to the material affected, but they do not penetrate in very far.
More total energy is required to inflict a given level of damage for a larger bomb than a smaller one since the heat is emitted over a longer period of time, but this is more than compensated for by the increased thermal output. The thermal damage for a larger bomb also penetrates further due to the longer exposure.
Thermal radiation damage depends very strongly on weather conditions. Cloud cover, smoke, or other obscuring material in the air can considerably reduce effective damage ranges over clear air conditions.
For all practical purposes, the emission of thermal radiation by a bomb is complete by the time the shock wave arrives. Regardless of yield, this generalization is only violated in the area of total destruction around a nuclear explosion where 100% mortality would result from any one of the three damage effects.
Incendiary effects refer to anything that contributes to the occurrence of fires after the explosion, which is a combination of the effects of thermal radiation and blast.
The result of very intense heating of skin is to cause burn injuries. The burns caused by the sudden intense thermal radiation from the fireball are called "flash burns". The more thermal radiation absorbed, the more serious the burn. The table below indicates the amount of thermal radiation required to cause different levels of injury, and the maximum ranges at which they occur, for different yields of bombs. The unit of heat used is the gram-calorie, equal to 4.2 joules (4.2 watts for 1 sec). Skin color significantly affects susceptibility, light skin being less prone to burns. The table assumes medium skin color.

Severity      20 Kilotons             1 Megaton              20 Megatons
1st Degree    2.5 cal/cm^2 (4.3 km)   3.2 cal/cm^2 (18 km)   5 cal/cm^2 (52 km)
2nd Degree    5 cal/cm^2 (3.2 km)     6 cal/cm^2 (14.4 km)   8.5 cal/cm^2 (45 km)
3rd Degree    8 cal/cm^2 (2.7 km)     10 cal/cm^2 (12 km)    12 cal/cm^2 (39 km)
Convenient scaling laws allow calculation of burn-effect ranges for any yield; they take the same general power-law form (range = Y^exponent * constant) as the blast and radiation scaling laws given later in this document.
First degree flash burns are not serious, no tissue destruction occurs. They are characterized by immediate pain, followed by reddening of the skin. Pain and sensitivity continues for some minutes or hours, after which the affected skin returns to normal without further incident.
Second degree burns cause damage to the underlying dermal tissue, killing some portion of it. Pain and redness is followed by blistering within a few hours as fluids collect between the epidermis and damaged tissue. Sufficient tissue remains intact however to regenerate and heal the burned area quickly, usually without scarring. Broken blisters provide possible infection sites prior to healing.
Third degree burns cause tissue death all the way through the skin, including the stem cells required to regenerate skin tissue. The only way a 3rd degree burn can heal is by skin regrowth from the edges, a slow process that usually results in scarring, unless skin grafts are used. Before healing 3rd degree burns present serious risk of infection, and can cause serious fluid loss. A 3rd degree burn over 25% of the body (or more) will typically precipitate shock in minutes, which itself requires prompt medical attention.
Even more serious burns are possible, which have been classified as fourth (even fifth) degree burns. These burns destroy tissue below the skin: muscle, connective tissue etc. They can be caused by thermal radiation exposures substantially in excess of those in the table for 3rd degree burns. Many people close to the hypocenter of the Hiroshima bomb suffered these types of burns. In the immediate vicinity of ground zero the thermal radiation exposure was 100 cal/cm^2, some fifteen times the exposure required for 3rd degree burns, most of it within the first 0.3 seconds (which was the arrival time of the blast wave). This is sufficient to cause exposed flesh to flash into steam, flaying exposed body areas to the bone.
At the limit of the range for 3rd degree burns, the time lapse between suffering burns and being hit by the blast wave varies from a few seconds for low kiloton explosions to a minute of so for high megaton yields.
Despite the extreme intensity of thermal radiation, and the extraordinary surface temperatures that occur, it has less incendiary effect than might be supposed. This is mostly due to its short duration, and the shallow penetration of heat into affected materials. The extreme heating can cause pyrolysis (the charring of organic material, with the release of combustible gases), and momentary ignition, but it is rarely sufficient to cause self-sustained combustion. This occurs only with tinder-like, or dark, easily flammable materials: dry leaves, grass, old newspaper, thin dark flammable fabrics, tar paper, etc. The incendiary effect of the thermal pulse is also substantially affected by the later arrival of the blast wave, which usually blows out any flames that have already been kindled. Smoldering material can cause reignition later however.
The major incendiary effect of nuclear explosions is caused by the blast wave. Collapsed structures are much more vulnerable to fire than intact ones. The blast reduces many structures to piles of kindling, the many gaps opened in roofs and walls act as chimneys, gas lines are broken open, storage tanks for flammable materials are ruptured. The primary ignition sources appear to be flames and pilot lights in heating appliances (furnaces, water heaters, stoves, ovens, etc.). Smoldering material from the thermal pulse can be very effective at igniting leaking gas.
Although the ignition sources are probably widely scattered, a number of factors promote their spread into mass fires. The complete suppression of fire-fighting efforts is extremely important. Another is that the blast scatters combustible material across the fire breaks that normally exist (streets, yards, fire lanes, etc.).
The effectiveness of building collapse, accompanied by the disruption of fire fighting, in creating mass fires can be seen in the San Francisco earthquake (1906), the Tokyo-Yokohama earthquake (1923), and the recent Kobe earthquake (1995). In these disasters there was no thermal radiation to ignite fires, and the scattering of combustible materials did not occur, but huge fires still resulted. In San Francisco and Tokyo-Yokohama these fires were responsible for most of the destruction that occurred.
In Hiroshima the fires developed into a true firestorm. This is an extremely intense fire that produces a rapidly rising column of hot air over the fire area, in turn powerful winds are generated which blow in to the fire area, fanning and feeding the flames. The fires continue until all combustible material is exhausted. Firestorms develop from multiple ignition sources spread over a wide area that create fires which coalesce into one large fire. Temperatures in firestorm areas can reach many hundreds of degrees, carbon monoxide reaches lethal levels, few people who see the interior of a firestorm live to tell about it. Firestorms can melt roads, cars, and glass. They can boil water in lakes and rivers, and cook people to death in buried bomb shelters. The in-blowing winds can reach gale force, but they also prevent the spread of the fires outside of the area in which the firestorm initially develops. The firestorm in Hiroshima began only about 20 minutes after the bombing.
Nagasaki did not have a firestorm, instead it had a type of mass fire called a conflagration. This is a less intense type of fire, it develops and burns more slowly. A conflagration can begin in multiple locations, or only one. Conflagrations can spread considerable distances from their origins. The fires at Nagasaki took about 2 hours to become well established, and lasted 4-5 hours.
The brightness and thermal output of a nuclear explosion presents an obvious source of injury to the eye. Injury to the cornea through surface heating, and injury to the retina are both possible risks. Surprisingly, very few cases of injury were noted in Japan. A number of factors acted to reduce the risk. First, eye injury occurs when vision is directed towards the fireball. People spend relatively little time looking up at the sky so only a very small portion of the population would have their eyes directed at the fireball at the time of burst. Second, since the bomb exploded in bright daylight the eye pupil would be expected to be small.
About 4% of the population within the 3rd degree burn zone at Hiroshima reported keratitis, pain and inflammation of the cornea, which lasted several hours to several days. No other corneal damage was noted.
The most common eye injury was flashblindness, a temporary condition in which the visual pigment of retina is bleached out by the intense light. Vision is completely recovered as the pigment is regenerated, a process that takes several seconds to several minutes. This can cause serious problems though in carrying out emergency actions, like taking cover from the oncoming blast wave.
Retinal injury is the most far-reaching injury effect of nuclear explosions, but it is relatively rare since the eye must be looking directly at the detonation. Retinal injury results from burns in the area of the retina where the fireball image is focused. The brightness per unit area of a fireball does not diminish with distance (except for the effects of haze), the apparent fireball size simply gets smaller. Retinal injury can thus occur at any distance at which the fireball is visible, though the affected area of the retina gets smaller as range increases. The risk of injury is greater at night since the pupil is dilated and admits more light. For explosions in the atmosphere of 100 kt and up, the blink reflex protects the retina from much of the light.
Blast Damage and Injury
Blast damage is caused by the arrival of the shock wave created by the nuclear explosion. Shock waves travel faster than sound, and cause a virtually instantaneous jump in pressure at the shock front. The air immediately behind the shock front is accelerated to high velocities and creates a powerful wind. The wind in turn, creates dynamic pressure against the side of objects facing the blast. The combination of the pressure jump (called the overpressure)and the dynamic pressure causes blast damage.
Both the overpressure and dynamic pressure jump immediately to their peak values when the shock wave arrives. They then decay over a period ranging from a few tenths of a second to several seconds, depending on the strength of the blast and the yield. Following this there is a longer period of weaker negative pressure before the atmospheric conditions return to normal. The negative pressure has little significance as far as causing damage or injury is concerned. A given pressure is more destructive from a larger bomb, due to its longer duration.
There is a definite relationship between the overpressure and the dynamic pressure. The overpressure and dynamic pressure are equal at 70 psi, and the wind speed is then 1.5 times the speed of sound. Below an overpressure of 70 psi, the dynamic pressure is less than the overpressure; above 70 psi it exceeds the overpressure. Since the relationship is fixed, it is convenient to use the overpressure alone as a yardstick for measuring blast effects. At 20 psi overpressure the wind speed is still 500 mph, higher than any tornado wind.
As a general guide, city areas are completely destroyed (with massive loss of life) by overpressures of 5 psi, with heavy damage extending out at least to the 3 psi contour. The dynamic pressure is much less than the overpressure at blast intensities relevant for urban damage, although at 5 psi the wind speed is still 162 mph - close to the peak wind speeds of the most intense hurricanes.
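The relationship referred to above follows the standard ideal-gas shock-front relations. The Python sketch below uses those textbook relations rather than formulas given in this document, but it reproduces the figures quoted here (equality of the two pressures near 70 psi, roughly 160 mph winds at 5 psi, and about 500 mph at 20 psi):

```python
import math

P0 = 14.7          # ambient atmospheric pressure (psi)
C0 = 340.0         # ambient speed of sound (m/s)

def dynamic_pressure(p):
    """Peak dynamic pressure (psi) behind an ideal-gas shock of overpressure p (psi)."""
    return 2.5 * p * p / (7.0 * P0 + p)

def peak_wind(p):
    """Peak wind speed (m/s) behind the shock front of overpressure p (psi)."""
    return (5.0 * p / (7.0 * P0)) * C0 / math.sqrt(1.0 + 6.0 * p / (7.0 * P0))

for p in (5.0, 20.0, 70.0):
    print("%5.0f psi: q = %5.1f psi, wind = %4.0f mph"
          % (p, dynamic_pressure(p), peak_wind(p) * 2.237))
```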
Humans are actually quite resistant to the direct effect of overpressure. Pressures of over 40 psi are required before lethal effects are noted. This pressure resistance makes it possible for unprotected submarine crews to escape from emergency escape locks at depths as great as one hundred feet (the record for successful escape is actually an astonishing 600 feet, representing a pressure of 300 psi). Loss of eardrums can occur, but this is not a life threatening injury.
The danger from overpressure comes from the collapse of buildings that are generally not as resistant. The violent implosion of windows and walls creates a hail of deadly missiles, and the collapse of the structure above can crush or suffocate those caught inside.
The dynamic pressure can cause injury by hurling large numbers of objects at high speed. Urban areas contain many objects that can become airborne, and the destruction of buildings generates many more. Serious injury or death can also occur from impact after being thrown through the air.
Blast effects are most dangerous in built-up areas due to the large amounts of projectiles created, and the presence of obstacles to be hurled against.
The blast also magnifies thermal radiation burn injuries by tearing away severely burned skin. This creates raw open wounds that readily become infected.
These many different effects make it difficult to provide a simple rule of thumb for assessing the magnitude of harm produced by different blast intensities. A general guide is given below:

 1 psi   Window glass shatters. Light injuries from fragments occur.
 3 psi   Residential structures collapse. Serious injuries are common, fatalities may occur.
 5 psi   Most buildings collapse. Injuries are universal, fatalities are widespread.
10 psi   Reinforced concrete buildings are severely damaged or demolished. Most people are killed.
20 psi   Heavily built concrete buildings are severely damaged or demolished. Fatalities approach 100%.

Suitable scaling constants for the equation

r_blast = Y^0.33 * constant_bl

are:

constant_bl_1_psi  = 2.2
constant_bl_3_psi  = 1.0
constant_bl_5_psi  = 0.71
constant_bl_10_psi = 0.45
constant_bl_20_psi = 0.28

where Y is in kilotons and range is in km.
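The scaling law and constants above translate directly into a small calculator. The Python sketch below is our own; it simply applies r_blast = Y^0.33 * constant_bl with the constants listed:

```python
BLAST_CONSTANTS_KM = {    # r = Y^0.33 * constant, Y in kilotons, r in km
    1: 2.2, 3: 1.0, 5: 0.71, 10: 0.45, 20: 0.28,
}

def blast_radius(yield_kt, psi):
    """Range (km) at which the given peak overpressure (psi) occurs."""
    return (yield_kt ** 0.33) * BLAST_CONSTANTS_KM[psi]

for psi in (1, 3, 5, 10, 20):
    print("%2d psi: 20 kt -> %.1f km, 1 Mt -> %.1f km"
          % (psi, blast_radius(20, psi), blast_radius(1000, psi)))
```

For example, it gives roughly 1.9 km for the 5 psi radius of a 20 kt explosion and about 6.9 km for a 1 Mt explosion.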
Ionizing radiation produces injury primarily through damage to the chromosomes. Since genetic material makes up a very small portion of the mass of a cell, the damage rarely occurs from the direct impact of ionizing radiation on a genetic molecule. Instead the damage is caused by the radiation breaking up other molecules and forming chemically reactive free radicals or unstable compounds. These reactive chemical species then damage DNA and disrupt cellular chemistry in other ways - producing immediate effects on active metabolic and replication processes, and long-term effects by latent damage to the genetic structure.
Cells are capable of repairing a great deal of genetic damage, but the repairs take time and the repair machinery can be overwhelmed by rapid repeated injuries. If a cell attempts to divide before sufficient repair has occurred, the cell division will fail and both cells will die. As a consequence, the tissues that are most sensitive to radiation injury are ones that are undergoing rapid division. Another result is that the effects of radiation injury depend partly on the rate of exposure. Repair mechanisms can largely offset radiation exposures that occur over a period of time. Rapid exposure to a sufficiently large radiation dose can thus cause acute radiation sickness, while a longer exposure to the same dose might cause none.
By far the most sensitive are bone marrow and lymphatic tissues - the blood and immune system forming organs of the body. Red blood cells, which provide oxygen to the body, and white blood cells, which provide immunity to infection, only last a few weeks or months in the body and so must be continually replaced. The gastrointestinal system is also sensitive, since the lining of the digestive tract undergoes constant replacement. Although they are not critical for health, hair follicles also undergo continual cell division resulting in radiation sickness' most famous symptom - hair loss. The tissues least sensitive to radiation are those that never undergo cell division (i.e. the nervous system).
This also means that children and infants are more sensitive to injury than adults, and that fetuses are most sensitive of all.
If the individual survives, most chromosome damage is eventually repaired and the symptoms of radiation illness disappear. The repair is not perfect however. Latent defects can show up years or decades later in their effects on reproductive cells, and in the form of cancer. These latent injuries are a very serious concern and can shorten life by many years. They are the sole form of harm from low level radiation exposure.
Units of Measurement for Radiation Exposure
Three units of measurement have been commonly used for expressing radiation exposure: the roentgen (R), the rad, and the rem (the "three r's" of radiation measurement). In the scientific literature these are dropping out of use in favor of the SI (Système International) units, the gray (Gy) and the sievert (Sv). Each of the "three r's" measures something different.

A roentgen measures the amount of ionizing energy, in the form of energetic photons (gamma rays and x-rays), to which an organism is exposed. This unit is the oldest of the three and is defined more for the convenience of radiation measurement than for interpreting the effects of radiation on living organisms.

Of more interest is the rad, since it includes all forms of ionizing radiation and, in addition, measures the dose that is actually absorbed by the organism. A rad is defined as the absorption of 100 ergs per gram of tissue (or 0.01 J/kg). The gray also measures absorbed dose; one gray equals 100 rads.

The rem likewise covers all absorbed ionizing radiation, and also takes into account the relative effect that different types of radiation produce. The measure of effect for a given radiation is its Relative Biological Effectiveness (RBE). A rem dose is calculated by multiplying the dose in rads for each type of radiation by the appropriate RBE, then adding them all up. The sievert is similar to the rem, but is derived from the gray instead of the rad, and uses a somewhat simplified system of measuring biological potency, the quality factor (Q). One sievert is roughly equal to 100 rems. The rem and the sievert are the most meaningful units for measuring and discussing the effects of radiation injury.

Type of Radiation                RBE      Q
Gamma rays/X-rays                1        1
Beta Particles                   1        1
Alpha Particles                  10-20    20 (ingested emitter)
Neutrons (fast)                  (varies by effect, below)    10
  Immediate effects              1
  Delayed cataract formation     4-6
  Cancer effect                  10
  Leukemia effect                20
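As a small worked illustration of the rem calculation described above (the mixture of radiation types, the RBE values taken from the table, and the doses are all illustrative choices, not a procedure prescribed by this document):

```python
# Equivalent dose in rems from absorbed doses in rads, weighted by RBE.
RBE = {
    "gamma": 1.0,
    "beta": 1.0,
    "alpha_ingested": 20.0,     # upper end of the 10-20 range in the table
    "neutron_cancer": 10.0,     # fast-neutron RBE for cancer effects
}

def dose_rems(doses_rads):
    """doses_rads: dict mapping radiation type to absorbed dose in rads."""
    return sum(dose * RBE[kind] for kind, dose in doses_rads.items())

mixed_exposure = {"gamma": 100.0, "neutron_cancer": 20.0}
print("equivalent dose: %.0f rems" % dose_rems(mixed_exposure))   # 100*1 + 20*10 = 300
```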
Types of Radiation Exposure
An important concept to understand is the distinction between _whole body doses_ and radiation exposures concentrated in particular organs. The radiation dose units described above are defined per unit weight of tissue. An exposure of 1000 rems can thus refer to an exposure of this intensity for the whole body, or for only a small part of it. The total absorbed radiation energy will be much less if only a small part of the body is affected, and the overall injury will be reduced.
Not all tissues are exposed equally even in whole body exposures. The body provides significant shielding to internal organs, so tissues located in the center of the body may receive doses that are only 30-50% of the nominal total body dose rate. For example there is a 50% chance of permanent female sterility if ovaries are exposed to 200 rems, but this internal exposure is only encountered with whole body doses of 400-600 rems.
Radiation exposures from nuclear weapons occur on three time scales: the prompt radiation delivered by the fireball at the time of the explosion; the intense radiation emitted by early fallout during the first hours and days afterward; and long-term exposure to delayed, widely dispersed fallout accumulating over months and years.
The effects of radiation exposure of usually divided into acute and latent effects. Acute effects typically result from rapid exposures, the effects show up within hours to weeks after a sufficient dose is absorbed. Latent effects take years to appear, even after exposure is complete.
Since the latent effects of radiation exposure are cumulative, and there does not appear to be any threshold exposure below which no risk is incurred, radiation safety standards have been set to minimize radiation exposure over time. Current standards are:

Occupational Exposure
  0.3 rem/wk   (whole body exposure)
  1.5 rem/yr   (whole body exposure for pregnant women)
  5 rem/yr     (whole body exposure)
  15 rem/yr    (eye tissue exposure)
  50 rem/yr    (limit for any tissue)
  200 rem      lifetime limit (whole body exposure)

Public Exposure
  0.5 rem/yr   (whole body exposure)
  5 rem/yr     (limit for any tissue)
The occupational exposure limits are likely to be reduced soon (if they have not been already).
The normal human annual radiation exposure varies considerably with location (elevation and surface mineral composition), and medical treatment. Typical values are 0.1 rems from natural radiation and 0.08 rems from medical x-rays, for a total of 0.18 rem/yr. In the US, Colorado has one of the highest natural backgrounds (0.25 rem) since high altitudes cause greater cosmic ray exposures, and granite rock formations contain uranium series radioisotopes. If natural radioisotopes are unusually concentrated, levels as high as 0.5-12 rems/yr have been recorded (some areas of Sri Lanka, Kerala India, and Brazil). This does not count indoor radon exposure which depends heavily on building design, but can easily exceed all other exposure sources combined in regions with high soil radon levels. This source has been known to cause lung exposures in the home of 100 rem/yr (a risk factor comparable to heavy smoking)!
Prompt Radiation Emission From Nuclear Explosions
Although the subject is complex, a simplified guide to estimating the prompt radiation exposure from nuclear explosions is given here. The following scaling law can be used to determine how the radius for a given dose varies with yield:
r_radiation = Y^0.19 * constant_rad

If Y is in kilotons, range is in meters, and the dose standard is 1000 rads then:
constant_rad_1000 = 700 m
This can then be scaled for distance by adjusting for attenuation with range using the table below. The table lists tenth-ranges, the distance over which the dose decreases (for greater distance) or increases (for shorter distance) by a factor of 10.

Yield     Tenth-range
1 kt      330 m
10 kt     440 m
100 kt    490 m
1 Mt      560 m
10 Mt     670 m
20 Mt     700 m

So, for example, to calculate the radiation dose for a 10 Mt bomb at 5000 m, we calculate:

dose = (1000 rads) / 10^[(5000 - (10000^0.19)*700)/670] = 35 rads
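The following C sketch applies this scaling law; the variable names are mine, and the tenth-range is simply taken from the table entry for the chosen yield rather than interpolated. It reproduces the 10 Mt / 5000 m example above (about 35 rads):

/* Sketch: prompt radiation dose versus range using the scaling law above. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double yield_kt    = 10000.0;  /* 10 Mt expressed in kilotons        */
    double range_m     = 5000.0;   /* distance from the burst in meters  */
    double tenth_range = 670.0;    /* from the table entry for 10 Mt     */

    double r_1000 = pow(yield_kt, 0.19) * 700.0;   /* radius of the 1000 rad dose */
    double dose   = 1000.0 / pow(10.0, (range_m - r_1000) / tenth_range);

    printf("1000 rad radius: %.0f m\n", r_1000);
    printf("dose at %.0f m: %.0f rads\n", range_m, dose);
    return 0;
}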
This guide assumes 100% fission yield for bombs <100 kt, and 50/50 fission/fusion for higher yields. Due to the enhanced radiation output of low-yield neutron bombs different factors need to be used:

constant_rad_1000 = 620 m
tenth-range = 385 m

Acute Radiation Sickness
This results from exposure to a large radiation dose to the whole body within a short period of time (no more than a few weeks). There is no sharp cutoff to distinguish acute exposures from chronic (extended) ones. In general, higher total doses are required to produce a given level of acute sickness for longer exposure times. Exposures received over a few days do not differ substantially from instantaneous ones, except that the onset of symptoms is correspondingly delayed or stretched out. Nuclear weapons can cause acute radiation sickness either from prompt exposure at the time of detonation, or from the intense radiation emitted by early fallout in the first few days afterward.
The effects of increasing exposures are described below. A notable characteristic of increasing doses is the non-linear nature of the effects. That is to say, a threshold exists below which observable effects are slight and reversible (about 300 rems), but as exposures rise above this level the possibility of mortality (death) begins and increases rapidly with dose. This is believed to be due in part to the saturation of cellular repair mechanisms.
The total energy absorbed by a 75 kg individual with a whole body exposure of 600 rads (fatal in most cases) is 450 joules. It is interesting to compare this to the kinetic energy of a .45 caliber bullet, which is about 900 joules.
A power law for scaling radiation effects for longer term exposures has been proposed in which the dose required for a given effect increases by t^0.26, where time is in weeks. For exposures of one week or less the effect of a rem of radiation is assumed to be constant. Thus an exposure capable of causing 50% mortality is 450 rems if absorbed in a week or less, but is about 1260 rems if it occurs over a year.
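A minimal C sketch of this duration scaling, assuming the t^0.26 law above (the function name is mine); it reproduces the roughly 1260 rem figure for a one-year exposure:

/* Sketch: dose required for a given acute effect as exposure time grows. */
#include <stdio.h>
#include <math.h>

/* dose (rems) needed over 'weeks' for an effect that takes dose_1wk rems
   when absorbed within one week or less */
double dose_required(double dose_1wk, double weeks)
{
    if (weeks <= 1.0)
        return dose_1wk;               /* one week or less: no adjustment */
    return dose_1wk * pow(weeks, 0.26);
}

int main(void)
{
    printf("50%% mortality dose, 1 week : %.0f rems\n", dose_required(450.0, 1.0));
    printf("50%% mortality dose, 1 year : %.0f rems\n", dose_required(450.0, 52.0));
    return 0;
}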
Acute Whole Body Exposure Effects
Below 100 REMS
Above 1000 REMS
In the range 1000-5000 rems the onset time drops from 30 minutes to 5 minutes. Following an initial bout of severe nausea and weakness, a period of apparent well-being lasting a few hours to a few days may follow (called the "walking ghost" phase). This is followed by the terminal phase which lasts 2-10 days. In rapid succession prostration, diarrhea, anorexia, and fever follow. Death is certain, often preceded by delirium and coma. Therapy is only to relieve suffering.
Above 5000 rems metabolic disruption is severe enough to interfere with the nervous system. Immediate disorientation and coma will result, onset is within seconds to minutes. Convulsions occur which may be controlled with sedation. Victim may linger for up to 48 hours before dying.
The U.S. military assumes that 8000 rads of fast neutron radiation (from a neutron bomb) will immediately and permanently incapacitate a soldier.
It should be noted that people exposed to radiation doses in the 400-1000 rem range following the Chernobyl disaster had much higher rates of survival than indicated above. This was made possible by advances in bone marrow transfusions and intensive medical care, provided in part by Dr. Robert Gale. However two caveats apply: Such care is only available if the number of cases is relatively small, and the infrastructure for providing it is not disrupted. In the case of even a limited nuclear attack it would be impossible to provide more than basic first aid to most people and the fatality rates might actually be higher than given here. Many of the highly exposed Chernobyl survivors have since died from latent radiation effects.
Acute Localized Tissue Exposure
Localized acute exposure is important for two organs: the skin, and the thyroid gland.
The initial symptoms of beta burns are an itching or burning sensation during the first 24-48 hours. These symptoms are marked only if the exposure is intense, and do not occur reliably. Within 1-2 days all symptoms disappear, but after 2-3 weeks the burn symptoms appear. The first evidence is increased pigmentation, or possibly erythema (reddening). Epilation and skin lesions follow.
In mild to moderate cases damage is largely confined to the epidermis (outer skin layers). After forming a dry scab, the superficial lesions heal rapidly leaving a central depigmented area, surrounded by an irregular zone of increased pigmentation. Normal pigmentation returns over a few weeks.
In more serious cases deeper ulcerated lesions form. These lesions ooze before becoming covered with a hard dry scab. Healing occurs with routine first aid care. Normal pigmentation may take months to return.
Hair regrowth begins 9 weeks after exposure and is complete in 6 months.
The short half-life of I-131 (about 8 days) means that its initial radiation intensity is high, but that it disappears quickly. If uncontaminated fodder can be provided for a month or two, or if dry or canned milk can be consumed for the same period, there is little risk of exposure.
If I-131 contaminated food is consumed, about one-third of the ingested iodine is deposited in the thyroid gland which weighs some 20 g in adults, and 2 g in infants. This can result in very high dose rates to the gland, with negligible exposures to the rest of the body. Due to the smaller glands of infants and children, and their high dairy consumption, they are particularly vulnerable to thyroid injury. Some Marshallese children received thyroid doses as high as 1150 rems. Most of the children receiving doses over 500 rems developed thyroid abnormalities within 10 years, including hypothyroidism and malignancies.
I-131 exposure can be prevented by prompt
consumption of potassium iodide supplements. Large doses of potassium
iodide saturate the body with iodine and prevent any subsequent retention
of radioiodine that is consumed.

Fetal Injury
Chronic Radiation Exposure
The exposure time scaling law given above indicates that extended exposures can also produce the symptoms of acute radiation sickness, with a correspondingly slower onset. As an example, the most heavily contaminated location of the Rongelap atoll (160 km downwind of the March 1, 1954 15 Mt Castle Bravo test) received a total accumulated exposure of 3300 rads. Of this, 1100 rads was accumulated during the interval from 1 month to 1 year following the test. If the site had been occupied during this period, the effective exposure for radiation sickness effects would have been 1100/(48 weeks)^0.26 = 403 rads.
A megaton of fission yield produces enough Cs-137 to contaminate 100 km^2 with a radiation field of 200 rad/year. A megaton-range ground burst can contaminate an area of thousands of square kilometers with concentrations that would exceed occupational safety guidelines. 3,000 megatons of fission yield, if distributed globally by stratospheric fallout, would double the world's background radiation level from external exposure to this isotope alone.
It is possible to substantially reduce external exposure in contaminated areas by remaining indoors as much as possible. Exposure can be reduced by a factor of 2-3 for a frame house, or 10-100 for a multi-story building, and adding additional shielding to areas where much time is spent (like the bedroom) can increase these factors substantially. Since the half-life of Cs-137 is long, these would be permanent lifestyle adjustments. Such measures have been necessary (especially for children) in areas of Belarus that were heavily contaminated by Chernobyl.
Radioisotopes may be taken up into plants through the root system, or they may be contaminated by fallout descending on the leaves. Gross contamination of food plants or fodder from the fallout plume of a ground burst is an obvious hazard, but the gradual descent of worldwide fallout is also a problem.
The primary risks for internal exposure are cesium-137 and strontium-90. Strontium-89, transuranic alpha emitters, and carbon-14 are also significant sources of concern.
Only a few curies of radioisotopes per km^2 are sufficient to render land unsuitable for cultivation under current radiation safety standards. A megaton of fission yield can thus make some 200,000 km^2 useless for food production for decades. Depression of leukocyte levels has been observed in people in Belarus living in areas that were contaminated with only 0.2 curies/km^2.
Strontium 90 and 89
Strontium is chemically similar to calcium and is deposited in bone when ingested. Sr-90 (28.1 yr half-life) can thus cause long term damage, while Sr-89 (52 day half-life) can cause significant short term injury. Safety exposure standards impose a Sr-90 body burden limit of 2 microcuries (14 nanograms) for occupational exposure, 0.2 microcuries for individual members of the general population, and 0.067 microCi averaged over the whole population. It is estimated that 10 microCi per person would cause a substantial rise in the incidence of bone cancer. The explosion of several thousands of fission megatons in the atmosphere could raise the average body burden of the entire human race to above the occupational exposure limit for Sr-90 for a couple of generations. Contamination of 2 curies of Sr-90 per km^2 is the U.S. limit for food cultivation.
Alpha emitting heavy elements can be serious health risks also. The isotopes of primary concern here are those present in substantial quantities in nuclear weapons: short lived uranium isotopes (U-232 and U-233) and transuranic elements (primarily Pu-239, Pu-240, and Americium-241). These elements are hazardous if ingested due to radiotoxicity from the highly damaging alpha particles. The quantities of these isotopes present after a nuclear explosion are negligible compared to the amount of fission product radioisotopes. They represent a hazard when nuclear weapons are involved in "broken arrow" incidents, that is, accidents where the fissile isotopes inside are released. The exposure areas are of course small, compared to the areas threatened by fallout from a nuclear detonation. A typical nuclear weapon will contain some 300-600 curies of alpha emitter (assuming 5 kg plutonium). The isotope breakdown is approximately: 300 curies Pu-239, 60 curies Pu-240, and up to 250 curies of Am-241.
If small particles of alpha emitters are inhaled, they can take up permanent residence in the lung and form a serious source of radiation exposure to the lung tissue. A microcurie of alpha emitter deposited in the lungs produces an exposure of 3700 rems/yr to lung tissue, an extremely serious cancer risk.
Uranium and the transuranic elements are all bone-seekers (with the exception of neptunium). If absorbed, they are deposited in the bone and present a serious exposure risk to bone tissue and marrow. Plutonium has a biological half-life of 80-100 years when deposited in bone; it is also concentrated in the liver with a biological half-life of 40 years. The maximum permissible occupational body burden for plutonium-239 is 0.6 micrograms (.0375 microcuries) and 0.26 micrograms for lung burden (0.016 microCi).
Carbon-14 is a weak beta particle emitter,
with a low level of activity due to its long half-life. It presents a
unique hazard however since, unlike other isotopes, it is incorporated
directly into genetic material as a permanent part throughout the body.
This means that it presents a hazard out of proportion to the received
radiation dose as normally calculated.

Cancer
The current state-of-the-art in low level risk estimation is the 1990 report issued by the National Academy of Sciences Committee on Biological Effects of Ionizing Radiation (BEIR) entitled _Health Effects of Exposure to Low Levels of Ionizing Radiation_, also known as BEIR V.
As a general rule of thumb, it appears that cancer risk is more or less proportional to total radiation exposure, regardless of the quantity, rate or duration. 500 rems received over a decade is thus as serious a risk as 500 rems received all at once, and 50 rems is one-tenth as bad as 500. There is no evidence of a threshold effect or "safe dose". Safety standards are established primarily to keep the increased incidence of cancer below detectable levels.
Significant deviations from the above rule of proportionality for total exposure do occur. In particular, low doses (for which the risk is small anyway) received over an extended period of time are significantly less carcinogenic (by about a factor of 2) than the same dose received all at once.
Cancer risk from radiation exposure can be expressed as the increase in the lifetime probability of contracting fatal cancer per unit of radiation. The current estimate of overall risk is about a 0.8% chance of cancer per 10 rems for both men and women, averaged over the age distribution of the U.S. population. Thus a 1000 rem lifetime whole body radiation exposure would bring about an 80% chance of contracting fatal cancer, in addition to the normal incidence of cancer (about 20%). The risk for children appears to be about twice as great (due at least partly to the fact that they will live longer after exposure, and thus have greater opportunity to contract cancer).
There are also risk coefficients for specific tissue exposures. These are (approximately):

Female Breast   1.0%/100 rems
Bone Marrow     0.2%/100 rems (0.4% for children)
Bone Tissue     0.05%/100 rems
Lung            0.2%/100 rems
Two factors act to limit the effective radiation exposure for genetic effects, one for acute exposures, the other for chronic exposures. High acute exposures to the reproductive organs can cause permanent sterility, which prevents transmission of genetic effects. The cumulative effect of chronic exposure is limited by the fact that only exposures prior to reproduction count. Since most reproduction occurs before the age of 30, exposures after that age have little effect on the population.
It is estimated that the dose to reproductive tissue required to double the natural incidence of genetic disorders is 100-200 rems. The initial rate of observable disorders (the first generation) is only about 1/3 of the eventual rate once genetic equilibrium is established. Of course increases in the rate of genetic disorders (especially in a large population) is a _permanent_ alteration of the human species. | http://www.cartage.org.lb/en/themes/Sciences/Chemistry/NuclearChemistry/NuclearWeapons/FirstChainReaction/EffectsNucl/Mechanisms.htm | 13 |
138 | In many ways, computers count just like humans. So, before we start learning how computers count, let's take a deeper look at how we count.
How many fingers do you have? No, it's not a trick question. Humans (normally) have ten fingers. Why is that significant? Look at our numbering system. At what point does a one-digit number become a two-digit number? That's right, at ten. Humans count and do math using a base ten numbering system. Base ten means that we group everything in tens. Let's say we're counting sheep. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Why did we all of a sudden now have two digits, and re-use the 1 ? That's because we're grouping our numbers by ten, and we have 1 group of ten sheep. Okay, let's go to the next number 11. That means we have 1 group of ten sheep, and 1 sheep left ungrouped. So we continue - 12, 13, 14, 15, 16, 17, 18, 19, 20. Now we have 2 groups of ten. 21 - 2 groups of ten, and 1 sheep ungrouped. 22 - 2 groups of ten, and 2 sheep ungrouped. So, let's say we keep counting, and get to 97, 98, 99, and 100. Look, it happened again! What happens at 100? We now have ten groups of ten. At 101 we have ten groups of ten, and 1 ungrouped sheep. So we can look at any number like this. If we counted 60879 sheep, that would mean that we had 6 groups of ten groups of ten groups of ten groups of ten, 0 groups of ten groups of ten groups of ten, 8 groups of ten groups of ten, 7 groups of ten, and 9 sheep left ungrouped.
So, is there anything significant about grouping things by ten? No! It's just that grouping by ten is how we've always done it, because we have ten fingers. We could have grouped at nine or at eleven (in which case we would have had to make up a new symbol). The only difference between the different groupings of numbers is that we have to re-learn our multiplication, addition, subtraction, and division tables for each grouping. The rules haven't changed, just the way we represent them. Also, some of our tricks that we learned don't always apply, either. For example, let's say we grouped by nine instead of ten. Moving the decimal point one digit to the right no longer multiplies by ten, it now multiplies by nine. In base nine, 500 is only nine times as large as 50.
The question is, how many fingers does the computer have to count with? The computer only has two fingers. So that means all of the groups are groups of two. So, let's count in binary - 0 (zero), 1 (one), 10 (two - one group of two), 11 (three - one group of two and one left over), 100 (four - two groups of two), 101 (five - two groups of two and one left over), 110 (six - two groups of two and one group of two), and so on. In base two, moving the decimal one digit to the right multiplies by two, and moving it to the left divides by two. Base two is also referred to as binary.
The nice thing about base two is that the basic math tables are very short. In base ten, the multiplication tables are ten columns wide and ten rows tall. In base two, it is very simple:

0 + 0 = 0      0 * 0 = 0
0 + 1 = 1      0 * 1 = 0
1 + 0 = 1      1 * 0 = 0
1 + 1 = 10     1 * 1 = 1
So, let's add the numbers 10010101 with 1100101:

10010101 + 1100101 = 11111010
Now, let's multiply them:

10010101 * 1100101 = 11101011001001
Let's learn how to convert numbers from binary (base two) to decimal (base ten). This is actually a rather simple process. If you remember, each digit stands for some grouping of two. So, we just need to add up what each digit represents, and we will have a decimal number. Take the binary number 10010101. To find out what it is in decimal, we take it apart like this:
1 0 0 1 0 1 0 1
| | | | | | | |
| | | | | | | Individual units (2^0)
| | | | | | 0 groups of 2 (2^1)
| | | | | 1 group of 4 (2^2)
| | | | 0 groups of 8 (2^3)
| | | 1 group of 16 (2^4)
| | 0 groups of 32 (2^5)
| 0 groups of 64 (2^6)
1 group of 128 (2^7)
and then we add all of the pieces together, like this:
1*128 + 0*64 + 0*32 + 1*16 + 0*8 + 1*4 + 0*2 + 1*1 =
128 + 16 + 4 + 1 = 149
So 10010101 in binary is 149 in decimal. Let's look at 1100101. It can be written as
1*64 + 1*32 + 0 * 16 + 0*8 + 1*4 + 0*2 + 1*1 =
64 + 32 + 4 + 1 = 101
So we see that 1100101 in binary is 101 in decimal. Let's look at one more number, 11101011001001. You can convert it to decimal by doing

1*8192 + 1*4096 + 1*2048 + 0*1024 + 1*512 + 0*256 + 1*128 + 1*64 + 0*32 + 0*16 + 1*8 + 0*4 + 0*2 + 1*1 =
8192 + 4096 + 2048 + 512 + 128 + 64 + 8 + 1 =
15049
Now, if you've been paying attention, you have noticed that the numbers we just converted are the same ones we used to multiply with earlier. So, let's check our results: 101 * 149 = 15049. It worked!
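Here is a short C sketch of the same process - walking the binary digits and accumulating the groups of two (the function name is mine):

/* Sketch: convert a string of binary digits to a decimal value. */
#include <stdio.h>

unsigned int binary_to_decimal(const char *bits)
{
    unsigned int value = 0;
    for (const char *p = bits; *p != '\0'; p++) {
        value = value * 2;         /* every existing group doubles        */
        if (*p == '1')
            value = value + 1;     /* plus one ungrouped unit, if present */
    }
    return value;
}

int main(void)
{
    printf("10010101       = %u\n", binary_to_decimal("10010101"));        /* 149   */
    printf("1100101        = %u\n", binary_to_decimal("1100101"));         /* 101   */
    printf("11101011001001 = %u\n", binary_to_decimal("11101011001001"));  /* 15049 */
    return 0;
}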
Now let's look at going from decimal back to binary. In order to do the conversion, you have to divide the number into groups of two. So, let's say you had the number 17. If you divide it by two, you get 8 with 1 left over. So that means there are 8 groups of two, and 1 ungrouped. That means that the rightmost digit will be 1. Now, we have the rightmost digit figured out, and 8 groups of 2 left over. Now, let's see how many groups of two groups of two we have, by dividing 8 by 2. We get 4, with nothing left over. That means that all groups of two can be further divided into more groups of two. So, we have 0 groups of only two. So the next digit to the left is 0. So, we divide 4 by 2 and get two, with 0 left over, so the next digit is 0. Then, we divide 2 by 2 and get 1, with 0 left over. So the next digit is 0. Finally, we divide 1 by 2 and get 0 with 1 left over, so the next digit to the left is 1. Now, there's nothing left, so we're done. So, the number we wound up with is 10001.
Previously, we converted to binary 11101011001001 to decimal 15049. Let's do the reverse to make sure that we did it right:
15049 / 2 = 7524 Remaining 1
7524 / 2 = 3762 Remaining 0
3762 / 2 = 1881 Remaining 0
1881 / 2 = 940 Remaining 1
940 / 2 = 470 Remaining 0
470 / 2 = 235 Remaining 0
235 / 2 = 117 Remaining 1
117 / 2 = 58 Remaining 1
58 / 2 = 29 Remaining 0
29 / 2 = 14 Remaining 1
14 / 2 = 7 Remaining 0
7 / 2 = 3 Remaining 1
3 / 2 = 1 Remaining 1
1 / 2 = 0 Remaining 1
Then, we put the remaining numbers back together, and we have the original number! Remember the first division remainder goes to the far right, so from the bottom up you have 11101011001001.
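The same procedure in a rough C sketch - divide by two, collect the remainders, and print them in reverse order (the function name is mine):

/* Sketch: print the binary representation of a number by repeated division. */
#include <stdio.h>

void print_binary(unsigned int value)
{
    char digits[33];
    int  count = 0;

    if (value == 0) {                          /* zero is just "0"            */
        printf("0\n");
        return;
    }
    while (value > 0) {
        digits[count++] = '0' + (value % 2);   /* remainder is the next digit */
        value = value / 2;                     /* carry on with the quotient  */
    }
    while (count > 0)                          /* most significant digit out first */
        putchar(digits[--count]);
    putchar('\n');
}

int main(void)
{
    print_binary(17);       /* 10001          */
    print_binary(15049);    /* 11101011001001 */
    return 0;
}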
Each digit in a binary number is called a bit, which stands for binary digit. Remember, computers divide up their memory into storage locations called bytes. Each storage location on an x86 processor (and most others) is 8 bits long. Earlier we said that a byte can hold any number between 0 and 255. The reason for this is that the largest number you can fit into 8 bits is 255. You can see this for yourself if you convert binary 11111111 into decimal:
(1 * 2^7) + (1 * 2^6) + (1 * 2^5) + (1 * 2^4) + (1 * 2^3)
+ (1 * 2^2) + (1 * 2^1) + (1 * 2^0) =
128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255
The largest number that you can hold in 16 bits is 65535. The largest number you can hold in 32 bits is 4294967295 (4 billion). The largest number you can hold in 64 bits is 18,446,744,073,709,551,615. The largest number you can hold in 128 bits is 340,282,366,920,938,463,463,374,607,431,768,211,456. Anyway, you see the picture. For x86 processors, most of the time you will deal with 4-byte numbers (32 bits), because that's the size of the registers.
Now we've seen that the computer stores everything as sequences of 1's and 0's. Let's look at some other uses of this. What if, instead of looking at a sequence of bits as a number, we instead looked at it as a set of switches. For example, let's say there are four switches that control lighting in the house. We have a switch for outside lights, a switch for the hallway lights, a switch for the living room lights, and a switch for the bedroom lights. We could make a little table showing which of these were on and off, like so:
Outside Hallway Living Room Bedroom
On Off On On
It's obvious from looking at this that all of the lights are on except the hallway ones. Now, instead of using the words "On" and "Off", let's use the numbers 1 and 0. 1 will represent on, and 0 will represent off. So, we could represent the same information as
Outside Hallway Living Room Bedroom
1 0 1 1
Now, instead of having labels on the light switches, let's say we just memorized which position went with which switch. Then, the same information could be represented as
1 0 1 1
This is just one of many ways you can use the computer's storage locations to represent more than just numbers. The computer's memory just sees numbers, but programmers can use these numbers to represent anything their imaginations can come up with. They just sometimes have to be creative when figuring out the best representation.
Not only can you do regular arithmetic with binary numbers, they also have a few operations of their own, called binary or logical operations. The standard binary operations are AND, OR, NOT, and XOR.
Before we look at examples, I'll describe them for you. AND takes two bits and returns one bit. AND will return a 1 only if both bits are 1, and a 0 otherwise. For example, 1 AND 1 is 1, but 1 AND 0 is 0, 0 AND 1 is 0, and 0 AND 0 is 0.
OR takes two bits and returns one bit. It will return 1 if either of the original bits is 1. For example, 1 OR 1 is 1, 1 OR 0 is one, 0 OR 1 is 1, but 0 OR 0 is 0.
NOT only takes one bit and returns its opposite. NOT 1 is 0 and NOT 0 is 1.
Finally, XOR is like OR, except it returns 0 if both bits are 1.
Computers can do these operations on whole registers at a time. For example, if a register has 10100010101010010101101100101010 and another one has 10001000010101010101010101111010, you can run any of these operations on the whole registers. For example, if we were to AND them, the computer will run from the first bit to the 32nd and run the AND operation on that bit in both registers. In this case:

    10100010101010010101101100101010
AND 10001000010101010101010101111010
    --------------------------------
    10000000000000010101000100101010
You'll see that the resulting set of bits only has a one where both numbers had a one, and in every other position it has a zero. Let's look at what an OR looks like:

    10100010101010010101101100101010
 OR 10001000010101010101010101111010
    --------------------------------
    10101010111111010101111101111010
In this case, the resulting number has a 1 where either number has a 1 in the given position. Let's look at the NOT operation:

NOT 10100010101010010101101100101010
    --------------------------------
    01011101010101101010010011010101
This just reverses each digit. Finally, we have XOR, which is like an OR, except if both digits are 1, it returns 0. XORing our two example numbers gives:

    10100010101010010101101100101010
XOR 10001000010101010101010101111010
    --------------------------------
    00101010111111000000111001010000
This is the same two numbers used in the OR operation, so you can compare how they work. Also, if you XOR a number with itself, you will always get 0, like this:

    10100010101010010101101100101010
XOR 10100010101010010101101100101010
    --------------------------------
    00000000000000000000000000000000
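For completeness, here is a C sketch that runs all four operations on those same two bit patterns (the hexadecimal constants are just those binary numbers written in hex, which is covered later in this chapter; the helper name is mine):

/* Sketch: AND, OR, NOT and XOR applied to whole 32-bit values. */
#include <stdio.h>
#include <stdint.h>

void show(const char *label, uint32_t value)
{
    printf("%-6s ", label);
    for (int bit = 31; bit >= 0; bit--)          /* print the bits left to right */
        putchar(((value >> bit) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    uint32_t a = 0xA2A95B2A;   /* 10100010101010010101101100101010 */
    uint32_t b = 0x8855557A;   /* 10001000010101010101010101111010 */

    show("a", a);
    show("b", b);
    show("AND", a & b);
    show("OR",  a | b);
    show("NOT a", ~a);
    show("XOR", a ^ b);
    show("a^a", a ^ a);        /* XORing a value with itself gives zero */
    return 0;
}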
These operations are useful for two reasons:
The computer can do them extremely fast
You can use them to compare many truth values at the same time
You may not have known that different instructions execute at different speeds. It's true, they do. And these operations are the fastest on most processors. For example, you saw that XORing a number with itself produces 0. Well, the XOR operation is faster than the loading operation, so many programmers use it to load a register with zero. For example, the code
movl $0, %eax
is often replaced by
xorl %eax, %eax
We'll discuss speed more in Chapter 12, but I want you to see how programmers often do tricky things, especially with these binary operators, to make things fast. Now let's look at how we can use these operators to manipulate true/false values. Earlier we discussed how binary numbers can be used to represent any number of things. Let's use binary numbers to represent what things my Dad and I like. First, let's look at the things I like:
Heavy Metal Music: yes
Wearing Dressy Clothes: no
Now, let's look at what my Dad likes:
Heavy Metal Music: no
Wearing Dressy Clothes: yes
Now, let's use a 1 to say yes we like something, and a 0 to say no we don't. Now we have:
Heavy Metal Music: 1
Wearing Dressy Clothes: 0
Heavy Metal Music: 0
Wearing Dressy Clothes: 1
Now, if we just memorize which position each of these are in, we have
Now, let's say we want to get a list of things both my Dad and I like. You would use the AND operation. So
Which translates to
Remember, the computer has no idea what the ones and zeroes represent. That's your job and your program's job. If you wrote a program around this representation your program would at some point examine each bit and have code to tell the user what it's for (if you asked a computer what two people agreed on and it answered 1001, it wouldn't be very useful). Anyway, let's say we want to know the things that we disagree on. For that we would use XOR, because it will return 1 only if one or the other is 1, but not both. So
And I'll let you translate that back out.
The previous operations: AND, OR, NOT, and XOR are called boolean operators because they were first studied by George Boole. So, if someone mentions boolean operators or boolean algebra, you now know what they are talking about.
In addition to the boolean operations, there are also two binary operators that aren't boolean, shift and rotate. Shifts and rotates each do what their name implies, and can do so to the right or the left. A left shift moves each digit of a binary number one space to the left, puts a zero in the ones spot, and chops off the furthest digit to the left. A left rotate does the same thing, but takes the furthest digit to the left and puts it in the ones spot. For example,
Shift left 10010111 = 00101110
Rotate left 10010111 = 00101111
Notice that if you rotate a number for every digit it has (i.e. - rotating a 32-bit number 32 times), you wind up with the same number you started with. However, if you shift a number for every digit you have, you wind up with 0. So, what are these shifts useful for? Well, if you have binary numbers representing things, you use shifts to peek at each individual value. Let's say, for instance, that we had my Dad's likes stored in a register (32 bits). It would look like this:
Now, as we said previously, this doesn't work as program output. So, in order to do output, we would need to do shifting and masking. Masking is the process of eliminating everything you don't want. In this case, for every value we are looking for, we will shift the number so that value is in the ones place, and then mask that digit so that it is all we see. Masking is accomplished by doing an AND with a number that has the bits we are interested in set to 1. For example, let's say we wanted to print out whether my Dad likes dressy clothes or not. That data is the second value from the right. So, we have to shift the number right 1 digit so it looks like this:
and then, we just want to look at that digit, so we mask it by ANDing it with 00000000000000000000000000000001.
This will make the value of the register 1 if my Dad likes dressy clothes, and 0 if he doesn't. Then we can do a comparison to 1 and print the results. The code would look like this:
#NOTE - assume that the register %ebx holds
# my Dad's preferences
movl %ebx, %eax #This copies the information into %eax so
#we don't lose the original data
shrl $1, %eax #This is the shift operator. It stands
#for Shift Right Long. This first number
#is the number of positions to shift,
#and the second is the register to shift
#This does the masking
andl $0b00000000000000000000000000000001, %eax
#Check to see if the result is 1 or 0
cmpl $0b00000000000000000000000000000001, %eax
je yes_he_likes_dressy_clothes
jmp no_he_doesnt_like_dressy_clothes
And then we would have two labels which printed something about whether or not he likes dressy clothes and then exits. The 0b notation means that what follows is a binary number. In this case it wasn't needed, because 1 is the same in any numbering system, but I put it there for clarity. We also didn't need the 31 zeroes, but I put them in to make a point that the number you are using is 32 bits.
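For comparison, here is a rough C sketch of the same shift-and-mask test. The single-bit layout (dressy clothes in the second position from the right) follows the example above, but the value 0x2 is only an assumption for illustration:

/* Sketch: extract one true/false flag by shifting and masking. */
#include <stdio.h>

int main(void)
{
    unsigned int dads_prefs = 0x2;   /* assumed example value: bit 1 set */

    /* shift the wanted bit into the ones place, then mask everything else */
    unsigned int likes_dressy = (dads_prefs >> 1) & 0x1;

    if (likes_dressy == 1)
        printf("Dad likes dressy clothes\n");
    else
        printf("Dad does not like dressy clothes\n");
    return 0;
}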
When a number represents a set of options for a function or system call, the individual true/false elements are called flags. Many system calls have numerous options that are all set in the same register using a mechanism like we've described. The open system call, for example, has as its second parameter a list of flags to tell the operating system how to open the file. Some of the flags include:
O_WRONLY - This flag is 0b00000000000000000000000000000001 in binary, or 01 in octal (or any number system for that matter). This says to open the file in write-only mode.
O_RDWR - This flag is 0b00000000000000000000000000000010 in binary, or 02 in octal. This says to open the file for both reading and writing.
O_CREAT - This flag is 0b00000000000000000000000001000000 in binary, or 0100 in octal. It means to create the file if it doesn't already exist.
O_TRUNC - This flag is 0b00000000000000000000001000000000 in binary, or 01000 in octal. It means to erase the contents of the file if the file already exists.
O_APPEND - This flag is 0b00000000000000000000010000000000 in binary, or 02000 in octal. It means to start writing at the end of the file rather than at the beginning.
To use these flags, you simply OR them together in the combination that you want. For example, to open a file in write-only mode, and have it create the file if it doesn't exist, I would use O_WRONLY (01) and O_CREAT (0100). OR'd together, I would have 0101.
Note that if you don't set either O_WRONLY or O_RDWR, then the file is automatically opened in read-only mode (O_RDONLY, except that it isn't really a flag since it's zero).
Many functions and system calls use flags for options, as it allows a single word to hold up to 32 possible options if each option is represented by a single bit.
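In C the same flag arithmetic looks like the sketch below; open() really does take flags this way, though the file name here is only an illustration:

/* Sketch: ORing open() flags together, mirroring the 01 | 0100 example. */
#include <stdio.h>
#include <fcntl.h>      /* O_WRONLY, O_CREAT, open() */
#include <unistd.h>     /* close()                   */

int main(void)
{
    /* O_WRONLY (01) OR O_CREAT (0100) = 0101 */
    int fd = open("example.txt", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("flags used: %o (octal)\n", O_WRONLY | O_CREAT);
    close(fd);
    return 0;
}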
We've seen how bits on a register can be used to give the answers of yes/no and true/false statements. On your computer, there is a register called the program status register. This register holds a lot of information about what happens in a computation. For example, have you ever wondered what would happen if you added two numbers and the result was larger than would fit in a register? The program status register has a flag called the carry flag. You can test it to see if the last computation overflowed the register. There are flags for a number of different statuses. In fact, when you do a compare (cmpl) instruction, the result is stored in this register. The conditional jump instructions (jge, jne, etc) use these results to tell whether or not they should jump. jmp, the unconditional jump, doesn't care what is in the status register, since it is unconditional.
Let's say you needed to store a number larger than 32 bits. So, let's say the number is 2 registers wide, or 64 bits. How could you handle this? If you wanted to add two 64 bit numbers, you would add the least significant registers first. Then, if you detected a carry, you could add 1 to the most significant register. In fact, this is probably the way you learned to do decimal addition. If the result in one column is more than 9, you simply carried the number to the next most significant column. If you added 65 and 37, first you add 5 and 7 to get 12. You keep the 2 in the right column, and carry the one to the next column. There you add 6, 3, and the 1 you carried. This results in 10. So, you keep the zero in that column and carry the one to the next most significant column, which is empty, so you just put the one there. Luckily, 32 bits is usually big enough to hold the numbers we use regularly.
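A small C sketch of the same carrying idea, using two 32-bit halves (the example values are arbitrary):

/* Sketch: add two 64-bit numbers stored as 32-bit halves, carrying by hand. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a_lo = 0xFFFFFFFF, a_hi = 0x00000001;  /* a = 0x1FFFFFFFF */
    uint32_t b_lo = 0x00000002, b_hi = 0x00000003;  /* b = 0x300000002 */

    uint32_t sum_lo = a_lo + b_lo;                  /* least significant half first */
    uint32_t carry  = (sum_lo < a_lo) ? 1 : 0;      /* wrap-around means a carry    */
    uint32_t sum_hi = a_hi + b_hi + carry;          /* add the carry into the top   */

    printf("result = 0x%08X%08X\n", sum_hi, sum_lo); /* 0x0000000500000001 */
    return 0;
}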
Additional program status register flags are examined in Appendix B.
What we have studied so far only applies to positive integers. However, real-world numbers are not always positive integers. Negative numbers and numbers with decimals are also used.
So far, the only numbers we've dealt with are integers - numbers with no decimal point. Computers have a general problem with numbers with decimal points, because computers can only store fixed-size, finite values. Decimal numbers can be any length, including infinite length (think of a repeating decimal, like the result of 1 / 3).
The way a computer handles decimals is by storing them at a fixed precision (number of significant bits). A computer stores decimal numbers in two parts - the exponent and the mantissa. The mantissa contains the actual digits that will be used, and the exponent is what magnitude the number is. For example, 12345.2 can be represented as 1.23452 * 10^4. The mantissa is 1.23452 and the exponent is 4 with a base of 10. Computers, however, use a base of 2. All numbers are stored as X.XXXXX * 2^XXXX. The number 1, for example, is stored as 1.00000 * 2^0. Separating the mantissa and the exponent into two different values is called a floating point representation, because the position of the significant digits with respect to the decimal point can vary based on the exponent.
Now, the mantissa and the exponent are only so long, which leads to some interesting problems. For example, when a computer stores an integer, if you add 1 to it, the resulting number is one larger. This does not necessarily happen with floating point numbers. If the number is sufficiently big, adding 1 to it might not even register in the mantissa (remember, both parts are only so long). This affects several things, especially order of operations. If you add 1.0 to a given floating point number, it might not even affect the number if it is large enough. For example, on x86 platforms, a four-byte floating-point number, although it can represent very large numbers, cannot have 1.0 added to it past 16777216.0, because it is no longer significant. The number no longer changes when 1.0 is added to it. So, if there is a multiplication followed by an addition it may give a different result than if the addition is performed first.
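You can see this for yourself with a short C sketch:

/* Sketch: a 4-byte float stops responding to +1.0 once the mantissa is full. */
#include <stdio.h>

int main(void)
{
    float big = 16777216.0f;            /* 2^24, the edge of float precision */
    if (big + 1.0f == big)
        printf("adding 1.0 to %.1f changes nothing\n", big);

    float smaller = 16777215.0f;
    if (smaller + 1.0f != smaller)
        printf("adding 1.0 to %.1f still registers\n", smaller);
    return 0;
}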
You should note that it takes most computers a lot longer to do floating-point arithmetic than it does integer arithmetic. So, for programs that really need speed, integers are mostly used.
How would you think that negative numbers on a computer might be represented? One thought might be to use the first digit of a number as the sign, so 00000000000000000000000000000001 would represent the number 1, and 10000000000000000000000000000001 would represent -1. This makes a lot of sense, and in fact some old processors work this way. However, it has some problems. First of all, it takes a lot more circuitry to add and subtract signed numbers represented this way. Even more problematic, this representation has a problem with the number 0. In this system, you could have both a negative and a positive 0. This leads to a lot of questions, like "should negative zero be equal to positive zero?", and "What should the sign of zero be in various circumstances?".
These problems were overcome by using a representation of negative numbers called two's complement representation. To get the negative representation of a number in two's complement form, you must perform the following steps:
Perform a NOT operation on the number
Add one to the resulting number
So, to get the negative of 00000000000000000000000000000001, you would first do a NOT operation, which gives 11111111111111111111111111111110, and then add one, giving 11111111111111111111111111111111. To get negative two, first take 00000000000000000000000000000010. The NOT of that number is 11111111111111111111111111111101. Adding one gives 11111111111111111111111111111110. With this representation, you can add numbers just as if they were positive, and come out with the right answers. For example, if you add one plus negative one in binary, you will notice that all of the numbers flip to zero. Also, the first digit still carries the sign bit, making it simple to determine whether or not the number is positive or negative. Negative numbers will always have a 1 in the leftmost bit. This also changes which numbers are valid for a given number of bits. With signed numbers, the possible magnitude of the values is split to allow for both positive and negative numbers. For example, a byte can normally have values up to 255. A signed byte, however, can store values from -128 to 127.
One thing to note about the two's complement representation of signed numbers is that, unlike unsigned quantities, if you increase the number of bits, you can't just add zeroes to the left of the number. For example, let's say we are dealing with four-bit quantities and we had the number -3, 1101. If we were to extend this into an eight-bit register, we could not represent it as 00001101 as this would represent 13, not -3. When you increase the size of a signed quantity in two's complement representation, you have to perform sign extension. Sign extension means that you have to pad the left-hand side of the quantity with whatever digit is in the sign digit when you add bits. So, if we extend a negative number by 4 digits, we should fill the new digits with a 1. If we extend a positive number by 4 digits, we should fill the new digits with a 0. So, the extension of -3 from four to eight bits will yield 11111101.
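A short C sketch of both ideas - the NOT-and-add-one recipe and sign extension (C's fixed-width integer types perform the extension automatically when widening):

/* Sketch: two's complement negation and sign extension. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t one       = 1;
    uint32_t minus_one = ~one + 1;                   /* NOT, then add one      */
    printf("-1 as bits : 0x%08X\n", minus_one);      /* 0xFFFFFFFF             */

    int8_t  small = -3;                              /* 11111101 in 8 bits     */
    int32_t wide  = small;                           /* sign-extended widening */
    printf("-3 widened : 0x%08X\n", (uint32_t)wide); /* 0xFFFFFFFD             */
    return 0;
}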
The x86 processor has different forms of several instructions depending on whether they expect the quantities they operate on to be signed or unsigned. These are listed in Appendix B. For example, the x86 processor has both a sign-preserving shift-right, sarl, and a shift-right which does not preserve the sign bit, shrl.
The numbering systems discussed so far have been decimal and binary. However, two others are commonly used in computing - octal and hexadecimal. In fact, they are probably written more often than binary. Octal is a representation that only uses the numbers 0 through 7. So the octal number 10 is actually 8 in decimal because it is one group of eight. Octal 121 is decimal 81 (one group of 64 (8^2), two groups of 8, and one left over). What makes octal nice is that every 3 binary digits make one octal digit (there is no such grouping of binary digits into decimal). So 0 is 000, 1 is 001, 2 is 010, 3 is 011, 4 is 100, 5 is 101, 6 is 110, and 7 is 111.
Permissions in Linux are done using octal. This is because Linux permissions are based on the ability to read, write and execute. The first bit is the read permission, the second bit is the write permission, and the third bit is the execute permission. So, 0 (000) gives no permissions, 6(110) gives read and write permission, and 5 (101) gives read and execute permissions. These numbers are then used for the three different sets of permissions - the owner, the group, and everyone else. The number 0644 means read and write for the first permission set, and read-only for the second and third set. The first permission set is for the owner of the file. The third permission set is for the group owner of the file. The last permission set is for everyone else. So, 0751 means that the owner of the file can read, write, and execute the file, the group members can read and execute the file, and everyone else can only execute the file.
Anyway, as you can see, octal is used to group bits (binary digits) into threes. The way the assembler knows that a number is octal is because octal numbers are prefixed with a zero. For example 010 means 10 in octal, which is 8 in decimal. If you just write 10 that means 10 in decimal. The beginning zero is what differentiates the two. So, be careful not to put any leading zeroes in front of decimal numbers, or they will be interpreted as octal numbers!
Hexadecimal numbers (also called just "hex") use the numbers 0 through 15 for each digit. However, since 10 through 15 don't have their own numerals, hexadecimal uses the letters a through f to represent them. For example, the letter a represents 10, the letter b represents 11, and so on. 10 in hexadecimal is 16 in decimal. In octal, each digit represented three bits. In hexadecimal, each digit represents four bits. Every two digits is a full byte, and eight digits is a 32-bit word. So you see, it is considerably easier to write a hexadecimal number than it is to write a binary number, because it's only a quarter as many digits. The most important number to remember in hexadecimal is f, which means that all bits are set. So, if I want to set all of the bits of a register to 1, I can just do

movl $0xFFFFFFFF, %eax
Which is considerably easier and less error-prone than writing
movl $0b11111111111111111111111111111111, %eax
Note also that hexadecimal numbers are prefixed with 0x. So, when we do

int $0x80
We are calling interrupt number 128 (8 groups of 16), or interrupt number 0b00000000000000000000000010000000.
Hexadecimal and octal numbers take some getting used to, but they are heavily used in computer programming. It might be worthwhile to make up some numbers in hex and try to convert them back and forth to binary, decimal, and octal.
One thing that confuses many people when dealing with bits and bytes on a low level is that, when bytes are written from registers to memory, their bytes are written out least-significant-portion-first. What most people expect is that if they have a word in a register, say 0x5d 23 ef ee (the spacing is so you can see where the bytes are), the bytes will be written to memory in that order. However, on x86 processors, the bytes are actually written in reverse order. In memory the bytes would be 0xee ef 23 5d on x86 processors. The bytes are written in reverse order from what they would appear conceptually, but the bits within the bytes are ordered normally.
Not all processors behave this way. The x86 processor is a little-endian processor, which means that it stores the "little end", or least-significant byte of its words first.
Other processors are big-endian processors, which means that they store the "big end", or most significant byte, of their words first, the way we would naturally read a number.
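A quick C sketch that stores that same word and inspects its bytes in memory:

/* Sketch: detect byte order by looking at the first byte of a stored word. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t word = 0x5d23efee;
    unsigned char *bytes = (unsigned char *)&word;

    printf("bytes in memory: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);

    if (bytes[0] == 0xee)
        printf("little-endian (this is what x86 does)\n");
    else
        printf("big-endian\n");
    return 0;
}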
This difference is not normally a problem (although it has sparked many technical controversies throughout the years). Because the bytes are reversed again (or not, if it is a big-endian processor) when being read back into a register, the programmer usually never notices what order the bytes are in. The byte-switching magic happens automatically behind the scenes during register-to-memory transfers. However, the byte order can cause problems in several instances:
If you try to read in several bytes at a time using movl but deal with them on a byte-by-byte basis using the least significant byte (i.e. - by using %al and/or shifting of the register), this will be in a different order than they appear in memory.
If you read or write files written for different architectures, you may have to account for whatever order they write their bytes in.
If you read or write to network sockets, you may have to account for a different byte order in the protocol.
As long as you are aware of the issue, it usually isn't a big deal. For more in-depth look at byte order issues, you should read DAV's Endian FAQ at http://www.rdrop.com/~cary/html/endian_faq.html, especially the article "On Holy Wars and a Plea for Peace" by Daniel Cohen.
Significance in this context is referring to which digit they represent. For example, in the number 294, the digit 2 is the most significant because it represents the hundreds place, 9 is the next most significant, and 4 is the least significant.
So far, we have been unable to display any number stored to the user, except by the extremely limited means of passing it through exit codes. In this section, we will discuss converting positive numbers into strings for display.
The function will be called integer2string, and it will take two parameters - an integer to convert and a string buffer filled with null characters (zeroes). The buffer will be assumed to be big enough to store the entire number as a string (at least 11 characters long, to include a trailing null character).
Remember that the way that we see numbers is in base 10. Therefore, to access the individual decimal digits of a number, we need to be dividing by 10 and displaying the remainder for each digit. Therefore, the process will look like this:
Divide the number by ten
The remainder is the current digit. Convert it to a character and store it.
We are finished if the quotient is zero.
Otherwise, take the quotient and the next location in the buffer and repeat the process.
The only problem is that since this process deals with the one's place first, it will leave the number backwards. Therefore, we will have to finish by reversing the characters. We will do this by storing the characters on the stack as we compute them. This way, as we pop them back off to fill in the buffer, it will be in the reverse order that we pushed them on.
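Before writing the assembly version, it may help to see a rough C sketch of the same procedure (the helper name is mine and is not part of the program we will build):

/* Sketch: convert an integer to a decimal string - divide by ten,
   save each remainder as a character, then reverse at the end. */
#include <stdio.h>

void integer_to_string(unsigned int value, char *buffer)
{
    char digits[11];                           /* enough for a 32-bit value  */
    int  count = 0;

    do {
        digits[count++] = '0' + (value % 10);  /* remainder -> character     */
        value = value / 10;                    /* keep going with quotient   */
    } while (value != 0);

    while (count > 0)                          /* pop back off in reverse    */
        *buffer++ = digits[--count];
    *buffer = '\0';                            /* trailing null character    */
}

int main(void)
{
    char buffer[12];
    integer_to_string(824, buffer);
    printf("%s\n", buffer);                    /* prints 824 */
    return 0;
}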
The code for the function should be put in a file called integer-to-string.s and should be entered as follows:
#PURPOSE: Convert an integer number to a decimal string
# for display
#INPUT: A buffer large enough to hold the largest
# possible number
# An integer to convert
#OUTPUT: The buffer will be overwritten with the
# decimal string
# %ecx will hold the count of characters processed
# %eax will hold the current value
# %edi will hold the base (10)
.equ ST_VALUE, 8
.equ ST_BUFFER, 12
.globl integer2string
.type integer2string, @function
integer2string:
#Normal function beginning
pushl %ebp
movl %esp, %ebp
#Current character count
movl $0, %ecx
#Move the value into position
movl ST_VALUE(%ebp), %eax
#When we divide by 10, the 10
#must be in a register or memory location
movl $10, %edi
conversion_loop:
#Division is actually performed on the
#combined %edx:%eax register, so first
#clear out %edx
movl $0, %edx
#Divide %edx:%eax (which are implied) by 10.
#Store the quotient in %eax and the remainder
#in %edx (both of which are implied).
divl %edi
#Quotient is in the right place. %edx has
#the remainder, which now needs to be converted
#into a number. So, %edx has a number that is
#0 through 9. You could also interpret this as
#an index on the ASCII table starting from the
#character '0'. The ascii code for '0' plus zero
#is still the ascii code for '0'. The ascii code
#for '0' plus 1 is the ascii code for the
#character '1'. Therefore, the following
#instruction will give us the character for the
#number stored in %edx
addl $'0', %edx
#Now we will take this value and push it on the
#stack. This way, when we are done, we can just
#pop off the characters one-by-one and they will
#be in the right order. Note that we are pushing
#the whole register, but we only need the byte
#in %dl (the last byte of the %edx register) for
#our purposes
pushl %edx
#Increment the digit count
incl %ecx
#Check to see if %eax is zero yet, go to next
#step if so.
cmpl $0, %eax
je end_conversion_loop
#%eax already has its new value.
jmp conversion_loop
end_conversion_loop:
#The string is now on the stack, if we pop it
#off a character at a time we can copy it into
#the buffer and be done.
#Get the pointer to the buffer in %edx
movl ST_BUFFER(%ebp), %edx
copy_reversing_loop:
#We pushed a whole register, but we only need
#the last byte. So we are going to pop off to
#the entire %eax register, but then only move the
#small part (%al) into the character string.
popl %eax
movb %al, (%edx)
#Decreasing %ecx so we know when we are finished
decl %ecx
#Increasing %edx so that it will be pointing to
#the next byte
incl %edx
#Check to see if we are finished
cmpl $0, %ecx
#If so, jump to the end of the function
je end_copy_reversing_loop
#Otherwise, repeat the loop
jmp copy_reversing_loop
end_copy_reversing_loop:
#Done copying. Now write a null byte and return
movb $0, (%edx)
movl %ebp, %esp
popl %ebp
ret
To show this used in a full program, use the following code, along with the count_chars and write_newline functions written about in previous chapters. The code should be in a file called conversion-program.s.
.include "linux.s"

.section .data

#This is where it will be stored
tmp_buffer:
.ascii "\0\0\0\0\0\0\0\0\0\0\0"

.section .text

.globl _start
_start:
movl %esp, %ebp

#Storage for the result
pushl $tmp_buffer
#Number to convert
pushl $824
call integer2string
addl $8, %esp
#Get the character count for our system call
pushl $tmp_buffer
call count_chars
addl $4, %esp
#The count goes in %edx for SYS_WRITE
movl %eax, %edx
#Make the system call
movl $SYS_WRITE, %eax
movl $STDOUT, %ebx
movl $tmp_buffer, %ecx
int $LINUX_SYSCALL
#Write a carriage return
pushl $STDOUT
call write_newline
addl $4, %esp
movl $SYS_EXIT, %eax
movl $0, %ebx
int $LINUX_SYSCALL
To build the program, issue the following commands:
as integer-to-string.s -o integer-to-number.o
as count-chars.s -o count-chars.o
as write-newline.s -o write-newline.o
as conversion-program.s -o conversion-program.o
ld integer-to-number.o count-chars.o write-newline.o conversion-program.o -o conversion-program
To run just type ./conversion-program and the output should say 824.
Convert the decimal number 5,294 to binary.
What number does 0x0234aeff represent? Specify in binary, octal, and decimal.
Add the binary numbers 10111001 and 101011.
Multiply the binary numbers 1100 and 1010110.
Convert the results of the previous two problems into decimal.
Describe how AND, OR, NOT, and XOR work.
What is masking for?
What number would you use for the flags of the open system call if you wanted to open the file for writing, and create the file if it doesn't exist?
How would you represent -55 in a thirty-two bit register?
Sign-extend the previous quantity into a 64-bit register.
Describe the difference between little-endian and big-endian storage of words in memory.
Go back to previous programs that returned numeric results through the exit status code, and rewrite them to print out the results instead using our integer to string conversion function.
Modify the integer2string code to return results in octal rather than decimal.
Modify the integer2string code so that the conversion base is a parameter rather than hardcoded.
Write a function called is_negative that takes a single integer as a parameter and returns 1 if the parameter is negative, and 0 if the parameter is positive.
Modify the integer2string code so that the conversion base can be greater than 10 (this requires you to use letters for numbers past 9).
Create a function that does the reverse of integer2string called number2integer which takes a character string and converts it to a register-sized integer. Test it by running that integer back through the integer2string function and displaying the results.
Write a program that stores likes and dislikes into a single machine word, and then compares two sets of likes and dislikes for commonalities.
Write a program that reads a string of characters from STDIN and converts them to a number. | http://programminggroundup.blogspot.com/2007/01/chapter-10-counting-like-computer.html | 13 |
62 | History of Virginia
The History of Virginia began with settlement by eastern woodland Native Americans of the Algonquian language family, including the Powhatan and Rappahannock. Permanent English settlement began in Virginia with Jamestown in 1607. The colony nearly failed until tobacco emerged as a profitable export, grown primarily by indentured servants. Then following 1662, the colony hardened slavery into a racial caste by partus law. By 1750, the primary cultivators of the cash crop were West African descendants in hereditary slavery who worked in the plantation agricultural system. Virginia and other southern colonies had become slave societies, with economies dependent on slavery and slaveholders forming the ruling class.
The Virginia Colony became the wealthiest and most populated British colony in North America, with General Assembly representatives from today’s West Virginia, Kentucky, Ohio and Illinois. The colony was dominated by elite planters who were also in control of the established Anglican Church. Baptist and Methodist preachers brought the Great Awakening, welcoming black members and leading to many evangelical and racially integrated churches. Virginia planters had a major role in gaining independence and the development of democratic-republican ideals of the United States. They were important in the Declaration of Independence, writing the Constitutional Convention (and preserving protection for the slave trade), and establishing the Bill of Rights. The state of Kentucky separated from Virginia in 1792. Four of the first five presidents were Virginians: George Washington, the “Father of his country”; and after 1800, “The Virginia Dynasty” of presidents for 24 years: Thomas Jefferson, James Madison, and James Monroe.
During the first half of the 19th century, tobacco declined as a commodity crop and planters adopted mixed farming, which required less labor. They sold surplus slaves "downriver" to the Deep South. The Constitutions of 1830 and 1850 expanded suffrage but did not equalize white male apportionment statewide. While population declined as people migrated west and south, Virginia was still the largest state joining the Confederate States of America in 1861. It became the major theater of war in the American Civil War. Unionists in western Virginia emerged as the separate state of West Virginia. Virginia was administered during early Reconstruction as Military District Number One. Virginia's economy was devastated in war and disrupted in Reconstruction. The first signs of recovery were seen in tobacco cultivation and the related cigarette industry. In 1883 conservative white Democrats regained power in the state government, ending Reconstruction and implementing Jim Crow laws. The 1902 Constitution limited the number of white voters below 19th-century levels and effectively disfranchised blacks until federal civil rights legislation of the mid-1960s.
From the 1920s to the 1960s, the state was dominated by the Byrd Organization, the “courthouse crowd,” with dominance by rural counties aligned in a Democratic party machine, but their hold was broken over their failed Massive Resistance to school integration. After World War II, the state's economy began to thrive, with a new industrial and urban base. Governor Mills Godwin, 1966–1970, 1974–1978, was the father of the statewide community college system. The first U.S. African-American governor was Virginia’s Douglas Wilder, 1990–1994. Since the late twentieth century, the contemporary economy has become more diversified in high-tech industries and defense-related businesses. Virginia’s changing demography makes for closely divided voting in national elections but it is still generally conservative in state politics.
For thousands of years, various cultures of indigenous peoples had already inhabited the portion of the New World later designated by the English monarch as "Virginia". Archaeological and historical research by anthropologist Helen Rountree and others has established 3,000 years of settlement in much of the Tidewater. Recent archaeological work at Pocahontas Island has revealed prehistoric habitation dating to about 6500 BCE.
At the end of the 16th century, Native Americans living in what is now Virginia were part of three major groups, based chiefly on language families. The largest group, known as the Algonquian, numbered over 10,000 and occupied most of the coastal area up to the fall line. Groups to the interior were the Iroquoian (numbering 2,500) and the Siouan. Tribes included the Algonquian Chesepian, Chickahominy, Doeg, Mattaponi, Nansemond, Pamunkey, Pohick, Powhatan, and Rappahannock; the Siouan Monacan and Saponi; and the Iroquoian-speaking Cherokee, Meherrin, Nottoway, and Tuscarora.
When the first English settlers arrived at Jamestown in 1607, Algonquian tribes controlled most of Virginia east of the fall line. Nearly all were united in what has been historically called the Powhatan Confederacy. Researcher Rountree has noted that empire more accurately describes their political structure. In the late 16th and early 17th centuries, a Chief named Wahunsunacock created this powerful empire by conquering or affiliating with approximately 30 tribes whose territories covered much of eastern Virginia. Known as the Powhatan, or paramount chief, he called this area Tenakomakah ("densely inhabited Land"). The empire was advantageous to some tribes, who were periodically threatened by other Native Americans, such as the Monacan.
The Native Americans had a different culture than the English. Despite some successful interaction, issues of ownership and control of land and other resources, and trust between the peoples, became areas of conflict. Virginia has drought conditions an average of every three years. The colonists did not understand that the natives were ill-prepared to feed them during hard times. In the years after 1612, the colonists cleared land to farm export tobacco, their crucial cash crop. As tobacco exhausted the soil, the settlers continually needed to clear more land for replacement. This reduced the wooded land which Native Americans depended on for hunting to supplement their food crops. As more colonists arrived, they wanted more land.
The tribes tried to fight the encroachment by the colonists. Major conflicts took place in the Indian massacre of 1622 and the Second Anglo-Powhatan war, both under the leadership of the late Chief Powhatan's younger brother, Chief Opechancanough. By the mid-17th century, the Powhatan and allied tribes were in serious decline in population, due in large part to epidemics of newly introduced infectious diseases, such as smallpox and measles, to which they had no natural immunity. The European colonists had expanded territory so that they controlled virtually all the land east of the fall line on the James River. Fifty years earlier, this territory had been the empire of the mighty Powhatan Confederacy.
Surviving members of many tribes assimilated into the general population of the colony. Some retained small communities with more traditional identity and heritage. In the 21st century, the Pamunkey and Mattaponi are the only two tribes to maintain reservations originally assigned under the English. As of 2010, the state has recognized eleven Virginia Indian tribes. Others have renewed interest in seeking state and Federal recognition since the celebration of the 400th anniversary of Jamestown in 2007. State celebrations gave Native American tribes prominent formal roles to showcase their contributions to the state.
Early European exploration
After their discovery of the New World in the 15th century, European states began trying to establish New World colonies. England, the Dutch Republic, France, Portugal, and Spain were the most active.
In 1540, a party led by two Spaniards, Juan de Villalobos and Francisco de Silvera, sent by Hernando de Soto, entered what is now Lee County in search of gold. In 1567, Hernando Moyano led a group of soldiers northward from Joara, called Fort San Juan by the Spanish, to attack and destroy the Chisca village of Maniatique near present-day Saltville.
Another Spanish party, captained by Antonio Velázquez in the caravel Santa Catalina, explored the lower Chesapeake Bay region of Virginia in mid-1561 under the orders of Ángel de Villafañe. During this voyage, two Kiskiack youths, including Don Luis, were taken back to Spain. In 1566, an expedition sent from Spanish Florida by Pedro Menéndez de Avilés reached the Delmarva Peninsula. The expedition, consisting of two Dominican friars, thirty soldiers, and Don Luis, was a failed effort to set up a Spanish colony in the Chesapeake, which the Spanish believed to be an opening to the fabled Northwest Passage.
In 1570, Spanish Jesuits established the Ajacán Mission on the lower peninsula. However, in 1571 it was destroyed by Don Luis and a party of his indigenous allies. In August 1572, Pedro Menéndez de Avilés arrived from St. Augustine with thirty soldiers and sailors to take revenge for the massacre of the Jesuits, and hanged approximately 20 natives. In 1573, the governor of Spanish Florida, Pedro Menéndez de Márquez, conducted further exploration of the Chesapeake. In the 1580s, Captain Vicente González led several voyages into the Chesapeake in search of English settlements in the area. However, Spain did not attempt another colony after the failure of the Ajacán Mission.
The Roanoke Colony was the first English colony in the New World. It was founded at Roanoke Island in what was then Virginia, now part of Dare County, North Carolina. Between 1584 and 1587, there were two major groups of settlers sponsored by Sir Walter Raleigh who attempted to establish a permanent settlement at Roanoke Island, and each failed. The final group disappeared completely after supplies from England were delayed three years by a war with Spain. Because they disappeared, they were called "The Lost Colony."
The name Virginia came from information gathered by the Raleigh-sponsored English explorations along what is now the North Carolina coast. Philip Amadas and Arthur Barlowe reported that a regional "king" named Wingina ruled a land of Wingandacoa. Queen Elizabeth modified the name to "Virginia", perhaps in part noting her status as the "Virgin Queen." Though the word is Latinate, it stands as the oldest English-language place-name in the United States.
On the second voyage, Raleigh was to learn that, while the chief of the Secotans was indeed called Wingina, the expression wingandacoa heard by the English upon arrival actually meant "What good clothes you wear!" in Carolina Algonquian, and was not the native name of the country as previously misunderstood.
Virginia Company of London
After the death of Queen Elizabeth I in 1603, King James I assumed the throne of England. After years of war, England was strapped for funds, so he granted responsibility for England's New World colonization to the Virginia Company, which was incorporated as a joint stock company by a proprietary charter drawn up in 1606. There were two competing branches of the Virginia Company, and each hoped to establish a colony in Virginia in order to exploit gold (which the region did not actually have), to establish a base of support for English privateering against Spanish ships, and to spread Protestantism to the New World in competition with Spain's spread of Catholicism. Within the Virginia Company, the Plymouth Company branch was assigned a northern portion of the area known as Virginia, and the London Company branch the area to the south.
In December 1606, the London Company dispatched a group of 104 colonists in three ships: the Susan Constant, Godspeed, and Discovery, under the command of Captain Christopher Newport. After a long, rough voyage of 144 days, the colonists finally arrived in Virginia on April 26, 1607, at the entrance to the Chesapeake Bay. At Cape Henry, they went ashore, erected a cross, and did a small amount of exploring, an event which came to be called the "First Landing."
Under orders from London to seek a more inland location safe from Spanish raids, they explored the Hampton Roads area and sailed up the newly christened James River to the fall line at what would later become the cities of Richmond and Manchester.
After weeks of exploration, the colonists selected a location and founded Jamestown on May 14, 1607. It was named in honor of King James I (as was the river). However, while the location at Jamestown Island was favorable for defense against foreign ships, the low and marshy terrain was harsh and inhospitable for a settlement. It lacked drinking water, access to game for hunting, or much space for farming. While it seemed favorable that it was not inhabited by the Native Americans, within a short time, the colonists were attacked by members of the local Paspahegh tribe.
The colonists arrived ill-prepared to become self-sufficient. They had planned on trading with the Native Americans for food, were dependent upon periodic supplies from England, and had planned to spend some of their time seeking gold. Leaving the Discovery behind for their use, Captain Newport returned to England with the Susan Constant and the Godspeed, and came back twice during 1608 with the First Supply and Second Supply missions. Trading and relations with the Native Americans was tenuous at best, and many of the colonists died from disease, starvation, and conflicts with the Natives. After several failed leaders, Captain John Smith took charge of the settlement, and many credit him with sustaining the colony during its first years, as he had some success in trading for food and leading the discouraged colonists.
After Smith's return to England in August 1609, there was a long delay in the scheduled arrival of supplies. During the winter of 1609/10 and continuing into the spring and early summer, no more ships arrived. The colonists faced what became known as the "starving time". When the new governor, Sir Thomas Gates, finally arrived at Jamestown on May 23, 1610, along with other survivors of the wreck of the Sea Venture that resulted in Bermuda being added to the territory of Virginia, he discovered that over 80% of the 500 colonists had died; many of the survivors were sick.
Back in England, the Virginia Company was reorganized under its Second Charter, ratified on May 23, 1609, which gave most leadership authority of the colony to the governor, the newly-appointed Thomas West, 3rd Baron De La Warr. In June 1610, he arrived with 150 men and ample supplies. De La Warr began the First Anglo-Powhatan War, against the natives. Under his leadership, Samuel Argall kidnapped Pocahontas, daughter of the Powhatan chief, and held her at Henricus.
The economy of the Colony was another problem. Gold had never been found, and efforts to introduce profitable industries in the colony had all failed until John Rolfe introduced his two foreign types of tobacco: Orinoco and Sweet Scented. These produced a better crop than the local variety and with the first shipment to England in 1612, the customers enjoyed the flavor, thus making tobacco a cash crop that established Virginia's economic viability.
The First Anglo-Powhatan War ended when Rolfe married Pocahontas in 1614; peace was established.
During this time, perhaps 5,000 Virginians died of disease or were killed in the Indian massacre of 1622.
George Yeardley took over as Governor of Virginia in 1619. He ended one-man rule and created a representative system of government with the General Assembly, the first elected legislative assembly in the New World.
Also in 1619, the Virginia Company sent 90 single women as potential wives for the male colonists to help populate the settlement. That same year the colony acquired a group of "twenty and odd" Angolans, brought by two English privateers. They were probably the first Africans in the colony. They, along with many European indentured servants, helped to expand the growing tobacco industry, which was already the colony's primary product. Although these black men were treated as indentured servants, this marked the beginning of America's history of slavery. Major importation of African slaves by both African and European profiteers did not take place until much later in the century.
Also in 1619, the plantations and developments were divided into four "incorporations" or "citties" (sic), as they were called. These were Charles Cittie, Elizabeth Cittie, Henrico Cittie, and James Cittie, which included the relatively small seat of government for the colony at Jamestown Island. Each of the four "citties" (sic) extended across the James River, the main conduit of transportation of the era. Elizabeth Cittie, known initially as Kecoughtan (a Native word with many variations in spelling by the English), also included the areas now known as South Hampton Roads and the Eastern Shore.
In some areas, individual rather than communal land ownership or leaseholds were established, providing families with motivation to increase production, improve standards of living, and gain wealth. Perhaps nowhere was this more progressive than at Sir Thomas Dale's ill-fated Henricus, a westerly-lying development located along the south bank of the James River, where natives were also to be provided an education at the Colony's first college.
About 6 miles (9.7 km) south of the falls at present-day Richmond, in Henrico Cittie the Falling Creek Ironworks was established near the confluence of Falling Creek, using local ore deposits to make iron. It was the first in North America.
Virginians were intensely individualistic at this point, weakening the small new communities. According to Breen (1979), their horizon was limited to the present or near future. They believed that the environment could and should be forced to yield quick financial returns. Thus each settler looked out for his own interests at the expense of cooperative ventures. Farms were scattered and few villages or towns were formed. This extreme individualism led to the failure of the settlers to provide defense for themselves against the Indians, resulting in two massacres.
Conflict with natives
While the developments of 1619 and continued growth in the several following years were seen as favorable by the English, many aspects, especially the continued need for more land to grow tobacco, were the source of increasing concern to the Native Americans most affected, the Powhatan.
By this time, the remaining Powhatan Empire was led by Chief Opechancanough, chief of the Pamunkey, and brother of Chief Powhatan. He had earned a reputation as a fierce warrior under his brother's chiefdom. Soon, he gave up on hopes of diplomacy, and resolved to eradicate the English colonists.
On March 22, 1622, the Powhatan killed about 400 colonists in the Indian Massacre of 1622. With coordinated attacks, they struck almost all the English settlements along the James River, on both shores, from Newport News Point on the east at Hampton Roads all the way west upriver to Falling Creek, a few miles above Henricus and John Rolfe's plantation, Varina Farms.
At Jamestown, a warning by an Indian boy named Chanco to his employer, Richard Pace, helped reduce total deaths. Pace secured his plantation, and rowed across the river during the night to alert Jamestown, which allowed colonists some defensive preparation. They had no time to warn outposts, which suffered deaths and captives at almost every location. Several entire communities were essentially wiped out, including Henricus and Wolstenholme Towne at Martin's Hundred. At the Falling Creek Ironworks, which had been seen as promising for the Colony, two women and three children were among the 27 killed, leaving only two colonists alive. The facilities were destroyed.
Despite the losses, two thirds of the colonists survived; after withdrawing to Jamestown, many returned to the outlying plantations, although some were abandoned. The English carried out reprisals against the Powhatan and there were skirmishes and attacks for about a year before the colonists and Powhatan struck a truce.
The colonists invited the chiefs and warriors to Jamestown, where they proposed a toast of liquor. Dr. John Potts and some of the Jamestown leadership had poisoned the natives' share of the liquor, which killed about 200 men. Colonists killed another 50 Indians by hand.
The period between the coup of 1622 and another Powhatan attack on English colonists along the James River (see Jamestown) in 1644 marked a turning point in the relations between the Powhatan and the English. In the early period, each side believed it was operating from a position of power; by 1646, the colonists had taken the balance of power.
The colonists defined the 1644 coup as an "uprising". Chief Opechancanough expected the outcome would reflect what he considered the morally correct position: that the colonists were violating their pledges to the Powhatan. During the 1644 event, Chief Opechancanough was captured. While imprisoned, he was murdered by one of his guards. After the death of Opechancanough, and following the repeated colonial attacks in 1644 and 1645, the remaining Powhatan tribes had little alternative but to accede to the demands of the settlers.
In 1624, the Virginia Company's charter was revoked and the colony transferred to royal authority as a crown colony, but the elected representatives in Jamestown continued to exercise a fair amount of power. Under royal authority, the colony began to expand to the North and West with additional settlements. In 1630, under the governorship of John Harvey, the first settlement on the York River was founded. In 1632, the Virginia legislature voted to build a fort to link Jamestown and the York River settlement of Chiskiack and protect the colony from Indian attacks. This fort would become Middle Plantation and later Williamsburg, Virginia. In 1634, a palisade was built near Middle Plantation. This wall stretched across the peninsula between the York and James rivers and protected the settlements on the eastern side of the lower Peninsula from Indians. The wall also served to contain cattle.
Also in 1634, a new system of local government was created in the Virginia Colony by order of the King of England. Eight shires were designated, each with its own local officers. These shires were renamed as counties only a few years later. They were:
- Accomac (now Northampton County)
- Charles City Shire (now Charles City County)
- Charles River Shire (now York County)
- Elizabeth City Shire (existed as Elizabeth City County until 1952, when it was absorbed into the city of Hampton)
- Henrico (now Henrico County)
- James City Shire (now James City County)
- Warwick River Shire (existed as Warwick County until 1952, then the city of Warwick until 1958 when it was absorbed into the city of Newport News)
- Warrosquyoake Shire (now Isle of Wight County)
Of these, as of 2011, five of the eight original shires of Virginia are considered still extant in essentially their same political form (county), although some boundaries have changed. Retaining the earlier names of the "citties" (sic) also became a source of confusion, producing such seemingly contradictory names as "James City County" and "Charles City County".
The first significant attempts at exploring the Trans-Allegheny region occurred under the administration of Governor William Berkeley. Efforts to explore farther into Virginia were hampered in 1644 when about 500 colonists were killed in another Indian massacre led, once again, by Opechancanough. Berkeley is credited with efforts to develop other sources of income for the colony besides tobacco, such as the cultivation of mulberry trees for silkworms and other crops at his large Green Spring Plantation, now a largely unexplored archaeological site maintained by the National Park Service near Jamestown and Williamsburg.
Most Virginia colonists were loyal to the crown (Charles I) during the English Civil War, but in 1652, Oliver Cromwell sent a force to remove and replace Gov. Berkeley with Governor Richard Bennett, who was loyal to the Commonwealth of England. This governor was a moderate Puritan who allowed the local legislature to exercise most controlling authority, and spent much of his time directing affairs in neighboring Maryland Colony. Bennett was followed by two more "Cromwellian" governors, Edward Digges and Samuel Matthews, although in fact all three of these men were not technically appointees, but were selected by the House of Burgesses, which was really in control of the colony during these years.
Many royalists fled to Virginia after their defeat in the English Civil War. Many of them established what would become the most important families in Virginia. After the Restoration, in recognition of Virginia's loyalty to the crown, King Charles II of England bestowed Virginia with the nickname "The Old Dominion", which it still bears today.
Governor Berkeley, who remained popular after his first administration, returned to the governorship at the end of Commonwealth rule. However, Berkeley's second administration was characterized by many problems. Disease, hurricanes, Indian hostilities, and economic difficulties all plagued Virginia at this time. Berkeley established autocratic authority over the colony. To protect this power, he refused to call new legislative elections for 14 years, preserving a House of Burgesses that supported him. He only agreed to new elections when rebellion became a serious threat.
Berkeley finally did face a rebellion in 1676. Indians had begun attacking encroaching settlers as they expanded to the north and west. Serious fighting broke out when settlers responded to violence with a counter-attack against the wrong tribe, which further extended the violence. Berkeley did not assist the settlers in their fight. Many settlers and historians believe Berkeley's refusal to fight the Indians stemmed from his investments in the fur trade. Large scale fighting would have cut off the Indian suppliers Berkeley's investment relied on. Nathaniel Bacon organized his own militia of settlers who retaliated against the Indians. Bacon became very popular as the primary opponent of Berkeley, not only on the issue of Indians, but on other issues as well. Berkeley condemned Bacon as a rebel, but pardoned him after Bacon won a seat in the House of Burgesses and accepted it peacefully. After a lack of reform, Bacon rebelled outright, captured Jamestown, and took control of the colony for several months. The incident became known as Bacon's Rebellion. Berkeley returned himself to power with the help of the English militia. Bacon burned Jamestown before abandoning it and continued his rebellion, but died of disease. Berkeley severely crushed the remaining rebels.
In response to Berkeley's harsh repression of the rebels, the English government removed him from office. After the burning of Jamestown, the capital was temporarily moved to Middle Plantation, located on the high ground of the Virginia Peninsula equidistant from the James and York Rivers.
College of William and Mary; capital relocated
Local leaders had long desired a school of higher education, for the sons of planters, and for educating the Indians. An earlier attempt to establish a permanent university at Henricus failed after the Indian Massacre of 1622 wiped out the entire settlement. Finally, seven decades later, with encouragement from the Colony's House of Burgesses and other prominent individuals, Reverend Dr. James Blair, the colony's top religious leader, prepared a plan. Blair went to England and in 1693, obtained a charter from King William and Queen Mary II of England. The college was named the College of William and Mary in honor of the two monarchs.
The rebuilt statehouse in Jamestown burned again in 1698. After that fire, upon suggestion of college students, the colonial capital was permanently moved to nearby Middle Plantation again, and the town was renamed Williamsburg, in honor of the king. Plans were made to construct a capitol building and plat the new city according to the survey of Theodorick Bland.
In 1716, Governor Alexander Spotswood led an expedition of westward exploration, later known as the Knights of the Golden Horseshoe Expedition. Spotswood's party reached the top ridge of the Blue Ridge Mountains at Swift Run Gap (elevation 2,365 feet (721 m)). Such was the English colonists' limited understanding of the extent of the land that they thought they had reached the continental divide. There was some expectation that, "like Balboa", they would be overlooking the Pacific Ocean. Spotswood was also behind the creation of Germanna, a settlement of German immigrants brought over for the purpose of iron production, in modern-day Spotsylvania County, itself named after Spotswood.
As the English increasingly used tobacco products, tobacco in the American colonies became a significant economic force, especially in the tidewater region surrounding the Chesapeake Bay. Vast plantations were built along the rivers of Virginia, and social and economic systems developed to grow and distribute this cash crop. Some elements of this system included the importation and employment of slaves to grow crops. Planters would then fill large hogsheads with tobacco and convey them to inspection warehouses. In 1730, the Virginia House of Burgesses standardized and improved the quality of exported tobacco by passing the Tobacco Inspection Act of 1730, which required inspectors to grade tobacco at 40 specified locations.
Historian Edmund Morgan (1975) argues that Virginians in the 1650s—and for the next two centuries—turned to slavery and a racial divide as an alternative to class conflict. "Racism made it possible for white Virginians to develop a devotion to the equality that English republicans had declared to be the soul of liberty." That is, white men became politically much more equal than was possible without a population of low-status slaves.
By 1700 the population reached 70,000 and continued to grow rapidly from a high birth rate, low death rate, importation of slaves from the Caribbean, and immigration from Britain and Germany, as well as from Pennsylvania. The climate was mild, and the farmlands were cheap and fertile.
Historian Douglas Southall Freeman has explained the hierarchical social structure of the 1740s:
West of the fall line... the settlements fringed toward the frontier of the Blue Ridge and the Valley of the Shenandoah. Democracy was real where life was raw. In Tidewater, the flat country East of the fall line, there were no less than eight strata of society. The uppermost and the lowliest, the great proprietors and the Negro slaves, were supposed to be of immutable station. The others were small farmers, merchants, sailors, frontier folk, servants and convicts. Each of these constituted a distinct class at a given time, but individuals and families often shifted materially in station during a single generation. Titles hedged the ranks of the notables. Members of the Council of State were termed both "Colonel" and "Esquire." Large planters who did not bear arms almost always were given the courtesy title of "Gentlemen." So were Church Wardens, Vestrymen, Sheriffs and Trustees of towns. The full honors of a man of station were those of Vestryman [of the Church], Justice [lifetime member of the County Court, appointed by the legislature] and Burgess [elected member of the legislature]. Such an individual normally looked to England and especially to London and sought to live by the social standards of the mother country.
Religion in early Virginia
- Further information: Episcopal Diocese of Virginia § History
The Church of England was legally established in the colony in 1619, and authorities in England sent 22 Anglican clergymen by 1624. In practice, establishment meant that local taxes were funneled through the local parish to handle the needs of local government, such as roads and poor relief, in addition to the salary of the minister. There never was a bishop in colonial Virginia, and in practice the local vestry, consisting of laymen, controlled the parish.
After five very difficult years, during which the majority of the new arrivals quickly died, the colony began to grow more successfully. As in England, the parish became a unit of local importance, equal in power and practical aspects to other entities, such as the courts and even the House of Burgesses and the Governor's Council (the two houses of the Virginia General Assembly). A parish was normally led spiritually by a rector and governed by a committee of generally respected members of the community known as the vestry. A typical parish contained three or four churches, as the parish churches needed to be close enough for people to travel to worship services, where attendance was expected of everyone. Parishes typically had a church farm (or "glebe") to help support them financially.
Expansion and subdivision of the church parishes and, after 1634, the shires (or counties) followed population growth. The intention of the Virginia parish system was to place a church not more than six miles (10 km), an easy riding distance, from every home in the colony. The shires, known as "counties" soon after their initial establishment in 1634, were planned to be not more than a day's ride from all residents, so that court and other business could be attended to in a practical manner.
In the 1740s, the established Anglican church had about 70 parish priests around the colony. There was no bishop, and indeed, there was fierce political opposition to having a bishop in the colony. The Anglican priests were supervised directly by the Bishop of London. Each county court gave tax money to the local vestry, composed of prominent laymen. The vestry provided the priest a glebe of 200 or 300 acres (0.8 to 1.2 km2), a house, and perhaps some livestock. The vestry paid him an annual salary of 16,000 pounds of tobacco, plus 20 shillings for every wedding and funeral. While not poor, the priests lived modestly and their opportunities for improvement were slim.
Religious leaders in England felt they had a duty as missionaries to bring Christianity (or more specifically, the religious practices and beliefs of the Church of England), to the Native Americans. There was an assumption that their own "mistaken" spiritual beliefs were largely the result of a lack of education and literacy, since the Powhatan did not have a written language. Therefore, teaching them these skills would logically result in what the English saw as "enlightenment" in their religious practices, and bring them into the fold of the church, which was part of the government, and hence, a form of control.
The efforts to educate and convert the natives were minimal, though the Indian school remained open until the Revolution. Apart from the Nansemond tribe, which had converted in 1638, and a few isolated individuals over the years, the other Powhatan tribes as a whole did not fully convert to Christianity until 1791.
Alternatives to the established church
The colonists were typically inattentive, uninterested, and bored during church services, according to the ministers, who complained that the people were sleeping, whispering, ogling the fashionably dressed women, walking about and coming and going, or at best looking out the windows or staring blankly into space. The lack of towns meant the church had to serve scattered settlements, while the acute shortage of trained ministers meant that piety was hard to practice outside the home. Some ministers solved their problems by encouraging parishioners to become devout at home, using the Book of Common Prayer for private prayer and devotion (rather than the Bible). This allowed devout Anglicans to lead an active and sincere religious life apart from the unsatisfactory formal church services. However, the stress on private devotion weakened the need for a bishop or a large institutional church of the sort Blair wanted. The stress on personal piety opened the way for the First Great Awakening, which pulled people away from the established church.
Especially in the back country, most families had no religious affiliation whatsoever, and their low moral standards were shocking to proper Englishmen. The Baptists, Methodists, Presbyterians and other evangelicals directly challenged these lax moral standards and refused to tolerate them in their ranks. The evangelicals identified as sinful the traditional standards of masculinity, which revolved around gambling, drinking, and brawling, and arbitrary control over women, children, and slaves. The religious communities enforced new standards, creating a new male leadership role that followed Christian principles and became dominant in the 19th century. Baptists, German Lutherans and Presbyterians funded their own ministers and favored disestablishment of the Anglican church.
The First Great Awakening impacted the area in the 1740s, leading Samuel Davies to be sent from Pennsylvania in 1747 to lead and minister to religious dissenters in Hanover County, Virginia. He eventually helped found the first presbytery in Virginia (the Presbytery of Hanover), evangelized slaves (remarkable in its time), and influenced young Patrick Henry, who traveled with his mother to listen to sermons. The Presbyterians were evangelical dissenters, mostly Scots-Irish Americans, who expanded in Virginia between 1740 and 1758, immediately before the Baptists. Spangler (2008) argues they were more energetic and held frequent services better attuned to the frontier conditions of the colony. Presbyterianism grew in frontier areas where the Anglicans had made little impression, especially the western areas of the Piedmont and the Valley of Virginia. Uneducated whites and blacks were attracted to the emotional worship of the denomination, its emphasis on biblical simplicity, and its psalm singing. Presbyterians were a cross-section of society; they were involved in slaveholding and in patriarchal ways of household management, while the Presbyterian Church government featured few democratic elements. Some local Presbyterian churches, such as Briery in Prince Edward County, owned slaves. The Briery church purchased five slaves in 1766 and raised money for church expenses by hiring them out to local planters.
Helped by the First Great Awakening and numerous itinerant self-proclaimed missionaries, by the 1760s Baptists were drawing Virginians, especially poor white farmers, into a new, much more democratic religion. Slaves were welcome at the services and many became Baptists at this time. Baptist services were highly emotional; the only ritual was baptism, which was applied by immersion (not sprinkling like the Anglicans) only to adults. Opposed to the low moral standards prevalent in the colony, the Baptists strictly enforced their own high standards of personal morality, with special concern for sexual misconduct, heavy drinking, frivolous spending, missing services, cursing, and revelry. Church trials were held frequently, and members who did not submit to discipline were expelled.
Historians have debated the implications of the religious rivalries for the American Revolution. The Baptist farmers did introduce a new egalitarian ethic that largely displaced the semi-aristocratic ethic of the Anglican planters. However, both groups supported the Revolution. There was a sharp contrast between the austerity of the plain-living Baptists and the opulence of the Anglican planters, who controlled local government. Baptist church discipline, mistaken by the gentry for radicalism, actually served to ameliorate disorder. As the population became more dense, the county court and the Anglican Church were able to increase their authority. The Baptists protested vigorously; the social disorder of the period stemmed chiefly from the ruling gentry's disregard of public need. The vitality of the religious opposition made the conflict between 'evangelical' and 'gentry' styles a bitter one. The strength of the evangelical movement's organization determined its ability to mobilize power outside the conventional authority structure. The struggle for religious toleration erupted and was played out during the American Revolution, as the Baptists, in alliance with Anglicans Thomas Jefferson and James Madison, worked successfully to disestablish the Anglican church.
Methodist missionaries were also active in the late colonial period. From 1776 to 1815 Methodist Bishop Francis Asbury made 42 trips into the western parts to visit Methodist congregations. He preached at Benns Methodist Church, near Smithfield, Virginia in 1804. Methodists encouraged an end to slavery, and welcomed free blacks and slaves into active roles in the congregations. Like the Baptists, Methodists made conversions among slaves and free blacks, and provided more of a welcome to them than in the Anglican Church. Some blacks were selected as preachers. During the Revolutionary War, about 700 Methodist slaves sought freedom behind British lines. The British transported them and other Black Loyalists, as they were called, for resettlement to its colony of Nova Scotia. In 1791 Britain helped some of the Black Loyalists, who had encountered racism among other Loyalists, and problems with the climate and land given to them, to resettle in Sierra Leone in Africa.
Following the Revolution, in the 1780s, itinerant Methodist preachers carried copies of an anti-slavery petition in their saddlebags throughout the state, calling for an end to slavery. In addition, they encouraged slaveholders to manumit their slaves. So many slaveholders did so that the proportion of free blacks in Virginia increased in the first two decades after the Revolutionary War to 7.3 percent of the population, from less than one percent. At the same time, counter-petitions were circulated. The petitions were presented to the Assembly and debated, but no legislative action was taken, and after 1800 religious opposition to slavery gradually diminished as the institution regained economic importance following the invention of the cotton gin.
Religious freedom and disestablishment
Although the Act of Toleration of 1689 had allowed freedom of worship, the Baptists and Presbyterians were subject to many legal constraints and faced growing persecution; between 1768 and 1774, about half of the Baptist ministers in Virginia were jailed for preaching. At the start of the Revolution, the Anglican Patriots realized that they needed dissenter support for effective wartime mobilization, so they met most of the dissenters' demands in return for their support of the war effort.
After the united colonies' victory at Yorktown, the Anglican establishment sought to reintroduce state support for religion. This effort failed when non-Anglicans gave their support to Thomas Jefferson's "Bill for Establishing Religious Freedom", which eventually became law in 1786. With freedom of religion the new watchword, the Church of England was dis-established in Virginia. Most ministers were Loyalists and returned to England. When possible, worship continued in the usual fashion, but the local vestry no longer distributed tax money or had local government functions such as poor relief. The Right Reverend James Madison (1749–1812), a cousin of Patriot James Madison, was appointed in 1790 as the first Episcopal Bishop of Virginia and he slowly rebuilt the denomination within freedom of choice of belief and worship.
Revolutionary sentiments first began appearing in Virginia shortly after the French and Indian War ended in 1763. The very same year, the British and Virginian governments clashed in the case of Parson's Cause. The Virginia legislature had passed the Two-Penny Act to stop clerical salaries from inflating. King George III vetoed the measure, and clergy sued for back salaries. Patrick Henry first came to prominence by arguing in the case against the veto, which he declared tyrannical.
The British government had accumulated a great deal of debt through spending on its wars. To help pay off this debt, Parliament passed the Sugar Act in 1764 and the Stamp Act in 1765. The General Assembly opposed the passage of the Sugar Act on the grounds of no taxation without representation. Patrick Henry opposed the Stamp Act in the Burgesses with a famous speech advising George III that "Caesar had his Brutus, Charles I his Cromwell..." and that the king "may profit by their example." The legislature passed the "Virginia Resolves" opposing the tax. Governor Francis Fauquier responded by dismissing the Assembly.
Opposition continued after the resolves. The Northampton County court overturned the Stamp Act on February 8, 1766. Various political groups, including the Sons of Liberty, met and issued protests against the act. Most notably, Richard Bland published a pamphlet entitled An Enquiry into the Rights of the British Colonies. This document set out one of the basic political principles of the Revolution by stating that Virginia was a part of the British Empire, not the Kingdom of Great Britain, so it owed allegiance only to the Crown, not to Parliament.
The Stamp Act was repealed, but additional taxation from the Revenue Act and the 1769 attempt to transport Bostonian rioters to London for trial incited more protest from Virginia. The Assembly met to consider resolutions condemning the transport of the rioters, but Governor Botetourt, while sympathetic, dissolved the legislature. The Burgesses reconvened in Raleigh Tavern and made an agreement to ban British imports. Britain gave up the attempt to extradite the prisoners and lifted all taxes except the tax on tea in 1770.
In 1773, because of a renewed attempt to extradite Americans to Britain, Richard Henry Lee, Thomas Jefferson, Patrick Henry, George Mason, and others created a committee of correspondence to deal with problems with Britain. Unlike other such committees of correspondence, this one was an official part of the legislature.
Following the closure of the port in Boston and several other offenses, the Burgesses approved June 1, 1774 as a day of "Fasting, Humiliation, and Prayer" in a show of solidarity with Massachusetts. The Governor, Lord Dunmore, dismissed the legislature. The first Virginia Convention was held August 1–6 to respond to the growing crisis. The convention approved a boycott of British goods, expressed solidarity with Massachusetts, and elected delegates to the Continental Congress where Virginian Peyton Randolph was selected as president of the Congress.
On April 20, 1775, a day after the Battle of Lexington and Concord, Dunmore ordered royal marines to remove the gunpowder from the Williamsburg Magazine to a British ship. Patrick Henry led a group of Virginia militia from Hanover in response to Dunmore's order. Carter Braxton negotiated a resolution to the Gunpowder Incident by transferring royal funds as payment for the powder. The incident exacerbated Dunmore's declining popularity. He fled the Governor's Palace to the British ship Fowey at Yorktown. On November 7, Dunmore issued a proclamation declaring Virginia was in a state of rebellion and that any slave fighting for the British would be freed. By this time, George Washington had been appointed head of the American forces by the Continental Congress and Virginia was under the political leadership of a Committee of Safety formed by the Third Virginia Convention in the governor's absence.
On December 9, 1775, Virginia militia moved on the governor's forces at the Battle of Great Bridge. The British had held a fort that guarded the land route to Norfolk. The British feared the militia, who had no cannon for a siege, would receive reinforcements, so they abandoned the fort and attacked. The militia won the 30-minute battle. Dunmore responded by bombarding Norfolk with his ships on January 1, 1776.
The Fifth Virginia Convention met on May 6 and declared Virginia a free and independent state on May 15, 1776. The convention instructed its delegates to introduce a resolution for independence at the Continental Congress. Richard Henry Lee introduced the measure on June 7. While the Congress debated, the Virginia Convention adopted George Mason's Bill of Rights (June 12) and a constitution (June 29) which established an independent commonwealth. Congress approved Lee's proposal on July 2 and approved Jefferson's Declaration of Independence on July 4.
The constitution of the Fifth Virginia Convention created a system of government for the state that would last for 54 years. The constitution provided for a chief magistrate and a bicameral legislature consisting of the House of Delegates and the Senate. The legislature elected a governor each year (picking Patrick Henry to be the first) and a council of eight for executive functions. In October, the legislature appointed Jefferson, Edmund Pendleton, and George Wythe to adapt the existing body of Virginia law to the new constitution.
After the Battle of Great Bridge, little military conflict took place on Virginia soil for the first part of the American Revolutionary War. Nevertheless, Virginia sent forces to help in the fighting to the North and South, including Daniel Morgan and his company of marksmen who fought in early battles in the north. Charlottesville served as a prison camp for the Convention Army, Hessian and British soldiers captured at Saratoga. Virginia also sent forces to its frontier in the northwest, which then included much of the Ohio Country. George Rogers Clark led forces in this area and captured the fort at Kaskaskia and won the Battle of Vincennes, capturing the royal governor, Henry Hamilton. Clark maintained control of areas south of the Ohio River for most of the war, but was unable to make gains in the Indian-dominated territories north of the river.
War returns to Virginia
The British brought the war back to coastal Virginia in May 1779 when Admiral George Collier landed troops at Hampton Roads and used Portsmouth (after destroying the naval yard) as a base of attack. The move was part of an attempted blockade of trade with the West Indies. The British abandoned the plan when reinforcements from General Henry Clinton failed to arrive to support Collier.
Fearing the vulnerability of Williamsburg, then-Governor Thomas Jefferson moved the capital farther inland to Richmond in 1780. That October, the British made another attempt at invading Virginia. British General Alexander Leslie entered the Chesapeake with 2,500 troops and used Portsmouth as a base; however, after the British defeat at the Battle of Kings Mountain, Leslie moved to join General Charles Cornwallis farther south. In December, Benedict Arnold, who had betrayed the Revolution and become a general for the British, attacked Richmond with 1,000 soldiers and burned part of the city before the Virginia militia drove his army out. Arnold moved his base of operations to Portsmouth and was later joined by another 2,000 troops under General William Phillips. Phillips led an expedition that destroyed military and economic targets against ineffectual militia resistance. The state's defenses, led by General Friedrich Wilhelm, Baron von Steuben, put up resistance in the April 1781 Battle of Blandford but were forced to retreat.
George Washington sent the French General Lafayette to lead the defense of Virginia. Lafayette marched south to Petersburg, preventing Phillips from immediately taking the town. Cornwallis, frustrated in the Carolinas, moved up from North Carolina to join Phillips and Arnold, and began to pursue Lafayette's smaller force. Lafayette had only 3,200 troops to face Cornwallis's 7,200. The outnumbered Lafayette avoided direct confrontation and could do little more than harass Cornwallis with a series of skirmishes. Lafayette retreated to Fredericksburg, met up with General Anthony Wayne, and then marched into the southwest. Cornwallis dispatched two smaller missions: 500 soldiers under Colonel John Graves Simcoe to take the arsenal at Point of Fork and 250 under Colonel Banastre Tarleton to march on Charlottesville and capture Gov. Jefferson and the legislature. The expedition to Point of Fork forced Steuben to retreat further, while Tarleton's mission captured only seven legislators and some officers; Jack Jouett had ridden all night to warn Jefferson and the legislators of Tarleton's coming. Cornwallis reunited his army at Elk Hill and marched to the Tidewater region. Lafayette, uniting with von Steuben, now had 5,000 troops and followed Cornwallis.
Under orders from General Clinton, Cornwallis moved down the Virginia Peninsula towards the Chesapeake Bay, where Clinton planned to extract part of the army for a siege of New York City. Cornwallis passed through Williamsburg and near Jamestown. When Cornwallis appeared to be moving to cross the James River, Lafayette saw a chance to attack him during the crossing and sent 800 troops under General Wayne against what they believed to be Cornwallis's rear guard. Cornwallis had set a trap, and Wayne was very nearly caught by the much larger, 5,000-soldier main body of Cornwallis's forces at the Battle of Green Spring on July 6, 1781. Wayne ordered a charge against Cornwallis in order to feign greater strength and stop the British advance. Casualties were light, with the Americans losing 140 and the British 75, but the ploy allowed the Americans to escape.
Cornwallis moved his troops across the James to Portsmouth to await Clinton's orders. Clinton decided that a position on the peninsula must be held and that Yorktown would be a valuable naval base. Cornwallis received orders to move his troops to Yorktown and begin construction of fortifications and a naval yard. The Americans had initially expected Cornwallis to move either to New York or the Carolinas and started to make arrangements to move from Virginia. Once they discovered the fortifications at Yorktown, the Americans began to place themselves around the city. Gen. Washington saw the opportunity for a major victory. He moved a portion of his troops, along with Rochambeau's French troops, from New York to Virginia. The plan hinged on French reinforcements of 3,200 troops and a large naval force under the Admiral de Grasse. On September 5, Admiral de Grasse defeated a fleet of the Royal Navy at the Battle of the Virginia Capes. The defeat ensured French dominance of the waters around Yorktown, thereby preventing Cornwallis from receiving troops or supplies and removing the possibility of evacuation. Between October 6 and 17 the American forces laid siege to Yorktown. Outgunned and completely surrounded, Cornwallis decided to surrender. Papers for surrender were officially signed on October 19. As a result of the defeat, the king lost control of Parliament and the new British government offered peace in April 1782. The Treaty of Paris of 1783 officially ended the war.
Early Republic and antebellum periods
Victory in the Revolution brought peace and prosperity to the new state, as export markets in Europe reopened for its tobacco.
While the old local elites were content with the status quo, younger veterans of the war had developed a national identity. Led by George Washington and James Madison, Virginia played a major role in the Constitutional Convention of 1787 in Philadelphia. Madison proposed the Virginia Plan, which would give representation in Congress according to total population, including a proportion of slaves. Virginia was the most populous state, and it was allowed to count all of its white residents and 3/5 of the enslaved African Americans for its congressional representation and its electoral vote. (Only white men who owned a certain amount of property could vote.) Ratification was bitterly contested; the pro-Constitution forces prevailed only after promising to add a Bill of Rights. The Virginia Ratifying Convention approved the Constitution by a vote of 89–79 on June 25, 1788, making it the tenth state to enter the Union.
Madison played a central role in the new Congress, while Washington was the unanimous choice as first president. He was followed by the Virginia Dynasty, including Thomas Jefferson, Madison, and James Monroe, giving the state four of the first five presidents.
Slavery and freedmen in Antebellum Virginia
The Revolution meant change and sometimes political freedom for enslaved African Americans, too. Tens of thousands of slaves from southern states escaped to British lines and freedom during the war. Thousands left with the British for resettlement in their colonies of Nova Scotia and Jamaica; others went to England; others disappeared into rural and frontier areas or the North.
Inspired by the Revolution and evangelical preachers, numerous slaveholders in the Chesapeake region manumitted some or all of their slaves, during their lifetimes or by will. From 1,800 persons in 1782, the total population of free blacks in Virginia increased to 12,766 (4.3 percent of blacks) in 1790, and to 30,570 in 1810; the percentage change was from free blacks' comprising less than one percent of the total black population in Virginia to 7.2 percent by 1810, even as the overall population increased. One planter, Robert Carter III, freed more than 450 slaves in his lifetime, more than any other planter. George Washington freed all of his slaves at his death.
Many free blacks migrated from rural areas to towns such as Petersburg, Richmond, and Charlottesville for jobs and community; others migrated with their families to the frontier where social strictures were more relaxed. Among the oldest black Baptist congregations in the nation were two founded near Petersburg before the Revolution. Each moved into the city and built churches by the early 19th century.
Twice slave rebellions broke out in Virginia: Gabriel's Rebellion in 1800, and Nat Turner's Rebellion in 1831. White reaction was swift and harsh, and militias killed many innocent free blacks and black slaves as well as those directly involved in the rebellions. After the second rebellion, the legislature passed laws restricting the rights of free people of color: they were excluded from bearing arms, serving in the militia, gaining education, and assembling in groups. As bearing arms and serving in the militia were considered obligations of free citizens, free blacks came under severe constraints after Nat Turner's rebellion.
Beginning in the 1750s, the Ohio Company of Virginia was created to survey and settle the colony's new western lands. Following the French and Indian War, westward settlement by Virginians was limited to the more southern portions of the American Old West. In 1784 Virginia relinquished its claims to the Northwest Territory, except for the Virginia Military District, which was reserved for land grants to veterans of the Revolutionary War. In 1792, three western counties formed Kentucky.
As the new nation of the United States of America experienced growing pains and began to speak of Manifest Destiny, Virginia, too, found its role in the young republic to be changing and challenging. Beginning with the Louisiana Purchase, many of the Virginians whose grandparents had created the Virginia Establishment began to expand westward. Famous Virginian-born Americans affected not only the destiny of the state of Virginia, but the rapidly developing American Old West. Virginians Meriwether Lewis and William Clark were influential in their famous expedition to explore the Missouri River and possible connections to the Pacific Ocean. Notable names such as Stephen F. Austin, Edwin Waller, Haden Harrison Edwards, and Dr. John Shackelford were famous Texan pioneers from Virginia. Even eventual Civil War general Robert E. Lee distinguished himself as a military leader in Texas during the 1846–1848 Mexican-American War.
As the western reaches of Virginia were developed in the first half of the 19th century, the vast differences in the agricultural basis, culture, and transportation needs of the area became a major issue for the Virginia General Assembly. In the older, eastern portion, slavery contributed to the economy. While planters were moving away from labor-intensive tobacco to mixed crops, they still held numerous slaves, and leasing them out or selling them was also part of their economic prospects. Slavery had become an economic institution upon which planters depended. Most of this area drained to the Atlantic Ocean. In the western reaches, families farmed smaller homesteads, mostly without enslaved or hired labor. Settlers were expanding the exploitation of resources: mining of minerals and harvesting of timber. The land drained into the Ohio River Valley, and trade followed the rivers.
Representation in the state legislature was heavily skewed in favor of the more populous eastern areas and the historic planter elite. This was compounded by the partial counting of slaves toward population for apportionment; as neither slaves nor women had the vote, this gave more power to white men in the east. The legislature's efforts to mediate the disparities ended without meaningful resolution, although the state held a constitutional convention on representation issues. Thus, at the outset of the American Civil War, Virginia was caught not only in a national crisis, but in a long-standing controversy within its own boundaries. While other border states had similar regional differences, Virginia had a long history of east-west tensions which finally came to a head; it was the only state to divide into two separate states during the War.
Infrastructure and Industrial Revolution
After the Revolution, various infrastructure projects began to be developed, including the Dismal Swamp Canal, the James River and Kanawha Canal, and various turnpikes. Virginia was home to the first of all Federal infrastructure projects under the new Constitution, the Cape Henry Light of 1792, located at the mouth of the Chesapeake Bay. Following the War of 1812, several Federal national defense projects were undertaken in Virginia. Drydock Number One was constructed in Portsmouth in 1827. Across the James River, Fort Monroe was built to defend Hampton Roads, completed in 1834.
In the 1830s, railroads began to be built in Virginia. In 1831, the Chesterfield Railroad began hauling coal from the mines in Midlothian to docks at Manchester (near Richmond), powered by gravity and draft animals. The first railroad in Virginia to be powered by locomotives was the Richmond, Fredericksburg and Potomac Railroad, chartered in 1834 with the intent to connect with steamboat lines at Aquia Landing running to Washington, D.C. Soon after, others (with equally descriptive names) followed: the Richmond and Petersburg Railroad and Louisa Railroad in 1836, the Richmond and Danville Railroad in 1847, the Orange and Alexandria Railroad in 1848, and the Richmond and York River Railroad. In 1849, the Virginia Board of Public Works established the Blue Ridge Railroad. Under engineer Claudius Crozet, the railroad successfully crossed the Blue Ridge Mountains via the Blue Ridge Tunnel at Afton Mountain.
With extensive iron deposits, especially in the western counties, Virginia was a pioneer in the iron industry. The first ironworks in the New World was established at Falling Creek in 1619, though it was destroyed in 1622. There would eventually grow to be 80 ironworks, charcoal furnaces and forges, with 7,000 hands at any one time, about 70 percent of them slaves. Ironmasters hired slaves from local slave owners because they were cheaper than white workers, easier to control, and could not switch to a better employer. But the work ethic was weak, because the wages went to the owner, not to the workers, who were forced to work hard, were poorly fed and clothed, and were separated from their families. Virginia's industry increasingly fell behind Pennsylvania, New Jersey and Ohio, which relied on free labor. Bradford (1959) recounts the many complaints about slave laborers and argues that the over-reliance upon slaves contributed to the ironmasters' failure to adopt improved methods of production, for fear the slaves would sabotage them. Most of the blacks were unskilled manual laborers, although Lewis (1977) reports that some held skilled positions.
Virginia convened a convention on secession on February 13, 1861, after six states had seceded to form the Confederate States of America on February 4. Unionist members blocked secession, but on April 15 Lincoln called for troops from all states still in the Union in response to the firing on Fort Sumter. That meant Federal troops would cross Virginia on their way south to subdue South Carolina. On April 17, 1861, the convention voted to secede. The Confederacy rewarded the state by moving the national capital from Montgomery, Alabama, to Richmond in late May, a decision that exposed the Confederate capital to unrelenting attacks and made Virginia a continuous battleground. Virginians ratified the articles of secession on May 23. The following day, the Union army moved into northern Virginia and captured Alexandria without a fight, controlling it for the remainder of the war.
The first major battle of the Civil War occurred on July 21, 1861. Union forces attempted to take control of the railroad junction at Manassas, but the Confederate Army had moved its forces by train to meet the Union. The Confederates won the First Battle of Manassas (known as "Bull Run" in Northern naming convention). Both sides mobilized for war; the year went on without another major fight.
Men from all economic and social levels, both slaveholders and nonslaveholders, as well as former Unionists, enlisted in great numbers. The only areas that sent few or no men to fight for the Confederacy were those with few slaves, a high percentage of poor families, and a history of opposition to secession; they lay on the border with the North and were sometimes under Union control.
Richmond and war industry
After Virginia joined in secession, the capital of the Confederate States of America was relocated from Montgomery, Alabama, to Richmond. The White House of the Confederacy, located a few blocks north of the State Capitol, was home to the family of Confederate President Jefferson Davis. A major center of iron production during the Civil War was located in Richmond at the Tredegar Iron Works. Tredegar was run partially by slave labor, and it produced most of the artillery for the war, making Richmond an important point to defend.
Petersburg became a manufacturing center, as well as a city where free black artisans and craftsmen could make a living. In 1860 half its population was black, and of those, one-third were free blacks, the largest such population in the state. Richmond and Petersburg were linked by railroad before the Civil War, and the latter was an important shipping point for goods. Saltville was a primary source of Confederate salt (critical for food preservation and thus feeding the military) during the war, leading to the two Battles of Saltville. The most industrialized area of Virginia, around Wheeling, stayed loyal to the Union.
West Virginia breaks away
The western counties could not tolerate the Confederacy. Breaking away, they first formed a loyal Union government of Virginia (recognized by Washington), known as the Restored Government of Virginia. The Restored government did little except give its permission for Congress to form the new state of West Virginia in 1862.
At the Richmond secession convention on April 17, 1861, the delegates from the western counties voted 17 in favor of and 30 against secession. From May to August 1861, a series of Unionist conventions met in Wheeling; the Second Wheeling Convention constituted itself as a legislative body called the Restored Government of Virginia. It declared that Virginia was still in the Union but that the state offices were vacant, and it elected a new governor, Francis H. Pierpont; this body gained formal recognition from the Lincoln administration on July 4. On August 20 the Wheeling body passed an ordinance for the creation of a new state; it was put to public vote on October 24. The vote favored the new state, West Virginia, which was distinct from the Pierpont government; the latter persisted until the end of the war. Congress and Lincoln approved, and, after providing for gradual emancipation of slaves in the new state constitution, West Virginia became the 35th state on June 20, 1863.
During the War, West Virginia contributed about 32,000 soldiers to the Union Army and about 10,000 to the Confederate cause. The government in Richmond did not recognize the new state, and Confederates did not vote there. Everyone realized the decision would be made on the battlefield, and the government in Richmond sent in Robert E. Lee. But Lee found little local support and was defeated by Union forces from Ohio. Union victories in 1861 drove the Confederate forces out of the Monongahela and Kanawha valleys, and throughout the remainder of the war the Union held the region west of the Alleghenies and controlled the Baltimore and Ohio Railroad in the north. The new state was not subject to Reconstruction.
Later war years
For the remainder of the war, battles were fought across Virginia, including the Seven Days Battles, the Battle of Fredericksburg, the Battle of Chancellorsville, the Battle of Brandy Station, the Overland Campaign and the Battle of the Wilderness, culminating in the Siege of Petersburg. In April 1865, Richmond was burned by a retreating Confederate Army and was returned to Union control. The Confederate government fled southwest to Danville, with the Army of Northern Virginia following. Days later, the Army of Northern Virginia surrendered at Appomattox.
Virginia had been devastated by the war, with its infrastructure (such as railroads) in ruins, many plantations burned out, and large numbers of refugees without jobs, food or supplies beyond rations provided by the Union Army, especially its Freedmen's Bureau.
Historian Mary Farmer-Kaiser reports that white landowners complained to the Bureau that freedwomen's unwillingness to work in the fields was evidence of their laziness, and asked the Bureau to force them to sign labor contracts. In response, many Bureau officials "readily condemned the withdrawal of freedwomen from the work force as well as the 'hen pecked' husbands who allowed it." While the Bureau did not force freedwomen to work, it did force freedmen to work or be arrested as vagrants. Furthermore, agents urged poor unmarried mothers to give their older children up as apprentices to work for white masters. Farmer-Kaiser concludes that "Freedwomen found both an ally and an enemy in the bureau."
There were three phases in Virginia's Reconstruction era: wartime, presidential, and congressional. Immediately after the war President Andrew Johnson recognized the Francis Harrison Pierpont government as legitimate and restored local government. The Virginia legislature passed Black Codes that severely restricted Freedmen's mobility and rights; they had only limited rights and were not considered citizens, nor could they vote. The state ratified the 13th Amendment to abolish slavery and revoked the 1861 ordinance of secession. Johnson was satisfied that Reconstruction was complete.
Republicans in Congress, however, refused to seat the newly elected state delegation; the Radicals wanted better evidence that slavery and similar forms of servitude had been abolished, and that the freedmen had been given the rights of citizens. They were also concerned that Virginia leaders had not renounced Confederate nationalism. After winning large majorities in the 1866 national election, the Radical Republicans gained power in Congress. They put Virginia (and nine other ex-Confederate states) under military rule. Virginia was administered as the "First Military District" in 1867–69 under General John Schofield. Meanwhile, the Freedmen became politically active by joining the pro-Republican Union League, holding conventions, and demanding universal male suffrage and equal treatment under the law, as well as the disfranchisement of ex-Confederates and the seizure of their plantations. McDonough, finding that Schofield was criticized by conservative whites for supporting the Radical cause on the one hand and attacked by Radicals for considering black suffrage premature on the other, concludes that "he performed admirably" by following a middle course between extremes.
Increasingly, a deep split opened up in the Republican ranks. The moderate element had national support and called itself the "True Republicans." The more radical element set out to disfranchise whites, for example by barring from office any man who had served as a private in the Confederate army or had sold food to the Confederate government, and also pressed for land reform. In 1867 the radical James Hunnicutt (1814–1880), a white preacher, editor and Scalawag (a white Southerner supporting Reconstruction), mobilized the black Republican vote by calling for the confiscation of all plantations and turning the land over to Freedmen and poor whites. The "True Republicans" (the moderates), led by former Whigs, businessmen and planters, while supportive of black suffrage, drew the line at property confiscation. A compromise was reached calling for confiscation if the planters tried to intimidate black voters. Hunnicutt's coalition took control of the Republican Party and began to demand the permanent disfranchisement of all whites who had supported the Confederacy. The Virginia Republican Party became permanently split, and many moderate Republicans switched to the opposition "Conservatives". The Radicals won the 1867 election for delegates to a constitutional convention.
The 1868 constitutional convention included 33 white Conservatives and 72 Radicals (of whom 24 were Blacks, 23 were Scalawags, and 21 were Carpetbaggers). Called the "Underwood Constitution" after the presiding officer, its main accomplishments were to reform the tax system and to create a system of free public schools for the first time in Virginia. After heated debates over disfranchising Confederates, the convention approved a Constitution that excluded ex-Confederates from holding office, but allowed them to vote in state and federal elections.
Under pressure from national Republicans to be more moderate, General Schofield continued to administer the state through the Army. He appointed a personal friend, Henry H. Wells, as provisional governor. Wells was a Carpetbagger and a former Union general. Schofield and Wells fought and defeated Hunnicutt and the Scalawag Republicans; they took away the contracts for state printing orders from Hunnicutt's newspaper. The national government ordered elections in 1869 that included a vote on the new Underwood constitution, a separate one on its two disfranchisement clauses that would have permanently stripped the vote from most former rebels, and a separate vote for state officials. The Army enrolled the Freedmen (ex-slaves) as voters but would not allow some 20,000 prominent whites to vote or hold office. The Republicans nominated Wells for governor, as Hunnicutt and most Scalawags went over to the opposition.
The leader of the moderate Republicans, calling themselves "True Republicans," was William Mahone (1826–1895), a railroad president and former Confederate general. He built a coalition of white Scalawag Republicans, some blacks, and ex-Democrats who had formed the Conservative Party. Mahone argued that whites had to accept the results of the war, including civil rights and the vote for Freedmen. He convinced the Conservative Party to drop its own candidate and endorse Gilbert C. Walker, Mahone's candidate for governor. In return, Mahone's people endorsed Conservatives in the legislative races. Mahone's plan worked: the voters in 1869 elected Walker and defeated the proposed disfranchisement of ex-Confederates.
When the new legislature ratified the 14th and 15th Amendments to the U.S. Constitution, Congress seated its delegation, and Virginia's Reconstruction came to an end in January 1870. The Radical Republicans had been ousted in a non-violent election, and Virginia was the only southern state that never elected a civilian government run on Radical Republican principles. Suffering from widespread destruction and difficulties in adapting to free labor, white Virginians generally came to share the postwar bitterness typical of southern attitudes.
Railroad and industrial growth
In addition to those that were rebuilt, new railroads developed after the Civil War. In 1868, under railroad baron Collis P. Huntington, the Virginia Central Railroad was merged and transformed into the Chesapeake and Ohio Railroad. In 1870, several railroads were merged to form the Atlantic, Mississippi and Ohio Railroad, later renamed the Norfolk & Western. In 1880, the towpath of the now-defunct James River & Kanawha Canal was transformed into the Richmond and Allegheny Railroad, which within a decade would merge into the Chesapeake & Ohio. Others would include the Southern Railway, the Seaboard Air Line, and the Atlantic Coast Line; still others would eventually reach into Virginia, including the Baltimore & Ohio and the Pennsylvania Railroad. The rebuilt Richmond, Fredericksburg and Potomac Railroad eventually was linked to Washington, D.C.
In the 1880s, the Pocahontas Coalfield opened up in far southwest Virginia, with others to follow, in turn providing more demand for railroad transportation. In 1909, the Virginian Railway opened, built for the express purpose of hauling coal from the mountains of West Virginia to the ports at Hampton Roads. The growth of railroads resulted in the creation of new towns and the rapid growth of others, including Clifton Forge, Roanoke, Crewe and Victoria. The railroad boom was not without incident: the Wreck of the Old 97 occurred en route from Danville to North Carolina in 1903, later immortalized by a popular ballad.
With the invention of the cigarette rolling machine, and the great increase in smoking in the early twentieth century, cigarettes and other tobacco products became a major industry in Richmond and Petersburg. Tobacco magnates such as Lewis Ginter funded a number of public institutions.
Readjustment, public education, segregation
A division among Virginia politicians occurred in the 1870s, when those who supported a reduction of Virginia's pre-war debt ("Readjusters") opposed those who felt Virginia should repay its entire debt plus interest ("Funders"). Virginia's pre-war debt was primarily for infrastructure improvements overseen by the Virginia Board of Public Works, much of which had been destroyed during the war or lay in the new State of West Virginia.
After his unsuccessful bid for the Democratic nomination for governor in 1877, former Confederate general and railroad executive William Mahone became the leader of the "Readjusters", forming a coalition of conservative Democrats and white and black Republicans. The so-called Readjusters aspired "to break the power of wealth and established privilege" and to promote public education. The party promised to "readjust" the state debt in order to protect funding for newly established public education, and to allocate a fair share of the debt to the new State of West Virginia. Its proposal to repeal the poll tax and increase funding for schools and other public facilities attracted biracial and cross-party support.
The Readjuster Party was successful in electing its candidate, William E. Cameron, as governor, and he served from 1882 to 1886. Mahone served in the U.S. Senate from 1881 to 1887, as did fellow Readjuster Harrison H. Riddleberger, who served from 1883 to 1889. The Readjusters' effective control of Virginia politics lasted until 1883, when they lost majority control of the state legislature, followed by the election of Democrat Fitzhugh Lee as governor in 1885. The Virginia legislature replaced both Mahone and Riddleberger in the U.S. Senate with Democrats.
In 1888 the exception to Readjuster and Democratic control was John Mercer Langston, who was elected to Congress from the Petersburg area on the Republican ticket. He was the first black elected to Congress from the state, and the last for nearly a century. He served one term. A talented and vigorous politician, he was an Oberlin College graduate. He had long been active in the abolitionist cause in Ohio before the Civil War, had been president of the National Equal Rights League from 1864 to 1868, and had created and headed the law department at Howard University, also serving as the university's acting president. When elected, he was president of what became Virginia State University.
While the Readjuster Party faded, the goal of public education remained strong, with institutions established for the education of schoolteachers. In 1884, the state acquired a bankrupt women's college at Farmville and opened it as a normal school. Growth of public education led to the need for additional teachers. In 1908, two additional normal schools were established, one at Fredericksburg and one at Harrisonburg, and in 1910, one at Radford.
After the Readjuster Party disappeared, Virginia Democrats rapidly passed legislation and constitutional amendments that effectively disfranchised African Americans and many poor whites, through the use of poll taxes and literacy tests. They created white, one-party rule under the Democratic Party for the next 80 years. White state legislators passed statutes that restored white supremacy through imposition of Jim Crow segregation. In 1902 Virginia passed a new constitution that reduced voter registration.
The Progressive Era after 1900 brought numerous reforms, designed to modernize the state, increase efficiency, apply scientific methods, promote education and eliminate waste and corruption.
A key leader was Governor Claude Swanson (1906–10), a Democrat who left machine politics behind to win office using the new primary law. Swanson's coalition of reformers in the legislature built schools and highways, raised teacher salaries and standards, promoted the state's public health programs, and increased funding for prisons. Swanson fought against child labor, lowered railroad rates and raised corporate taxes, while systematizing state services and introducing modern management techniques. The state funded a growing network of roads, with much of the work done by black convicts in chain gangs. After Swanson moved to the U.S. Senate in 1910 he promoted Progressivism at the national level as a supporter of President Woodrow Wilson, who had been born in Virginia and was considered a native son. As a power on naval affairs, Swanson promoted the Norfolk Navy Yard and the Newport News Ship Building and Drydock Corporation. Swanson's statewide organization evolved into the "Byrd Organization."
The State Corporation Commission (SCC) was formed as part of the 1902 Constitution, over the opposition of the railroads, to regulate railroad policies and rates. The SCC was independent of parties, courts, and big businesses, and was designed to maximize the public interest. It became an effective agency, which especially pleased local merchants by keeping rates low.
Virginia has a long history of agricultural reformers, and the Progressive Era stimulated their efforts. Rural areas suffered persistent problems, such as declining populations, widespread illiteracy, poor farming techniques, and debilitating diseases among both farm animals and farm families. Reformers emphasized the need to upgrade the quality of elementary education. With federal help, they set up a county agent system (today the Virginia Cooperative Extension) that taught farmers the latest scientific methods for dealing with tobacco and other crops, and taught farm housewives how to maximize their efficiency in the kitchen and nursery.
Some upper-class women, typified by Lila Meade Valentine of Richmond, promoted numerous Progressive reforms, including kindergartens, teacher education, visiting nurses programs, and vocational education for both races. Middle-class white women were especially active in the Prohibition movement. The woman suffrage movement became entangled in racial issues—whites were reluctant to allow black women the vote—and was unable to broaden its base beyond middle-class whites. Virginia women got the vote in 1920, the result of a national constitutional amendment.
In higher education, the key leader was Edwin A. Alderman, president of the University of Virginia, 1904–31. His goal was the transformation of the southern university into a force for state service, intellectual leadership, and educational utility. Alderman successfully professionalized and modernized the state's system of higher education. He promoted international standards of scholarship and a statewide network of extension services. Joined by other college presidents, he promoted the Virginia Education Commission, created in 1910. Alderman's crusade encountered some resistance from traditionalists, and never challenged the Jim Crow system of segregated schooling.
While the progressives were modernizers, there was also a surge of interest in Virginia traditions and heritage, especially among the aristocratic First Families of Virginia (FFV). The Association for the Preservation of Virginia Antiquities (APVA), founded in Williamsburg in 1889, emphasized patriotism in the name of Virginia's 18th-century Founding Fathers. In 1907, the Jamestown Exposition was held near Norfolk to celebrate the tricentennial of the arrival of the first English colonists and the founding of Jamestown.
Attended by numerous federal dignitaries, and serving as the launch point for the Great White Fleet, the Jamestown Exposition also spurred interest in the military potential of the area. The site of the exposition would later become, in 1917, the location of the Norfolk Naval Station. The proximity to Washington, D.C., the moderate climate, and strategic location of a large harbor at the center of the Atlantic seaboard made Virginia a key location during World War I for new military installations. These included Fort Story, the Army Signal Corps station at Langley, Quantico Marine Base in Prince William County, Fort Belvoir in Fairfax County, Fort Lee near Petersburg and Fort Eustis, in Warwick County (now Newport News). At the same time, heavy shipping traffic made the area a target for U-boats, and a number of merchant vessels were attacked or sunk off the Virginia coast.
|This section requires expansion. (November 2009)|
Temperance became an issue in the early 20th century. In 1916, a statewide referendum passed to outlaw the consumption of alcohol. This was overturned in 1933.
After 1930, tourism began to grow with the development of Colonial Williamsburg.
Shenandoah National Park was assembled from newly acquired land, as were the Blue Ridge Parkway and Skyline Drive. The Civilian Conservation Corps played a major role in developing that national park, as well as Pocahontas State Park. By 1940 new highway bridges crossed the lower Potomac, Rappahannock, York, and James Rivers, bringing to an end the long-distance steamboat service which had long served as primary transportation throughout the Chesapeake Bay area. Ferryboats remain today in only a few places.
Blacks comprised a third of the population but lost nearly all their political power. The electorate was so small that from 1905 to 1948 government employees and officeholders cast a third of the votes in state elections. This small, controllable electorate facilitated the formation of a powerful statewide political machine by Harry Byrd (1887–1966), which dominated from the 1920s to the 1960s. Most of the blacks who remained politically active supported the Byrd organization, which in turn protected their right to vote, making Virginia's race relations the most harmonious in the South before the 1950s, according to V.O. Key. Not until Federal civil rights legislation was passed in 1964 and 1965 did African Americans recover the power to vote and the protection of other basic constitutional civil rights.
WWII and Modern era
The economic stimulus of World War II brought full employment for workers, high wages, and high profits for farmers. It brought in many thousands of soldiers and sailors for training. Virginia sent 300,000 men and 4,000 women to the services. The buildup for the war greatly increased the state's naval and industrial economic base, as did the growth of federal government jobs in Northern Virginia and adjacent Washington, DC. The Pentagon was built in Arlington as the largest office building in the world. Additional installations were added: in 1941, Fort A.P. Hill and Fort Pickett opened, and Fort Lee was reactivated. The Newport News shipyard expanded its labor force from 17,000 to 70,000 in 1943, while the Radford Arsenal had 22,000 workers making explosives. Turnover was very high: in one three-month period the Newport News shipyard hired 8,400 new workers as 8,300 others quit.
Cold War and Space Age
In addition to general postwar growth, the Cold War resulted in further growth in both Northern Virginia and Hampton Roads. With the Pentagon already established in Arlington, the newly formed Central Intelligence Agency located its headquarters further afield at Langley (unrelated to the Air Force Base). In the early 1960s, the new Dulles International Airport was built, straddling the Fairfax County-Loudoun County border. Other sites in Northern Virginia included the listening station at Vint Hill. Due to the presence of the U.S. Atlantic Fleet in Norfolk, in 1952 the Allied Command Atlantic of NATO was headquartered there, where it remained for the duration of the Cold War. Later in the 1950s and across the river, Newport News Shipbuilding would begin construction of the USS Enterprise—the world's first nuclear-powered aircraft carrier—and the subsequent atomic carrier fleet.
Virginia also witnessed American efforts in the Space Race. When the National Advisory Committee for Aeronautics was transformed into the National Aeronautics and Space Administration in 1958, the resulting Space Task Group was headquartered at the laboratories of the Langley Research Center. From there, it initiated Project Mercury, and it remained the headquarters of the U.S. manned spaceflight program until its transfer to Houston in 1962. On the Eastern Shore, near Chincoteague, Wallops Flight Facility served as a rocket launch site, including the launch of Little Joe 2 on December 4, 1959, which sent a rhesus monkey, Sam, into suborbital spaceflight. Langley later oversaw the Viking program to Mars.
The new U.S. Interstate highway system begun in the 1950s and the new Hampton Roads Bridge-Tunnel of 1958 helped transform Virginia Beach from a tiny resort town into one of the state's largest cities by 1963, spurring the growth of the Hampton Roads region linked by the Hampton Roads Beltway. In the western portion of the state, completion of the north-south Interstate 81 brought better access and new businesses to dozens of counties over a distance of 300 miles (480 km), as well as facilitating travel by students at the many Shenandoah area colleges and universities. The creation of Smith Mountain Lake, Lake Anna, Claytor Lake, Lake Gaston, and Buggs Island Lake, by damming rivers, attracted many retirees and vacationers to those rural areas. As the century drew to a close, Virginia tobacco growing gradually declined due to health concerns, although not as steeply as in Southern Maryland. A state community college system brought affordable higher education within commuting distance of most Virginians, including those in remote, underserved localities. Other new institutions were founded, most notably George Mason University and Liberty University. Localities such as Danville and Martinsville suffered greatly as their manufacturing industries closed.
Massive resistance and Civil Rights
The state government orchestrated systematic resistance to federal court orders requiring the end of segregation. The state legislature even enacted a package of laws, known as the Stanley plan, to try to evade racial integration in public schools. Prince Edward County even closed all its public schools in an attempt to avoid racial integration, but relented in the face of U.S. Supreme Court rulings. The first black students attended the University of Virginia School of Law in 1950, and Virginia Tech in 1953. In 2008, various actions of the Civil Rights Movement were commemorated by the Virginia Civil Rights Memorial in Richmond.
By the 1980s, Northern Virginia and the Hampton Roads region had achieved the greatest growth and prosperity, chiefly because of employment related to Federal government agencies and defense, as well as an increase in technology in Northern Virginia. Shipping through the Port of Hampton Roads began an expansion which continued into the early 21st century as new container facilities were opened. Coal piers in Newport News and Norfolk had recorded major gains in export shipments by August 2008. The recent expansion of government programs in the areas near Washington has profoundly affected the economy of Northern Virginia, whose population has experienced large growth and great ethnic and cultural diversification, exemplified by communities such as Tysons Corner, Reston and dense, urban Arlington. The subsequent growth of defense projects has also generated a local information technology industry. In recent years, intolerably heavy commuter traffic and the urgent need for both road and rail transportation improvements have been a major issue in Northern Virginia. The Hampton Roads region has also experienced much growth, as have the western suburbs of Richmond in both Henrico and Chesterfield Counties.
Virginia served as a major center for information technology during the early days of the Internet and network communication. Internet and other communications companies clustered in the Dulles Corridor. By 1993, the Washington area had the largest amount of Internet backbone and the highest concentration of Internet service providers. In 2000, more than half of all Internet traffic flowed along the Dulles Toll Road. Bill von Meister founded two Virginia companies that played major roles in the commercialization of the Internet: The Source, based in McLean, Virginia, and Control Video Corporation, the forerunner of America Online. While short-lived, The Source was one of the first online service providers alongside CompuServe. On hand for the launch of The Source, Isaac Asimov remarked, "This is the beginning of the information age." The Source helped pave the way for future online service providers, including another Virginia company founded by von Meister, America Online (AOL). AOL became the largest provider of Internet access during the dial-up era. AOL maintained a Virginia headquarters until the then-struggling company moved in 2007.
In 2006 former Governor of Virginia Mark Warner gave a speech and interview in the massively multiplayer online game Second Life, becoming the first politician to appear in a video game. In 2007 Virginia speedily passed the nation's first spaceflight act by a vote of 99–0 in the House of Delegates. Northern Virginia company Space Adventures is currently the only company in the world offering space tourism. In 2008 Virginia became the first state to pass legislation on Internet safety, with mandatory educational courses for 11- to 16-year-olds.
- Colony of Virginia
- History of Richmond, Virginia, the current state capital
- History of the East Coast of the United States
- History of the Southern United States
- Former counties, cities, and towns of Virginia
- List of newspapers in Virginia in the 18th century
- Peter Kolchin, American Slavery, 1619–1877, New York: Hill and Wang, 1993, p. 28
- "Pocahontas Research Project", Petersburg, VA Official Website, 2006, accessed December 29, 2008
- Brown, Hutch (Summer 2000). "Wildland Burning by American Indians in Virginia". Fire Management Today (Washington, DC: U.S. Department of Agriculture, Forest Service) 60 (3): 32. An engraving after John White watercolor. Sparsely wooded field in background suggests the region's savanna.
- Virginia Indian Tribes, University of Richmond
- cf. Anishinaabe language: danakamigaa: "activity-grounds", i.e. "land of many events [for the People]"
- Berrier Jr., Ralph (September 20, 2009). "The slaughter at Saltville". The Roanoke Times. Retrieved October 9, 2011.
- "Virginia Memory: Virginia Chronology". Library of Virginia. Retrieved October 9, 2011.
- “A” New Andalucia and a Way to the Orient: The American Southeast During the Sixteenth Century. LSU Press. 1 October 2004. pp. 182–184. ISBN 978-0-8071-3028-5. Retrieved 30 March 2013.
- Stephen Adams (2001), The best and worst country in the world: perspectives on the early Virginia landscape, University of Virginia Press, p. 61, ISBN 978-0-8139-2038-2
- Jerald T. Milanich (February 10, 2006). Laboring in the Fields of the Lord: Spanish Missions And Southeastern Indians. University Press of Florida. p. 92. ISBN 978-0-8130-2966-5. Retrieved June 30, 2012.
- Seth Mallios (August 28, 2006). The Deadly Politics of Giving: Exchange And Violence at Ajacan, Roanoke, And Jamestown. University of Alabama Press. pp. 39–43. ISBN 978-0-8173-5336-0. Retrieved June 30, 2012.
- Price, 11
- Thomas C. Parramore; Peter C. Stewart; Tommy L. Bogger (April 1, 2000). Norfolk: The First Four Centuries. University of Virginia Press. p. 12. ISBN 978-0-8139-1988-1. Retrieved March 18, 2012.
- Peter C. Mancall (2007). The Atlantic World and Virginia, 1550-1624. UNC Press Books. pp. 517, 522. ISBN 978-0-8078-3159-5. Retrieved 17 February 2013.
- Three names from the Roanoke Colony are still in use, all based on Native American names. Stewart, George (1945). Names on the Land: A Historical Account of Place-Naming in the United States. New York: Random House. p. 22. ISBN 1-59017-273-6.
- Bernard W. Sheehan, Savagism and civility: Indians and Englishmen in Colonial Virginia (1990) p 226
- T. H. Breen, "Looking Out for Number One: Conflicting Cultural Values in Early Seventeenth-Century Virginia," South Atlantic Quarterly, Summer 1979, Vol. 78 Issue 3, pp. 342–360
- J. Frederick Fausz, "The 'Barbarous Massacre' Reconsidered: The Powhatan Uprising of 1622 and the Historians," Explorations in Ethnic Studies, vol 1 (Jan. 1978), 16–36
- Gleach p. 199
- John Esten Cooke, Virginia: A History of the People (1883) p. 205.
- Wilcomb E. Washburn, The Governor and the Rebel: A History of Bacon’s Rebellion in Virginia (1957)
- Morison, Samuel Eliot (1972). The Oxford History of the American People. New York City: Mentor. p. 76. ISBN 0-451-62600-1.
- Edmund Morgan, American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) p 386
- Heinemann, Old Dominion, New Commonwealth (2007) 83–90
- Douglas Southall Freeman, George Washington (1948) 1:79
- Edward L. Bond and Joan R. Gundersen, The Episcopal Church in Virginia, 1607–2007 (2007) ISBN 978-0-945015-28-4
- Philip Alexander Bruce, Institutional History of Virginia in the Seventeenth Century: An Inquiry into the Religious, Moral, Educational, Legal, Military, and Political Condition of the People, Based on Original and Contemporaneous Records (1910) pp. 55–177
- Rountree p. 161–162, 168–170, 175
- Jacob M. Blosser, "Irreverent Empire: Anglican Inattention in an Atlantic World," Church History, Sept 2008, Vol. 77 Issue 3, pp. 596–628
- Edward L. Bond, "Anglican theology and devotion in James Blair's Virginia, 1685–1743," Virginia Magazine of History and Biography, 1996, Vol. 104 Issue 3, pp. 313–40
- Charles Woodmason, The Carolina Backcountry on the Eve of the Revolution: The Journal and Other Writings of Charles Woodmason, Anglican Itinerant ed. by Richard J. Hooker (1969)
- Janet Moore Lindman, "Acting the Manly Christian: White Evangelical Masculinity in Revolutionary Virginia," William & Mary Quarterly, April 2000, Vol. 57 Issue 2, pp. 393–416
- James H. Smylie, http://pres-outlook.net/reports-a-resources3/presbyterian-heritage-articles3/846.html Retrieved September 18, 2012.
- Presidents of Princeton from princeton.edu. Retrieved September 18, 2012.
- Encyclopedia Virginia http://www.encyclopediavirginia.org/Great_Awakening_in_Virginia_The
- Jewel L. Spangler, Virginians Reborn: Anglican Monopoly, Evangelical Dissent, and the Rise of the Baptists in the Late Eighteenth Century (University Press of Virginia, 2008) ISBN 978-0-8139-2679-7
- Jennifer Oast, "'The Worst Kind of Slavery': Slave-Owning Presbyterian Churches in Prince Edward County, Virginia," Journal of Southern History, Nov 2010, Vol. 76 Issue 4, pp. 867–900
- Spangler, Virginians Reborn: Anglican Monopoly, Evangelical Dissent, and the Rise of the Baptists in the Late Eighteenth Century (2008)
- Richard R. Beeman, "Social Change and Cultural Conflict in Virginia: Lunenburg County, 1746 To 1774," William and Mary Quarterly 1978 35(3): 455–476
- J. Stephen Kroll-Smith, "Transmitting a Revival Culture: The Organizational Dynamic of the Baptist Movement in Colonial Virginia, 1760–1777," Journal of Southern History 1984 50(4): 551–568
- Rhys Isaac, "Evangelical Revolt: The Nature of the Baptists' Challenge to the Traditional Order in Virginia, 1765 To 1775," William and Mary Quarterly 1974 31(3): 345–368
- Cassandra Pybus, "'One Militant Saint': The Much Traveled Life of Mary Perth," Journal of Colonialism & Colonial History, Winter 2008, Vol. 9 Issue 3, p6+
- Peter Kolchin, American Slavery, p. 73
- Richard K. MacMaster, "Liberty or Property? The Methodist Petition for Emancipation in Virginia, 1785," Methodist History, Oct 1971, Vol. 10 Issue 1, pp. 44–55
- John A. Ragosta, "Fighting for Freedom: Virginia Dissenters' Struggle for Religious Liberty during the American Revolution," Virginia Magazine of History and Biography, 2008, Vol. 116 Issue 3, pp. 226–261
- Thomas E. Buckley, Church and State in Revolutionary Virginia, 1776–1787 (1977)
- Pauline Maier, Ratification: The People Debate the Constitution, 1787–1788 (2010) pp. 235–319
- Peter Kolchin, American Slavery: 1619–1877, New York: Hill and Wang, 1994, p. 73
- Kolchin, American Slavery, p. 81
- Andrew Levy, The First Emancipator: The Forgotten Story of Robert Carter, the Founding Father who freed his slaves, New York: Random House, 2005 (ISBN 0-375-50865-1)
- Scott Nesbit, Scales Intimate and Sprawling: Slavery, Emancipation, and the Geography of Marriage in Virginia, Southern Spaces, July 19, 2011. http://southernspaces.org/2011/scales-intimate-and-sprawling-slavery-emancipation-and-geography-marriage-virginia.
- Albert J. Raboteau, Slave Religion: The 'Invisible Institution' in the Antebellum South, New York: Oxford University Press, 2004, p. 137, accessed December 27, 2008
- "Washington Iron Furnace National Register Nomination". Virginia Department of Historic Resources. Retrieved March 23, 2011.
- S. Sydney Bradford, "The Negro Ironworker in Ante Bellum Virginia," Journal of Southern History, May 1959, Vol. 25 Issue 2, pp. 194–206; Ronald L. Lewis, "The Use and Extent of Slave Labor in the Virginia Iron Industry: The Antebellum Era," West Virginia History, Jan 1977, Vol. 38 Issue 2, pp. 141–156
- For a comparison of Virginia and New Jersey see John Bezis-Selfa, "A Tale of Two Ironworks: Slavery, Free Labor, Work, and Resistance in the Early Republic," William & Mary Quarterly, Oct 1999, Vol. 56 Issue 4, pp. 677–700
- Aaron Sheehan-Dean, "Everyman's War: Confederate Enlistment in Civil War Virginia," Civil War History, March 2004, Vol. 50 Issue 1, pp. 5–26
- The U.S. Constitution requires permission of the old state for a new state to form. David R. Zimring, "'Secession in Favor of the Constitution': How West Virginia Justified Separate Statehood during the Civil War," West Virginia History, Fall 2009, Vol. 3 Issue 2, pp. 23–51
- In the statewide vote on May 23, 1861 on secession, the 50 counties of the future West Virginia voted 34,677 to 19,121 to remain in the Union. Richard O. Curry, A House Divided, Statehood Politics & the Copperhead Movement in West Virginia, (1964), pp. 141–147.
- Curry, A House Divided, p. 73.
- Curry, A House Divided, pp. 141–152.
- Charles H. Ambler and Festus P. Summers, West Virginia: The Mountain State ch 15–20
- Otis K. Rice, West Virginia: A History (1985) ch 12–14
- The main scholarly histories are Hamilton James Eckenrode, The Political History of Virginia during the Reconstruction (1904); Richard Lowe, Republicans and Reconstruction in Virginia, 1856–70 (1991); and Jack P. Maddex, Jr., The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970). See also Heinemann et al., New Commonwealth (2007) ch. 11
- Mary Farmer-Kaiser, Freedwomen and the Freedmen's Bureau: Race, Gender, and Public Policy in the Age of Emancipation, (Fordham U.P., 2010), quotes pp. 51, 13
- Richard Lowe, "Another Look at Reconstruction in Virginia," Civil War History, March 1986, Vol. 32 Issue 1, pp. 56–76
- James L. McDonough, "John Schofield as Military Director of Reconstruction in Virginia.," Civil War History, Sept 1969, Vol. 15#3, pp. 237–256
- Eric Foner, Politics and Ideology in the Age of the Civil War (1980) p 146
- James E. Bond, No Easy Walk to Freedom: Reconstruction and the Ratification of the Fourteenth Amendment (Praeger, 1997) p. 156.
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 5
- The Carpetbaggers were Northern whites who had moved to Virginia after the war. Heinemann et al., New Commonwealth (2007) p. 248
- Note: In order to gain public education, black delegates had to accept segregation in the schools.
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 6
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 7
- Walker had 119,535 votes and Wells 101,204. The new Underwood Constitution was approved overwhelmingly, but the disfranchisement clauses were rejected by 3:2 ratios. The new legislature was controlled by the Conservative Party, which soon absorbed the "True Republicans". Eckenrode, The Political History of Virginia during the Reconstruction, p. 411
- Ku Klux Klan chapters were formed in Virginia in the early years after the war, but they played a negligible role in state politics and soon vanished. Heinemann et al., New Commonwealth (2007) p. 249
- Nelson M. Blake, William Mahone of Virginia: Soldier and Political Insurgent (1935)
- Henry C. Ferrell, Claude A. Swanson of Virginia: a political biography (1985)
- George Harrison Gilliam, "Making Virginia Progressive," Virginia Magazine of History and Biography, 1999, Vol. 107 Issue 2, pp. 189–222
- Lex Renda, "The Advent of Agricultural Progressivism in Virginia," Virginia Magazine of History and Biography, 1988, Vol. 96 Issue 1, pp. 55–82
- Lloyd C. Taylor, Jr. "Lila Meade Valentine: The FFV as Reformer," Virginia Magazine of History and Biography, 1962, Vol. 70 Issue 4, pp. 471–487
- Sara Hunter Graham, "Woman Suffrage In Virginia: The Equal Suffrage League and Pressure-Group Politics, 1909–1920," Virginia Magazine of History and Biography, 1993, Vol. 101 Issue 2, pp. 227–250
- Michael Dennis, "Reforming the 'academical village,'" Virginia Magazine of History and Biography, 1997, Vol. 105 Issue 1, pp. 53–86
- James M. Lindgren, "'Virginia Needs Living Heroes': Historic Preservation in the Progressive Era," Public Historian, Jan 1991, Vol. 13 Issue 1, pp. 9–24
- "U-Boat Sinks Schooner Without Any Warning". New York Times. August 17, 1918. Retrieved July 28, 2011.
- "RAIDING U-BOAT SINKS 2 NEUTRALS OFF VIRGINIA COAST". New York Times. June 17, 1918. Retrieved July 28, 2011.
- Michael Lee Pope, "Alcohol as Budget Savior", Arlington Connection, October 14–20, 2009, page 3
- Morgan Kousser, The Shaping of Southern Politics (1974) p 181; Wallenstein, Cradle of America (2007) p 283–4
- V.O. Key, Jr., Southern Politics (1949) p 32
- Charles Johnson, "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR
- "A Brief History of U.S. Fleet Forces Command". U.S. Fleet Forces Command, USN. Retrieved March 17, 2011.
- "Langley's Role in Project Mercury". NASA Langley Research Center. Retrieved March 20, 2011.
- "Giant Leaps Began With "Little Joe"". NASA Langley Research Center. Retrieved March 20, 2011.
- "Viking: Trialblazer For All Mars Research". NASA Langley Research Center. Retrieved March 20, 2011.
- Benjamin Muse, Virginia's Massive Resistance (1961)
- Wallenstein, Peter (Fall 1997). "Not Fast, But First: The Desegregation of Virginia Tech". VT Magazine. Virginia Tech. Retrieved 2008-04-12.
- Donnelly, Sally B. "D.C. Dotcom." Time August 8, 2000. http://www.time.com/time/magazine/article/0,9171,52073-2,00.html
- LIFE: Mark Warner becomes first U.S. politician to campaign in a video game
- Virginia leads the way
- Virginia First State to Require Internet Safety Lessons
- Dabney, Virginius. Virginia: The New Dominion (1971)
- Heinemann, Ronald L., John G. Kolp, Anthony S. Parent Jr., and William G. Shade, Old Dominion, New Commonwealth: A History of Virginia, 1607–2007 (2007). ISBN 978-0-8139-2609-4.
- Morse, J. (1797). "Virginia". The American Gazetteer. Boston, Massachusetts: At the presses of S. Hall, and Thomas & Andrews.
- Rubin, Louis D. Virginia: A Bicentennial History. States and the Nation Series. (1977), popular
- Salmon, Emily J., and Edward D.C. Campbell, Jr., eds. The Hornbook of Virginia history: A Ready-Reference Guide to the Old Dominion's People, Places, and Past 4th edition. (1994)
- Wallenstein, Peter. Cradle of America: Four Centuries of Virginia History (2007). ISBN 978-0-7006-1507-0.
- WPA. Virginia: A Guide to the Old Dominion (1940) famous guide to every locality; strong on society, economy and culture online edition
- Younger, Edward, and James Tice Moore, eds. The Governors of Virginia, 1860–1978 (1982)
- Tarter, Brent, "Making History in Virginia," Virginia Magazine of History and Biography Volume: 115. Issue: 1. 2007. pp. 3+. online edition
- Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861 (1910) full text online
- Billings, Warren M., John E. Selby, and Thad W, Tate. Colonial Virginia: A History (1986)
- Bond, Edward L. Damned Souls in the Tobacco Colony: Religion in Seventeenth-Century Virginia (2000),
- Breen T. H. Puritans and Adventurers: Change and Persistence in Early America (1980). 4 chapters on colonial social history online edition
- Breen, T. H. Tobacco Culture: The Mentality of the Great Tidewater Planters on the Eve of Revolution (1985)
- Breen, T. H., and Stephen D. Innes. "Myne Owne Ground": Race and Freedom on Virginia's Eastern Shore, 1640–1676 (1980)
- Brown, Kathleen M. Good Wives, Nasty Wenches, and Anxious Patriarchs: Gender, Race, and Power in Colonial Virginia (1996) excerpt and text search
- Byrd, William. The Secret Diary of William Byrd of Westover, 1709–1712 (1941) ed. by Louis B. Wright and Marion Tinling online edition; famous primary source; very candid about his private life
- Bruce, Philip Alexander. Institutional History of Virginia in the Seventeenth Century: An Inquiry into the Religious, Moral, Educational, Legal, Military, and Political Condition of the People, Based on Original and Contemporaneous Records (1910) online edition
- Freeman, Douglas Southall; George Washington: A Biography Volume: 1–7. (1948). Pulitzer Prize. vol 1 online
- Gleach, Frederic W. Powhatan's World and Colonial Virginia: A Conflict of Cultures (1997).
- Isaac, Rhys. Landon Carter's Uneasy Kingdom: Revolution and Rebellion on a Virginia Plantation (2004)
- Isaac, Rhys. The Transformation of Virginia, 1740–1790 (1982, 1999) Pulitzer Prize winner, dealing with religion and morality online review
- Kolp, John Gilman. Gentlemen and Freeholders: Electoral Politics in Colonial Virginia (Johns Hopkins U.P. 1998)
- Menard, Russell R. "The Tobacco Industry in the Chesapeake Colonies, 1617–1730: An Interpretation." Research in Economic History 1980 5: 109–177. ISSN 0363-3268; the standard scholarly study
- Morgan, Edmund S. Virginians at Home: Family Life in the Eighteenth Century (1952). online edition
- Morgan, Edmund S. "Slavery and Freedom: The American Paradox." Journal of American History 1972 59(1): 5–29 in JSTOR
- Morgan, Edmund S. American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) online edition highly influential study
- Nelson, John. A Blessed Company: Parishes, Parsons, and Parishioners in Anglican Virginia, 1690–1776 (2001)
- Rasmussen, William M.S. and Robert S. Tilton. Old Virginia: The Pursuit of a Pastoral Ideal (2003)
- Roeber, A. G. Faithful Magistrates and Republican Lawyers: Creators of Virginia Legal Culture, 1680–1810 (1981)
- Rutman, Darrett B., and Anita H. Rutman. A Place in Time: Middlesex County, Virginia, 1650–1750 (1984), new social history
- Wertenbaker, Thomas J. The Shaping of Colonial Virginia, comprising Patrician and Plebeian in Virginia (1910) full text online; Virginia under the Stuarts (1914) full text online; and The Planters of Colonial Virginia (1922) full text online; well written but outdated
- Wright, Louis B. The First Gentlemen of Virginia: Intellectual Qualities of the Early Colonial Ruling Class (1964)
1776 to 1850
- Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America (2004)
- Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861 (1910) full text online
- Beeman, Richard R. The Old Dominion and the New Nation, 1788–1801 (1972)
- Dill, Alonzo Thomas. "Sectional Conflict in Colonial Virginia," Virginia Magazine of History and Biography 87 (1979): 300–315.
- Lebsock, Suzanne D. A Share of Honor: Virginia Women, 1600–1945 (1984)
- Link, William A. Roots of Secession: Slavery and Politics in Antebellum Virginia (2007) excerpt and text search
- Majewski, John D. A House Dividing: Economic Development in Pennsylvania and Virginia Before the Civil War (2006) excerpt and text search
- Risjord, Norman K. Chesapeake Politics, 1781–1800 (1978). in-depth coverage of Virginia, Maryland and North Carolina online edition
- Selby, John E. The Revolution in Virginia, 1775–1783 (1988)
- Shade, William G. Democratizing the Old Dominion: Virginia and the Second Party System 1824–1861 (1996)
- Tillson, Albert H., Jr. Gentry and Common Folk: Political Culture on a Virginia Frontier, 1740–1789 (1991)
- Varon, Elizabeth R. We Mean to Be Counted: White Women and Politics in Antebellum Virginia (1998)
- Virginia State Dept. of Education. The Road to Independence: Virginia 1763–1783 online edition; 80pp; with student projects
1850 to 1870
- Blair, William. Virginia's Private War: Feeding Body and Soul in the Confederacy, 1861–1865 (1998) online edition
- Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Eckenrode, Hamilton James. The political history of Virginia during the Reconstruction, (1904) online edition
- Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999)
- Lankford, Nelson. Richmond Burning: The Last Days of the Confederate Capital (2002)
- Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984)
- Lowe, Richard. Republicans and Reconstruction in Virginia, 1856–70 (1991)
- Maddex, Jr., Jack P. The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970).
- Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia before the Civil War (2000)
- Noe, Kenneth W. Southwest Virginia's Railroad: Modernization and the Sectional Crisis (1994)
- Robertson, James I. Civil War Virginia: Battleground for a Nation (1993) 197 pages; excerpt and text search
- Shanks, Henry T. The Secession Movement in Virginia, 1847–1861 (1934) online edition
- Sheehan-Dean, Aaron Charles. Why Confederates fought: family and nation in Civil War Virginia (2007) 291 pages excerpt and text search
- Simpson, Craig M. A Good Southerner: The Life of Henry A. Wise of Virginia (1985), wide-ranging political history
- Wallenstein, Peter, and Bertram Wyatt-Brown, eds. Virginia's Civil War (2008) excerpt and text search
- Wills, Brian Steel. The war hits home: the Civil War in southeastern Virginia (2001) 345 pages; excerpt and text search
- Brundage, W. Fitzhugh. Lynching in the New South: Georgia and Virginia, 1880–1930 (1993)
- Buni, Andrew. The Negro in Virginia Politics, 1902–1965 (1967)
- Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Ferrell, Henry C., Jr. Claude A. Swanson of Virginia: A Political Biography (1985) early 20th century
- Gilliam, George H. "Making Virginia Progressive: Courts and Parties, Railroads and Regulators, 1890–1910." Virginia Magazine of History and Biography 107 (Spring 1999): 189–222.
- Heinemann, Ronald L. Depression and the New Deal in Virginia: The Enduring Dominion (1983)
- Heinemann, Ronald L. Harry Byrd of Virginia (1996)
- Heinemann, Ronald L. "Virginia in the Twentieth Century: Recent Interpretations." Virginia Magazine of History and Biography 94 (April 1986): 131–60.
- Hunter, Robert F. "Virginia and the New Deal," in John Braeman et al. eds. The New Deal: Volume Two – the State and Local Levels (1975) pp. 103–36
- Johnson, Charles. "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR
- Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999)
- Key, V. O., Jr. Southern Politics in State and Nation (1949), important chapter on Virginia in 1940s
- Lassiter, Matthew D., and Andrew B. Lewis, eds. The Moderates’ Dilemma: Massive Resistance to School Desegregation in Virginia (1998)
- Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984)
- Link, William A. A Hard Country and a Lonely Place: Schooling, Society, and Reform in Rural Virginia, 1870–1920 (1986)
- Martin-Perdue, Nancy J., and Charles L. Perdue Jr., eds. Talk about Trouble: A New Deal Portrait of Virginians in the Great Depression (1996)
- Moger, Allen W. Virginia: Bourbonism to Byrd, 1870–1925 (1968)
- Muse, Benjamin. Virginia's Massive Resistance (1961)
- Pulley, Raymond H. Old Virginia Restored: An Interpretation of the Progressive Impulse, 1870–1930 (1968)
- Shiftlett, Crandall. Patronage and Poverty in the Tobacco South: Louisa County, Virginia, 1860–1900 (1982), new social history
- Smith, J. Douglas. Managing White Supremacy: Race, Politics, and Citizenship in Jim Crow Virginia (2002)
- Sweeney, James R. "Rum, Romanism, and Virginia Democrats: The Party Leaders and the Campaign of 1928" Virginia Magazine of History and Biography 90 (October 1982): 403–31.
- Wilkinson, J. Harvie, III. Harry Byrd and the Changing Face of Virginia Politics, 1945–1966 (1968)
- Wynes, Charles E. Race Relations in Virginia, 1870–1902 (1961)
Environment, geography, locales
- Adams, Stephen. The Best and Worst Country in the World: Perspectives on the Early Virginia Landscape (2002) excerpt and text search
- Gottmann, Jean. Virginia at mid-century (1955), by a leading geographer
- Gottmann, Jean. Virginia in Our Century (1969)
- Kirby, Jack Temple. "Virginia's Environmental History: A Prospectus," Virginia Magazine of History and Biography, 1991, Vol. 99 Issue 4, pp. 449–488
- Parramore, Thomas C., with Peter C. Stewart and Tommy L. Bogger. Norfolk: The First Four Centuries (1994)
- Terwilliger, Karen. Virginia's Endangered Species (2001), esp. ch 1
- Sawyer, Roy T. America's Wetland: An Environmental and Cultural History of Tidewater Virginia and North Carolina (University of Virginia Press; 2010) 248 pages; traces the human impact on the ecosystem of the Tidewater region.
- Jefferson, Thomas. Notes on the State of Virginia
- Duke, Maurice, and Daniel P. Jordan, eds. A Richmond Reader, 1733–1983 (1983)
- Eisenberg, Ralph. Virginia Votes, 1924–1968 (1971), all statistics
- Encyclopedia Virginia
- Virginia Historical Society short history of state, with teacher guide
- Virginia Memory, digital collections and online classroom of the Library of Virginia
- How Counties Got Started in Virginia
- Union or Secession: Virginians Decide
- Virginia and the Civil War
- Civil War timeline
- Boston Public Library, Map Center. Maps of Virginia, various dates.
Mass and weight are two commonly misused and misunderstood terms in mechanics and fluid mechanics.
The fundamental relation between the mass and the weight is defined by Newton's Second Law and can be expressed as
F = m a (1)
F = force (N)
m = mass (kg)
a = acceleration (m/s2)
Mass is a measure of the amount of material in an object, being directly related to the number and type of atoms present in the object. Mass does not change with a body's position, movement or alteration of its shape, unless material is added or removed.
The mass is a fundamental property of an object, a numerical measure of its inertia and a fundamental measure of the amount of matter in the object.
Weight is the gravitational force acting on a body's mass. Substituting the acceleration of gravity for a in Newton's Second Law, the weight can be expressed as
W = m g (2)
W = weight (N)
m = mass (kg)
g = acceleration of gravity (m/s2)
The handling of mass and weight depends on the system of units used. The most common systems of units are the International System (SI), the British Gravitational System (BG) and the English Engineering System (EE).
In the SI system the mass unit is the kg and since the weight is a force - the weight unit is the Newton (N). Equation (2) for a body with 1 kg mass can be expressed as:
w = (1 kg) (9.807 m/s2)
= 9.807 (N) (2b)
9.807 m/s2 = standard acceleration of gravity near the Earth's surface in the SI system
As a result, a body with a mass of 1 kg has a weight of approximately 9.8 N.
The British Gravitational System (Imperial System) of units is used by engineers in the English-speaking world, with the same relation to the foot-pound-second system as the meter-kilogram-force-second system has to the meter-kilogram-second (SI) system. For engineers who deal with forces rather than masses, it is convenient to use a system whose base units are length, time, and force, instead of length, time, and mass.
The three base units in the Imperial system are the foot, the second, and the pound-force.
In the BG system the mass unit is the slug, defined from Newton's Second Law (1). The slug is the mass that will accelerate at 1 ft/s2 when a force of 1 pound-force (lbf) acts upon it:
1 lbf = (1 slug)(1 ft/s2)
In other words, 1 lb (pound) force acting on 1 slug mass will give the mass an acceleration of 1 ft/s2.
The weight of the mass from equation (2) in BG units can be expressed as:
w (lbf) = m (slugs) g (ft/s2)
With a standard gravity - g = 32.17405 ft/s2 - a mass of 1 slug weighs 32.17405 lbf (pound-force).
In the English Engineering system of units the primary dimensions are force, mass, length, time and temperature. The units for force and mass are defined independently.
In the EE system 1 lb of force will give a mass of 1 lbm a standard acceleration of 32.17405 ft/s2.
Since the EE system operates with these independently defined units of force and mass, Newton's Second Law must be modified to
F = m a / gc (3)
gc = a proportionality constant
or transformed to weight
w = m g / gc (4)
The proportionality constant gc makes it possible to define suitable units for force and mass. We can transform (4) to
1 lbf = (1 lbm)(32.174 ft/s2) / gc
gc = (1 lbm)(32.174 ft/s2)/(1 lbf) = 32.174 lbm ft/(lbf s2)
Since 1 lbf gives a mass of 1 lbm an acceleration of 32.17405 ft/s2 and a mass of 1 slug an acceleration of 1 ft/s2, then
1 slug = 32.17405 lbm
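As an illustrative check of these relations (a sketch added here, not part of the original text), the following Python snippet computes weights in the BG and EE systems using the standard gravity and the proportionality constant gc defined above:

g = 32.17405           # standard acceleration of gravity, ft/s^2
gc = 32.17405          # proportionality constant, lbm ft / (lbf s^2)

def weight_bg(mass_slug):
    # BG system: w (lbf) = m (slug) * g (ft/s^2)
    return mass_slug * g

def weight_ee(mass_lbm):
    # EE system: w (lbf) = m (lbm) * g / gc
    return mass_lbm * g / gc

print(weight_bg(1.0))                   # 32.17405 -> 1 slug weighs about 32.17 lbf
print(weight_ee(1.0))                   # 1.0      -> 1 lbm weighs 1 lbf under standard gravity
print(weight_bg(1.0) / weight_ee(1.0))  # 32.17405 -> consistent with 1 slug = 32.17405 lbm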
A car's mass is 1,644 kg. The weight can be calculated:
w = (1,644 kg)(9.807 m/s2)
= 16122.7 N
= 16.1 kN
- there is a force (weight) of 16.1 kN between the car and the earth.
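The same calculation in SI units, written as a minimal Python sketch (added for illustration, not part of the original text):

g_si = 9.807               # standard acceleration of gravity, m/s^2
mass_kg = 1644.0           # mass of the car, kg
weight_n = mass_kg * g_si  # weight from equation (2): w = m g
print(weight_n)            # approximately 16122.7 N
print(weight_n / 1000)     # approximately 16.1 kN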
Instruments aboard NASA's and NOAA's spacecraft use their vantage point from space to collect global measurements of the ocean's surface temperature. Each day these instruments make thousands of measurements of broad swaths of the Earth - creating concurrent data sets of the entire planet. By developing global, detailed, and decades-long views of Sea Surface Temperature (SST), data obtained from NASA and NOAA satellites provide the basis for the prediction of climate change, ocean currents, and the potent El Niño-La Niña cycles.
El Niño is perhaps the best known example of the impact that changing sea surface temperature has on our climate. Every three to seven years, this warming of surface ocean waters in the eastern tropical Pacific brings winter droughts and deadly forest fires in Central America, Indonesia, Australia, and southeastern Africa, and lashing rainstorms in Ecuador and Peru. El Niño affects thousands of people worldwide, and billions of dollars in economic impact. El Niño's "sister," La Niña, occurs less frequently and has the opposite effect - the cooling of surface ocean waters.
But changing SST patterns have broader implications than just the El Niño and La Niña cycles. Changes in SST are the single most important indicator of climate change. Heat is one of the main drivers of global climate, and the ocean is a huge reservoir of heat. The top 6.5 feet of ocean has the potential to store the equivalent amount of heat contained in the atmosphere. The ocean has a high heat capacity, and as ocean currents move tremendous amounts of water over vast distances, heat is also carried or transferred over these distances. This release of heat can play a major role in climate from the regional/basin to global scale. It is for this reason that oceans are termed the 'memory' of the Earth's climate system. Tracking SST as a variable over long periods of time, as well as operationally, is critical for developing climate models and improved weather forecasts.
Global Sea Surface Temperature. This false-color image shows a one-month composite of MODIS sea surface temperature data for May 2001. Red and yellow around the equatorial region indicates warmer temperatures, green is an intermediate value, while blues and then purples toward the poles are progressively colder values. The image reveals cold water currents that move from Antarctica northward along South America's west coast. These cold, deep waters upwell along an equatorial swath around and to the west of the Galapagos Islands. Also noticeable are warm, wide currents of the Gulf Stream moving up the United States' east coast, carrying Caribbean warmth toward Newfoundland and across the Atlantic toward Western Europe. Additionally, there is a warm tongue of water extending from Africa's east coast to well south of the Cape of Good Hope.
The distribution of temperature at the sea surface tends to be zonal, that is, it is independent of longitude. Uneven heating of the Earth by the Sun causes the warmest water to be near the equator, while the coldest water is near the poles. The deviations from these zonal measurements are small. The anomalies of sea-surface temperature, the deviation from a long term average, are also small, less than 1.5°C (2.7°F), except in the equatorial Pacific where the deviations can reach 3°C (5.4°F). Large deviations in the Equatorial Pacific are due primarily to the El Niño-La Niña cycle.
Most weather and climate events are the result of sea and atmospheric coupling. Heat energy released from the ocean is the dominant driver of atmospheric circulation and weather patterns. SST influences the rate of energy transfer into the atmosphere, as evaporation increases rapidly with temperature. Knowing the temperature of the ocean surface provides tremendous insight into short and long term weather and climate events.
Taking the Ocean's Temperature
The most commonly used instrument to measure sea-surface temperature from space is the Advanced Very High Resolution Radiometer (AVHRR). Since 1999, the Moderate-resolution Imaging Spectroradiometer (MODIS) sensor has been collecting even more detailed measurements of surface temperature. More recently, the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) has been collecting SST that includes areas covered by clouds. A key attribute of the AVHRR data is the length of the time record. An AVHRR sensor has been carried on all polar-orbiting meteorological satellites operated by NOAA since Tiros-N was launched in 1978. High quality measurements of the temperature of the ocean are now available from 1981 to the present. This unique SST data set is now the longest satellite derived oceanographic record, providing a 25-year (and continuing) record of global SST changes. By contrast, the MODIS and AMSR-E records are much shorter.
Thermal Infrared Remote Sensing
AVHRR and MODIS instruments use radiometers to measure the amount of thermal infrared radiation given off by the surface of the ocean. Thermal infrared remote sensing is based on the fact that everything above absolute zero (-273°C/-459°F) emits radiation in the thermal infrared region of the electromagnetic spectrum. The amount of thermal infrared radiation given off by an object is related to its temperature (dying embers give off less radiation than a hot fire). Thus by measuring the amount of radiation given off by the ocean we can calculate its temperature. With instruments like radiometers, it is possible to get a picture of the thermal environment that we cannot experience with our normal human sensors. The ability to record precise variations in infrared radiation has tremendous application in extending our observation of many types of phenomena where minor temperature variations are significant in understanding our environment.
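As a rough illustration of how a measured radiance is turned into a temperature (a sketch added here, not taken from the original article), the snippet below inverts Planck's law to obtain a brightness temperature at a given wavelength; the radiance value is invented for the example.

import math

C1 = 1.191042e-16   # first radiation constant, 2*h*c^2, W m^2 / sr
C2 = 1.438777e-2    # second radiation constant, h*c/k, m K

def brightness_temperature(radiance, wavelength):
    # Invert Planck's law: T = C2 / (wavelength * ln(C1 / (wavelength^5 * radiance) + 1))
    return C2 / (wavelength * math.log(C1 / (wavelength**5 * radiance) + 1.0))

L_measured = 9.0e6                                # hypothetical radiance in an 11-micron channel, W / (m^2 sr m)
print(brightness_temperature(L_measured, 11e-6))  # about 296 K, a plausible sea surface value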
Moderate-resolution Imaging Spectroradiometer (MODIS)
MODIS is sensitive to five different wavelengths, or "channels," of radiation used for measuring SST. Both night and day, the sensor measures the thermal infrared energy escaping the atmosphere at 12 microns and then compares that measurement to how much energy is escaping at 11 microns, allowing scientists to determine how much the atmosphere modifies the signal so they can "correct" the data to more accurately derive SST. The MODIS sensor, because of the increased number of channels, tells us a great deal about the influence of the atmosphere on measurements of SST. Similar to AVHRR, MODIS also takes daily measurement of the global ocean.
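The atmospheric correction described above is commonly implemented as a "split-window" regression on the 11 and 12 micron brightness temperatures. The sketch below (added here for illustration) shows only the general form; the coefficients are placeholders of roughly the right magnitude, not the operational MODIS values.

def split_window_sst(t11, t12, a0=0.0, a1=1.0, a2=2.5):
    # Generic split-window form: SST = a0 + a1*T11 + a2*(T11 - T12)
    # The (T11 - T12) difference grows with atmospheric water vapor, so it serves
    # as the correction term; operational algorithms add further terms (for example
    # a satellite zenith angle dependence) and carefully tuned coefficients.
    return a0 + a1 * t11 + a2 * (t11 - t12)

print(split_window_sst(295.9, 294.6))   # about 299 K for these hypothetical brightness temperatures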
Advanced Microwave Scanning Radiometer for EOS (AMSR-E)
Because AVHRR and MODIS cannot observe the ocean when the atmosphere is cloudy, NASA developed a new sensor, AMSR-E, that is able to observe through the clouds. AMSR-E on the Aqua satellite is a passive microwave radiometer, modified from the Advanced Earth Observing Satellite-II (ADEOS-II). Microwaves are radio waves that are able to pass through clouds. Thus, the AMSR-E instrument can measure radiation from the ocean surface through most types of cloud cover, supplementing infrared based measurements of SST that are restricted to cloud-free areas. However, the resolution of AMSR-E is coarser than the thermal IR sensors. The addition of AMSR-E data will provide a significant improvement in our ability to monitor SST and temperature controlling phenomenon.
Sea Surface Height & Temperature
Sea surface height data can also provide clues to studying the temperature of the ocean. Warm water expands raising the sea surface height. Conversely, cold water contracts lowering the height of the sea surface. Thus, measurements of sea surface height can provide information about the heat content of the ocean. The height can tell us how much heat is stored in the ocean water column below its surface. Learn more about sea surface height.
Interpreting Sea Surface Temperature Measurements
Radiation observed by AVHRR and MODIS is modified by its passage through the atmosphere. The degree to which the signal is modified depends upon the chemistry of the overlying atmosphere. Clouds, haze, dust or smoke can interfere with a space-based remote sensor's ability to accurately measure SST, as can greenhouse gases, like water vapor. These are present in abundance in the tropics and strongly absorb infrared energy and re-radiate it back toward the surface. Scientists have created several algorithms to correct the impact of these variables creating more accurate measurements of SST.
Further, scientists analyze SST data to provide new products that have a wide variety of uses. SST data are also distributed and processed by several organizations. These data sets are then used operationally by sponsoring agency scientists and other organizations.
- The Goddard Earth Sciences Data and Information Services Center (GES DISC) at Caltech/JPL is the key distribution point for SST data and related data sets from NASA.
- The Goddard Distributed Active Archive Center (GDAAC) is the primary distribution center for MODIS data.
- The Global Hydrology & Climate Center provides browse images and some Level 2 AMSR-E data products
- The GES DISC is the mirror site for the level 3 data sets.
SST data is also combined with other data taken in-situ by ships and buoys. This data helps calibrate the satellite data to create a more accurate measurement of SST.
SST data is used by many different organizations for regional studies, anomaly studies, climate and meteorological studies, and to provide near real - time access to the data. SST data products are also widely used by the fishing industry to track the conditions where fish are most likely to be found.
Long term averages of sea surface temperature are used to calculate the normal sea surface temperature conditions for a specific time of year and location. Deviations from the long-term mean are called anomalies. The long-term means are also used for studying climate change. Other data is made available in time intervals of less than a day - in some instances within a few hours of collection. This type of data is mostly used for detecting specific features in the ocean, such as currents and eddies.
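As a simple illustration of how such anomalies are computed (an added sketch; the numbers are invented, not real observations), the snippet below subtracts a per-month climatology from a short monthly SST series at a single grid point:

import numpy as np

# Three years of hypothetical monthly SST values (kelvin) at one location
sst = np.array([290.1, 290.4, 291.0, 292.2, 293.5, 294.8, 295.6, 295.9, 295.0, 293.7, 292.1, 290.9,
                290.3, 290.6, 291.3, 292.5, 293.9, 295.1, 295.8, 296.2, 295.3, 293.9, 292.4, 291.1,
                290.0, 290.2, 290.9, 292.0, 293.4, 294.6, 295.4, 295.7, 294.8, 293.5, 291.9, 290.7])

monthly = sst.reshape(-1, 12)        # rows are years, columns are calendar months
climatology = monthly.mean(axis=0)   # long-term mean for each calendar month
anomalies = monthly - climatology    # deviations from the long-term mean
print(anomalies.round(2))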
Trends we observe
SST data is used to observe many regional phenomena around the world, including the Chesapeake Bay and the Gulf of Mexico, the Gulf Stream, Kuroshio, the Somali Current, the Brazil Current and the East Australian Current. These currents are associated with sharp changes in SST which can be detected using satellites. Coastal water studies are made off the Hawaiian and Alaskan coasts. Multiple studies are also conducted in the North Atlantic.
Sea surface temperatures in the equatorial Pacific affect precipitation (and therefore plant growth) over much of the North American continent. Warmer-than-normal water in the central and western equatorial Pacific creates higher precipitation in southern and central North America. Conversely, cold water temperatures in the Pacific lead to a decrease in precipitation over northern North America.
SST may also affect one of the world's key large-scale atmospheric circulations - the circulation that regulates the intensity and breaking of rainfall associated with the South Asian and Australian monsoons.
Projects are underway that combine data from multiple satellite systems to produce a robust set of sea surface data for assimilation into ocean forecasting models of the waters around Europe and also the entire Atlantic Ocean. The Global Ocean Data Assimilation Experiment, GODAE, is assimilating sea-surface temperature data, altimeter data, scatterometer data, and drifter data into coupled ocean/atmosphere numerical models to produce forecasts of ocean currents and temperatures up to 30 days in advance everywhere in the ocean.
Finally, projects are also being conducted to combine SST data from various sensors to create the highest quality SST. These projects will create a new generation of multi-sensor, high-resolution SST products. An example of such a project is the GODAE High Resolution Sea Surface Temperature Pilot Project (GHRSST-PP).
SST data are important to the development and testing of a new generation of computer models in which the interacting processes of the land, the atmosphere, and the oceans are coupled. The measurements are widely used in the creation of more accurate weather forecasts and increasingly it is seen as a key indicator of climate change. It is anticipated that projects like GHRSST will provide even higher quality data sets for such things as hurricane forecasting.
Projects like GHRSST lay the groundwork for future cooperation between NASA and NOAA, as well as internationally. Such cooperation will lead to major innovations in how data is distributed in near real-time, searched and stored. Plans include a joint NASA/NOAA effort to provide users with an interface for accessing both near real-time and historical data for climate studies. Future technologies should allow managers, decision makers, and modelers to search and access data in near real time for specified areas of interest. Additionally, the merging of SST data from different sensors will provide high resolution SST data suitable for coastal studies and management.
A likelihood function arises from a conditional probability distribution considered as a function of its second argument, holding the first fixed. For example, consider a model which gives the probability density function of observable random variable X as a function of a parameter θ. Then for a specific value x of X, the function L(θ | x) = P(X=x | θ) is a likelihood function of θ. Two likelihood functions are equivalent if one is a scalar multiple of the other; according to the likelihood principle, all information from the data relevant to inferences about the value of θ is found in the equivalence class. To illustrate, suppose that
- X is the number of successes in twelve independent Bernoulli trials with probability θ of success on each trial, and
- Y is the number of independent Bernoulli trials needed to get three successes, again with probability θ of success on each trial.
Then the observation that X = 3 induces the likelihood function
L(θ | X = 3) = C(12,3) θ^3 (1 − θ)^9 = 220 θ^3 (1 − θ)^9,
and the observation that Y = 12 induces the likelihood function
L(θ | Y = 12) = C(11,2) θ^3 (1 − θ)^9 = 55 θ^3 (1 − θ)^9.
These are equivalent because each is a scalar multiple of the other. The likelihood principle therefore says the inferences drawn about the value of θ should be the same in both cases.
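A short numerical illustration (added here, not part of the original article) makes the equivalence concrete: the two likelihood functions differ only by a constant factor, so in particular they are maximized at the same value of θ.

from math import comb

def lik_binomial(theta):
    # L(theta | X = 3): 3 successes in 12 Bernoulli trials
    return comb(12, 3) * theta**3 * (1 - theta)**9

def lik_negative_binomial(theta):
    # L(theta | Y = 12): 12 trials needed to reach 3 successes
    return comb(11, 2) * theta**3 * (1 - theta)**9

thetas = [i / 100 for i in range(1, 100)]
print({round(lik_binomial(t) / lik_negative_binomial(t), 6) for t in thetas})  # {4.0}: a constant multiple
print(max(thetas, key=lik_binomial))           # 0.25
print(max(thetas, key=lik_negative_binomial))  # 0.25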
The difference between observing X = 3 and observing Y = 12 is only in the design of the experiment: in one case, one has decided in advance to try twelve times; in the other, to keep trying until three successes are observed. The outcome is the same in both cases. Therefore the likelihood principle is sometimes stated by saying:
- The inference should depend only on the outcome of the experiment, and not on the design of the experiment.
The law of likelihood
A related concept is the law of likelihood, the notion that the extent to which the evidence supports one parameter value or hypothesis against another is equal to the ratio of their likelihoods. That is,
Λ(a, b | x) = L(a | x) / L(b | x) = P(x | a) / P(x | b)
is the degree to which the observation x supports parameter value or hypothesis a against b. If this ratio is 1, the evidence is indifferent, and if greater or less than 1, the evidence supports a against b or vice versa. The use of Bayes factors can extend this by taking account of the complexity of different hypotheses.
Combining the likelihood principle with the law of likelihood yields the consequence that the parameter value which maximizes the likelihood function is the value which is most strongly supported by the evidence. This is the basis for the widely-used method of maximum likelihood.
Historical remarks
The likelihood principle was first identified by that name in print in 1962 (Barnard et al., Birnbaum, and Savage et al.), but arguments for the same principle, unnamed, and the use of the principle in applications goes back to the works of R.A. Fisher in the 1920s. The law of likelihood was identified by that name by I. Hacking (1965). More recently the likelihood principle as a general principle of inference has been championed by A. W. F. Edwards. The likelihood principle has been applied to the philosophy of science by R. Royall.
Birnbaum proved that the likelihood principle follows from two more primitive and seemingly reasonable principles, the conditionality principle and the sufficiency principle. The conditionality principle says that if an experiment is chosen by a random process independent of the states of nature θ, then only the experiment actually performed is relevant to inferences about θ. The sufficiency principle says that if T(X) is a sufficient statistic for θ, and if in two experiments with data x1 and x2 we have T(x1) = T(x2), then the evidence about θ given by the two experiments is the same.
Arguments for and against the likelihood principle
The likelihood principle is not universally accepted. Some widely-used methods of conventional statistics, for example many significance tests, are not consistent with the likelihood principle. Let us briefly consider some of the arguments for and against the likelihood principle.
Experimental design arguments on the likelihood principle
Unrealized events do play a role in some common statistical methods. For example, the result of a significance test depends on the probability of a result as extreme or more extreme than the observation, and that probability may depend on the design of the experiment. Thus, to the extent that such methods are accepted, the likelihood principle is denied.
Some classical significance tests are not based on the likelihood. A commonly cited example is the optimal stopping problem. Suppose I tell you that I tossed a coin 12 times and in the process observed 3 heads. You might make some inference about the probability of heads and whether the coin was fair. Suppose now I tell you that I tossed the coin until I observed 3 heads, and that it took 12 tosses. Will you now make some different inference?
The likelihood function is the same in both cases: it is proportional to
θ^3 (1 − θ)^9.
According to the likelihood principle, the inference should be the same in either case. Apparently paradoxical results of this kind are considered by some as arguments against the likelihood principle; for others it exemplifies its value and resolves the paradox.
Suppose a number of scientists are assessing the probability of a certain outcome (which we shall call 'success') in experimental trials. Conventional wisdom suggests that if there is no bias towards success or failure then the success probability would be one half. Adam, a scientist, conducted 12 trials and obtained 3 successes and 9 failures. Then he dropped dead.
Bill, a colleague in the same lab, continued Adam's work and published Adam's results, along with a significance test. He tested the null hypothesis H0 that p, the success probability, is equal to a half, versus the alternative p < 0.5. The probability of a result as extreme as or more extreme than the one observed, that is, 3 or fewer successes out of 12 trials, if H0 is true, is
P(X ≤ 3) = [ C(12,0) + C(12,1) + C(12,2) + C(12,3) ] (1/2)^12
which is 299/4096 = 7.3%. Thus the null hypothesis is not rejected at the 5% significance level.
Charlotte, another scientist, reads Bill's paper and writes a letter, saying that it is possible that Adam kept trying until he obtained 3 successes, in which case the probability of needing to conduct 12 or more trials is given by
P(Y ≥ 12) = [ C(11,0) + C(11,1) + C(11,2) ] (1/2)^11
which is 134/4096 = 3.27%. Now the result is statistically significant at the 5% level.
To these scientists, whether a result is significant or not seems to depend on the original design of the experiment, not just the likelihood of the outcome.
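A quick computation (added here for illustration) reproduces the two p-values and shows how the same data lead to different significance conclusions under the two stopping rules:

from math import comb

# Bill's test: X ~ Binomial(12, 1/2), p-value = P(X <= 3)
p_fixed_n = sum(comb(12, k) for k in range(4)) / 2**12
print(p_fixed_n)    # 299/4096, about 0.073 -- not significant at the 5% level

# Charlotte's test: Y = number of tosses needed for 3 heads, p-value = P(Y >= 12),
# i.e. the probability of at most 2 heads in the first 11 tosses
p_stop_at_3 = sum(comb(11, k) for k in range(3)) / 2**11
print(p_stop_at_3)  # 67/2048 = 134/4096, about 0.0327 -- significant at the 5% level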
The voltmeter story
An argument in favor of the likelihood principle is given by Edwards in his book Likelihood. He cites the following story from J.W. Pratt, slightly condensed here. Note that the likelihood function depends only on what actually happened, and not on what could have happened.
- An engineer draws a random sample of electron tubes and measures their voltage. The measurements range from 75 to 99 volts. They are examined by a statistician who computes the sample mean and a confidence interval for the true mean. Later the statistician discovers that the voltmeter reads only as far as 100, so the population appears to be 'censored'. This necessitates a new analysis, if the statistician is orthodox. Luckily, the engineer has another meter reading to 1000 volts, which he would use if necessary. However, the next day the engineer informs the statistician that this second meter was not working at the time of the measuring. The statistician ascertains that the engineer would not have held up the measurements until the meter was fixed, and informs him that new measurements are required. The engineer is astonished. "Next you'll be asking about my oscilloscope".
One might proceed with this story, and consider the fact that in general the presence of a working high-range meter could have been different. For instance, high range voltmeters don't get broken at predictable moments in time, but rather at unpredictable moments. So it could have been broken, with some probability. The distribution of the measurements depends on this probability.
This story can be translated to Adam's stopping rule above, as follows. Adam stopped immediately after 3 successes, because his boss Bill had instructed him to do so. Adam did not die. After the publication of the statistical analysis by Bill, Adam discovers that he has missed a second instruction from Bill to conduct 12 trials instead, and that Bill's paper is based on this second instruction. Adam is very glad that he got his 3 successes after exactly 12 trials, and explains to his friend Charlotte that by coincidence he executed the second instruction. Later, he is astonished to hear about Charlotte's letter explaining that now the result is significant.
Bayesian arguments on the likelihood principle
From a Bayesian point of view, the likelihood principle is a direct consequence of Bayes' theorem. An observation A enters the formula
P(θ | A) = P(A | θ) P(θ) / P(A)
only through the likelihood function P(A | θ).
In general, observations come into play through the likelihood function, and only through the likelihood function; the information content of the data is entirely expressed by the likelihood function. Furthermore, the likelihood principle implies that any event that did not happen has no effect on an inference, since if an unrealized event does affect an inference then there is some information not contained in the likelihood function. Thus, Bayesians accept the likelihood principle and reject the use of frequentist significance tests. As one leading Bayesian, Harold Jeffreys, described the use of significance tests: "A hypothesis that may be true may be rejected because it has not predicted observable results that have not occurred."
Bayesian analysis is not always consistent with the likelihood principle. Jeffreys suggested in 1961 a non-informative prior distribution based on a density proportional to |I(θ)|^(1/2), where I(θ) is the Fisher information matrix; this, known as the Jeffreys prior, can fail the likelihood principle as it may depend on the design of the experiment. More dramatically, the use of the Box-Cox transformation may lead to a prior which is data dependent.
Optional stopping in clinical trials
The fact that Bayesian and frequentist arguments differ on the subject of optional stopping has a major impact on the way that clinical trial data can be analysed. In a frequentist setting there is a major difference between a design which is fixed and one which is sequential, i.e. consisting of a sequence of analyses. Bayesian statistics is inherently sequential and so there is no such distinction.
In a clinical trial it is strictly not valid to conduct an unplanned interim analysis of the data by frequentist methods, whereas this is permissible by Bayesian methods. Similarly, if funding is withdrawn part way through an experiment, and the analyst must work with incomplete data, this is a possible source of bias for classical methods but not for Bayesian methods, which do not depend on the intended design of the experiment. Furthermore, as mentioned above, frequentist analysis is open to unscrupulous manipulation if the experimenter is allowed to choose the stopping point, whereas Bayesian methods are immune to such manipulation.
- G.A. Barnard, G.M. Jenkins, and C.B. Winsten. "Likelihood Inference and Time Series", J. Royal Statistical Society, series A, 125:321-372, 1962.
- Allan Birnbaum. "On the foundations of statistical inference". J. Amer. Statist. Assoc. 57(298):269–326, 1962. (With discussion.)
- Anthony W.F. Edwards. Likelihood. 1st edition 1972 (Cambridge University Press), 2nd edition 1992 (Johns Hopkins University Press).
- Anthony W.F. Edwards. "The history of likelihood". Int. Statist. Rev. 42:9-15, 1974.
- Ronald A. Fisher. "On the Mathematical Foundations of Theoretical Statistics", Phil. Trans. Royal Soc., series A, 222:326, 1922.
- Ian Hacking. Logic of Statistical Inference. Cambridge University Press, 1965.
- Berger J.O., and Wolpert, R.L, (1988). "The Likelihood Principle". The Institute of Mathematical Statistics, Haywood, CA.
- Harold Jeffreys, The Theory of Probability. The Oxford University Press, 1961.
- Richard M. Royall. Statistical Evidence: A Likelihood Paradigm. London: Chapman & Hall, 1997.
- Leonard J. Savage et al. The Foundations of Statistical Inference. 1962.
- Anthony W.F. Edwards. "Likelihood". http://www.cimat.mx/reportes/enlinea/D-99-10.html
- Jeff Miller. Earliest Known Uses of Some of the Words of Mathematics (L)
- John Aldrich. Likelihood and Probability in R. A. Fisher’s Statistical Methods for Research Workers
What Caused the American Civil War?
Racism caused the American Civil War, plain and simple: a racism that transcended social culture, geographic section, and political orientation, and became entangled with the creation of the Constitution and the Union.
Columbus discovered the Americas in 1492, and within three years the beginning was made of the harsh oppression which would cause the native races to disappear and bring Africans in chains to America. One hundred and thirty years later, at the time the Jamestown colony took root in Virginia, slavery was the rule in the Americas. Initially, to people its American colonies, the British government sent indentured servants to the New World, then criminals, and finally Africans as slaves. By the middle of the Eighteenth Century, the slave trade had developed into a huge business, profitable to both the indigenous entrepreneurs along the West African coast and the owners of New England ships. Slaves were assembled in Africa through purchase, barter, raiding, kidnapping and warfare, brought to the coast by Africans and sold to African brokers who held them in barracoons until ships arrived to carry them by the Middle Passage to America. The total volume of this trade is unknown, but during its heyday at least seven million enslaved souls reached American shores.
Slave Percentage of Total Population by State, 1790
(See, Clayton E. Cramer, Black Demographic Data, A Sourcebook (1997))
The Fledgling United States, 1787
The map illustrates the military situation as the founders knew it, in 1787, when at Philadelphia they drafted the Constitution. At that time the general government of the “United States” was known as the Continental Congress, a body made up of representatives of the several “States” which could pass no substantial laws governing the whole without unanimous consent. With their “country” surrounded as it was by the Great Powers of Europe, the founders at the time the constitution was written had to be thinking there was going to be heavy military confrontations between their Union and the Great Powers for possession of the resources of the continent. Therefore, the paramount thought in all their minds had to be the concept of unity—the principle of all for one and one for all. This reality explains how two distinctly different societies became so locked together politically that no key but war could separate them.
History records the plain fact that, at the time the Constitution was drafted, the attitude of the people living in the states north of the Mason-Dixon line was steadily coalescing in support of abolition. In 1777, Vermont prohibited slavery through its constitution. In 1783, the supreme courts of Massachusetts and New Hampshire declared slavery violative of their state constitutions. In 1784, the legislatures of Rhode Island and Connecticut enacted gradual emancipation laws.
The Continental Congress was in session in New York at the same time the Constitutional Convention was in session in Philadelphia. Key negotiations occurred between the two bodies which resulted in the formation of the new government.
On the South’s side there could be no common government unless its slave population was counted in the calculation of the number of representatives to be assigned its congressional districts. On the North’s side, there could be no common government unless Free states would always exceed Slave states and thus ultimately control the balance of power.
Both sides understood at the beginning of the new compact that when all the existing territory of the Union was turned into states, there would be more Free states than Slave; but not enough Free states to make up a super majority—the number needed to amend the constitution. As long as that number did not reach three fourths the constitution, permitting slavery plainly by its terms in any state that embraced it, could never be amended. Thus, the constitutional provision of Article I, in conjunction with Article IV, Section 2—“No person held to labor in one State, under the laws thereof, escaping into another shall. . . be discharged from such labor”—created the uneasy alliance that preserved the institution of slavery in the South for another eighty years.
The Continental Congress was handed the proposed constitution that the Philadelphia Convention had drafted: instead of voting it up or down, under the authority of the Articles of Confederation, the Congress chose to send it to the legislatures of the several states as Madison proposed; so that the legislatures might form conventions of the people to vote it up or down. Thus, in political theory, the sovereign power of the people would trump the paper barrier presented by the “perpetuality” of the Articles of Confederation.
Articles of Confederation and Perpetual Union between the States
(Ratified by Unanimous Consent July 9, 1778)
“Whereas the Delegates of the United States of America in Congress assembled agree to certain articles of confederation and perpetual Union between the states to wit:
Article I. The style of this confederacy shall be “The United States of America.”
Article II. Each State retains its sovereignty, freedom and independence. . .
Article III. The said states enter into a firm league of friendship with each other, for their common defense, the security of their liberties, and. . .bind themselves to assist each other against all force made upon them. . .
Article IV. . . . the free inhabitants of each of these states shall be entitled to all privileges and immunities of free citizens in the several states. . .
Article V. Each state shall maintain its own delegates to the Congress. . . In determining questions in the United States, in Congress assembled, each state shall have one vote.
Article XIII. The Articles of Confederation shall be inviolably observed by every state, and the Union shall be perpetual; nor shall any alteration at any time hereafter be made in any of them; unless such alteration be agreed to in a congress of the united states, and be afterwards confirmed by the legislatures of every state. . . and we do further solemnly plight and engage the faith of our respective constituents, that they shall abide by the determination of the united states in congress assembled, on all questions. . .and that the articles shall be inviolably observed by the states and that the union shall be perpetual.”
The Proposed Constitution
“Article VII. The ratification of the conventions of nine states, shall be sufficient for the establishment of this constitution between the states so ratifying the same.”
The government of the United States to be spontaneously reconstituted upon the vote of nine states: so much for the “inviolability” of words. Here is the explicit semantic seed of civil war and the implicit manifestation of the ultimate axiom of political science. Despite the solemn pledge, four times repeated, that the Union was to be “perpetual” as framed by the Articles, which required unanimous consent of the states to be changed, the states of the perpetual Union, not consenting, would suddenly be out in the cold.
It is not surprising, then, that soon after the Constitution became the supreme law of the land, there emerged an irrepressible struggle between the two sections for political supremacy: one side pressing for the restriction of slavery, the other side pressing for its expansion. As the threat of war with the Great Powers faded, the North shrugged off its constitutional commitments about slavery and began sticking knives into the South. The South, too entwined with an alien population of Africans to get rid of it, was left with no rational choice but to seize upon the example set by the founders and declare, by the sovereign power of its people, independence from the North.
When the Constitution became operative, in 1789, the United States was composed of six slave states: Virginia, Delaware, Maryland, North Carolina, South Carolina and Georgia; and seven essentially Free states—Massachusetts Bay, New Hampshire, Rhode Island and Providence Plantation, Connecticut, New York, New Jersey and Pennsylvania. Between 1789 and 1819, operating on the basis of equal division, the Congress admitted into the Union five Free states—Vermont, Ohio, Indiana, Illinois and Maine—and five Slave states: Kentucky, Tennessee, Louisiana, Mississippi and Alabama.
In 1804, the United States through a treaty with France received possession of the territory of the Spanish Empire, extending from the charter limits of Virginia, the Carolinas, and Georgia, and ending at the line of the Sabine River in Arkansas. In 1819, under a treaty with Spain, the U.S. acquired the territory of Florida.
The acquisition of this new territory put considerable political stress on the principle of division that was inherent in the compact shaped by the Constitution. In 1818, the free inhabitants of that part of the Louisiana Territory known as Missouri established a provisional government and petitioned Congress for admission into the Union as a state.
The Missouri Compromise
In the House of Representatives, Tallmadge of New York moved that Missouri be admitted upon condition that all children of slaves born after the date of admission be deemed free when they reached the age of twenty-five, and that the introduction of slaves into the state after its admission be prohibited. Tallmadge’s amendment passed the House, but was stricken by the Senate, sending the bill back to the House. The House refused to pass the bill without the amendment.
When the debate continued into the session of 1819, Henry Clay, then a member of the House, urged the admission of Missouri without the amendment, on the ground that, under Article 4, Section 4 of the Constitution, which provides that “The United States shall guarantee to every State a Republican form of government,” Missouri was entitled to decide for itself whether its laws should recognize a right of property in persons. On this basis, the House again passed the admissions bill and sent it to the Senate.
In the Senate, the argument arose that, under its power to “make all needful rules and regulations for the Territory of the United States,” (Article 4. Section 3) Congress had authority to prohibit slavery, and the prohibition should be imposed for all territory above Missouri’s southern border—the so-called 36-30 line.
In June, 1820, as the debate over admission continued in Congress, Missouri ratified a constitution that contained a provision excluding free Negroes from residence. A majority of congressmen then voted against admission, on the ground that free Negroes were citizens of the states in which they resided and, hence, citizens of the United States, entitled to all the privileges and immunities of same, which included the right to travel anywhere in the United States.
The outcome of the debate in the Senate was the passage of a resolution accepting Missouri into the Union, under the constitution prohibiting the residence of free Negroes, but with the condition that slavery would henceforth be prohibited in the remaining territory above the 36-30 line. After more furious debate in the House, the bill of admission passed the Congress, with the proviso that Missouri promise not to enforce its “no free Negroes” provision. Missouri agreed to this and thus became a state.
Under the 36-30 rule, between 1820 and 1837, the Free states of Maine and Michigan, and the Slave states of Missouri, Arkansas, and Florida, were admitted into the Union.
In 1845, the Republic of Texas was admitted into the Union. There the matter in dispute rested until the war with Mexico, in 1846-47, added the Spanish Crown’s old Southwestern lands west of the Sabine River to the Territory of the United States. After this war, two Free states were admitted: Iowa and Wisconsin. The Free and Slave states were now evenly balanced at fifteen a piece.
In August 1848, a bill for organizing the Oregon territory into a state was introduced in the House of Representatives. Now began the political struggle in earnest, which led directly to the collapse of the Whig party and the emergence of the Republican Party, the election of Abraham Lincoln and the descent of the people of the United States into civil war.
Consistent with the principle of the 36-30 rule, the Oregon Admission Bill was passed by the House with a general slavery restriction in it and sent to the Senate. In the Senate, Illinois Senator Stephen Douglas moved to strike the restriction and insert in its place the provision that the 36-30 line be extended to the Pacific Ocean. The Senate adopted the amendment and the bill returned to the House. Quickly, a majority of representatives voted to reject the bill, for it was plain to see that, if the 36-30 was so extended, the territories of Southern California, Nevada, Utah, Arizona, and New Mexico, forcibly taken from Mexico in 1847, would be open to the introduction of slavery.
With the weight of congressional representation by now firmly grounded in the general population of the Free states, the political fact was plain that the votes of the Free states controlled the balance of power in Congress and they would use that power to prevent the admission of new slave states. Even so, in the Senate, the votes showed that some senators were more interested in the economic profits flowing from the admission of states than in preventing the introduction of slavery.
In the Senate, at the beginning of the Oregon debate, it appeared that sixteen states were in favor of extending the 36-30 line. Two of these states were Pennsylvania and Indiana. Nine states, all Northern, were against it, and three states—New York, Michigan, and Illinois, were divided. On the final vote, the vote was 14 Free states to remove the Douglas amendment and 13 Slave states to keep it. Missouri’s vote was divided, Senator Thomas H. Benton voting with the Free states. The senators from Iowa and Florida did not vote. In the House of Representatives seventy-eight of the eighty-eight votes for the amendment were from Slave states and four from Free states. 121 votes were cast against it: only one of these votes was cast by a representative of a Slave state.
When the Congress convened in 1849, there was great excitement throughout the land. The congressional votes over the Oregon Bill had shown that the Free states were no longer willing to honor the principle of equal division which had originally underpinned the consensus of the Philadelphia Convention. As a consequence of this changing attitude, the Whig Party would disintegrate, the Republican Party would be born, and the Democratic Party would split into conservative and radical factions, with the radicals eventually coalescing with the new Republicans.
In the summer of 1849, President Taylor manipulated events in California which resulted in a setting up of a convention, the framing of a constitution, and a petition arriving at Congress seeking admission as a state.
In January 1850, the Democrats controlled the Senate but the House was deadlocked: 111 Democrats, 105 Whigs, and 13 Freesoilers.
Henry Clay now appeared in the Senate as senator from Kentucky. When he took his seat in the tiny Senate chamber, John C. Calhoun and Daniel Webster —both old men now—were still there. Among the younger men there was Stephen Douglas, now the recognized leader of the Democratic Party, Jefferson Davis of Mississippi, Salmon Chase of Ohio, the founder of the Republican Party, William Seward of New York. And Fillmore, as Vice President, occupied the chair.
When the 1850 session opened, Thomas H. Benton of Missouri introduced a bill to reduce the size of Texas. Other senators introduced bills to spilt Texas into more than one state. Still others proposed territorial governments for California, New Mexico, and Utah.
Now began an intensity of rhetoric that rose and rose in shrill noise and anger until the collapse of the Union in 1860. It began with Henry Clay gaining the Senate floor and, holding it for two days, arguing for a series of resolutions. Clay proposed that the matter of Texas be postponed, that California be admitted, that the territorial governments for Utah and New Mexico be organized without the slavery restriction, and that the domestic slave trade existing in the District of Columbia be abolished.
At this time, Douglas was chairman of the Committee on Territories in the Senate and McClernand was chairman of the committee in the House. Alexander Stephens and Robert Toombs of Georgia controlled the Southern Whigs in the House and they persuaded Douglas to compromise between the two sides: in exchange for the admission of California as a Free state, additional states to be formed from the remaining territory could determine for themselves whether to recognize or reject slavery.
No doubt motivated by his political ambitions, Douglas agreed to Stephens's plan and both Douglas and McClernand introduced bills in their respective chambers to that effect. At the same time, President Taylor sent California's petition for admission to the Congress for ratification.
Compromise of 1850
At the time these issues came to a head, in March of 1850, the senators were at their seats, with the galleries and privileged seats and places on the floor filled with ladies, officers of the government and members of the House and other visitors. Everyone present knew that when California came in the Union a Free state, the principle of equal division of territory between the Free and Slave states would be lost forever, and the balance of power in favor of the Free states, as it had in the House, would shift in the Senate.
In the course of the session, Seward of New York and Davis of Mississippi, friends outside the Senate, stood behind their wooden desks, gesticulating and hurling invectives at each other. Davis proclaimed that the Slave states would never take less than the 36-30 line extended to the Pacific with the right to hold slaves in California below the line.
Benton of Missouri cut in before Seward could respond, to say no earthly power existed which could compel the majority to vote for slavery in California. In the flaring of temper, Senator Henry Foote of Mississippi was seen to draw a pistol from his coat and point it at Benton, when, suddenly, the appearance of the gaunt form of John C. Calhoun hushed the clamoring throng.
Calhoun leaned heavily on his cane as he slowly swayed down the center aisle of the chamber. The contending senators stepped aside into the rows of desks to make way for him to pass. Calhoun's face was deeply tanned, but his cheeks were sunken and his body seemed swallowed in the great cloak he wore.
Clay, Webster, Davis, Douglas and others crowded around him, escorting him to his place among the desk rows. When he reached his old seat, Calhoun gathered the folds of his long cloak in his hands and feebly sat down in his chair. There was a general scurrying among the people in the chamber as they found their places and Vice President Fillmore recognized the senior senator from South Carolina.
Calhoun rose slowly to his full height to say in anticlimax that Mason from Virginia would read his speech, and he sat back down.
In a matter of days Calhoun would be dead. Calhoun's speech was cold and blunt. He had no illusions about the nature of the Union. He knew that the incredible acquisition by the United States of territory, which stretched from Oregon to the Gulf of Mexico, would cause the unraveling of the ropes that held the country together.
How to preserve the Union? Not by Clay's plan, Calhoun contended, for it ignored the root of the issue: the Union could not be preserved by admitting California as a Free state. It could only be done by the North conceding to the South an equal right to the acquired territory, to enforce the fugitive slave provision in the Constitution, stop the antislavery agitation in the halls of Congress, in the pulpits and the press, and amend the Constitution to expressly recognize the right of property in man.
Interrupting Mason’s reading of his speech, Calhoun raised himself from his seat and asked his supporters to show their hands. Hands tentatively appeared one by one above the heads of some of the spectators in the galleries and the senators on the floor. As Calhoun scanned the faces of his fellow senators, Mason continued with the speech, saying that, if the North would not do these things, the States should part in peace. And, if the Free states were unwilling to let the South go in peace, "Tell us so,” Calhoun said, “and we shall know what to do when you reduce the question to submission or resistance."
At this statement, the chamber became quiet as a church. Daniel Webster leaned forward in his chair, staring gloomily into space; Thomas Benton on the back bench sat rigid like a slab of granite; Henry Clay sat with his hands shielding his face. In the minds of each of the politicians came a quick black image of cities in smoking ruins. And everywhere in the little chamber was felt the veiled touch of dreadful black ghosts wandering.
On March 7th, 1850, Daniel Webster took the Senate floor and responded to Calhoun's speech, his piercing black eyes flashing. Webster was dressed in tight vanilla breeches, with a blue cloth coat cut squarely at the waist, and adorned with brass buttons, his neck encased in a high soft collar surrounded by black stock.
Webster flatly rejected the idea of separation of the States as a physical impossibility. It is impossible, he said, for the simple reason that the Mississippi cannot be cut in two, the North to control its headwaters and the South its mouth. How could the North's commerce flow uninterrupted from the Ohio and Mississippi valleys to the Caribbean? What would become of the border states as they are pulled north and south? What would become of commerce between the West and the East?
Then the Senator from Massachusetts suggested to the Senate the one politically honest solution which might have redeemed the tyranny of the people of the Free states, in bottling up the African Negroes in the old states of the South.
Return to Virginia, Webster proposed, and through her to the whole South, the two hundred millions of dollars the National government obtained from the sale of the old Northwest Territory she ceded to the United States—in exchange for the abolition of slavery in the South.
Here was a solution to the problem of maintaining the South’s economic integrity—a solution which recognized that the existence of slavery was a national, not sectional, responsibility; a solution which shared the burden the abolition of slavery entailed. But to adopt it, there must have been included the recognition that the Africans were now “citizens of the United States,” with all the privileges and immunities that term entails—the right to travel, the right to litigate in the courts, and the right to vote. This the Northern senators were not then prepared to allow. It would mean living with the Africans on a basis of equality.
Once freed, where were the Africans to go? How were they to earn their living? What was to be their new place in society? Where? And what was to be the conditions of the society in which they might find their place?
The then existing social caste of the African was founded in a deep-rooted prejudice in Northern public opinion as well as the South. Before the Revolution, it was not southern planters who brought the Africans in chains to America's shore. It was New England vessels, owned by New England businessmen, manned by New England citizens, which traversed the Atlantic Ocean a thousand times to bring black cargo wailing into the ports of Norfolk, Charleston and Savannah. In 1850, the laws of many of the Free states did not recognize free Africans as citizens. They could own certain property and they were required to pay taxes but they could neither vote nor serve on juries, and their children were forced to attend segregated public schools.
Just the year before, for example, in 1849, a little five year old colored girl, Sara Roberts, had sued the Boston School District, seeking the right to attend the school closest to her home, instead of the colored school way across town. Though colored children, the Supreme Court ruled, have a right to public education, the right was limited to a separate education. (See Roberts v. Boston (1849) 59 Mass. 198) New England had no slaves, it is true, but still a majority of its citizens didn’t want to live with Negroes.
Thus, even if the Government of the United States could have found the means somehow, to compensate the slaveowners for the taking of their property—Alexander Stephens thought compensation was worth two billion—and though the former slaves might live peacefully with their former owners, it could not be done on the basis of equality under law, and certainly not on the basis of citizenship. Emancipation would bring the Africans the freedom to perform work for some form of wages, but for a long time to come, in the eyes of most whites in the North as well as South, they would be a degraded and despised people not fit to socialize with.
Distribution of Slave Population in 1860
The Death of John C. Calhoun
John C. Calhoun died on March 31, 1850. The funeral ceremonies were conducted in the Senate chamber. President Taylor, Vice-president Fillmore, and Cobb, the Speaker of the House, attended with the members of the Supreme Court. The diplomatic corps was also present, standing with the other dignitaries in the well in front of the screaming eagle perched above the Senate President's chair. Daniel Webster and Henry Clay walked at the head of the simple metal casket as the pallbearers brought it down the center aisle past the rest of the senators standing by their desks.
The senators and dignitaries closed in around the pallbearers as they set Calhoun's casket down. In his eulogy, Webster said there was nothing mean or low about the man who had spent his life in the service of the National government, first as senator, then secretary of state, then vice president and finally as senator again. In fulfilling his public duties, Webster said, Calhoun was perfectly patriotic and honest.
When the ceremony ended, the casket of South Carolina's greatest son was transported by caisson through the streets of Washington to the Navy landing and taken by vessel down the Chesapeake, past the Capes into the ocean and then to Charleston harbor where it was brought ashore and laid to rest in the quiet little churchyard of St. Phillips Church. Today, a hard-faced statue of Calhoun stands in a small, bare park in Charleston through which African Americans daily stroll.
Thomas Hart Benton was the next oldest member of the Senate behind Webster. He had spent thirty years in the Senate, voting always against measures which favored the slave interest. To convey his disdain for Calhoun's political views, Benton had turned his back as Webster spoke. Benton thought Calhoun's ideas were treason.
Benton was wrong about Calhoun. In Calhoun's view, allegiance to the sovereign meant faithful service to one's native state, the minority social group of which each American citizen was then a constituent member, and not faithful service to the Federal government.
The constitutional function of the federal government, Calhoun thought, was to administer the external affairs of the aggregate of the group. His view was consistent with the view of the Old states, whose political leaders designed the original Union. The delegates to the Constitutional Convention which framed the constitution, in 1787, were elected by the state legislatures. But the instrument, when it came from their hands, was nothing but a mere proposal. It carried no obligation. The people of each state acted upon it by assembling through their delegates in separate conventions held in each state. Thus, the government of the United States ultimately derives its whole authority from these state conventions. Sovereignty, whether the Federal Government likes it or not, resides in the people.
By accepting the stipulation that the assent of the people of merely nine states was sufficient to make the constitution operative, the delegates to the Constitutional Convention and the delegates to the United States Congress expressly adopted the political principle that the people of the states, in a combination which amounted to less than the whole people of the United States, were naturally free to leave the "perpetual union" of the United States and among themselves, "form a more perfect union."
The only constraint on the power of the people of the seceding states to disengage from the perpetual union defined by the Articles of Confederation was the power of the people of the States remaining loyal to the original Union to resist disengagement. The nation styled the United States of America, therefore, was certainly not one Nation indivisible, with liberty and justice for all: it was a combination of divergent political societies, motivated by self-interest to unite together against the world.
When the people of the Old states first formed a Union between themselves under the Articles of Confederation, Virginia held title from the English crown to the territory north and west of the Ohio River extending to the Mississippi valley and the Great Lakes. Virginia could have remained aloof from the original Union and adopted the policy of concentrating a population sympathetic to its culture in the area of what is now Ohio and Michigan. Such a policy would have blocked New England from expanding the influence of its culture westward.
In such circumstance, if New England did not attempt by force of arms to wrest the Northwest Territory away from Virginia, Virginia and its allies might eventually have gained possession of all the territory between the Mississippi and the Pacific. Just look at the map!
Virginia certainly possessed the men, materiel and the allies necessary to enforce a policy of unilateral expansion into the western territories. Instead, Virginia not only joined the original Union but assented to the adoption of a more perfect union transferring in the process title to its Northwest Territory to the United States—with the stipulation that slavery be prohibited there. Truly Virginia, the mother of states, stands at the head of the first flight.
Virginia's voluntary transfer to the United States of its title to the Northwest Territory radically changed the strategic situation for New England. Instead of being bottled up on the northeast seaboard of the continent, the people of New England could peaceably migrate west and north of the river Ohio and take their culture with them. It can hardly be imagined, under such circumstance, that New England could have reasonably believed that Virginia and her allies would not likewise expect to migrate with their culture west and south of the river.
The principle of division of the Territory of the United States between two fundamentally divergent forms of Republican government, therefore, must have been understood by the whole people of the United States to be the bedrock upon which the political stability of the Union depended. If the representatives in Congress of a majority of the people of the United States were to discard it, without reference to the powers granted them by the Constitution, they could expect the people of the affected States to judge for themselves whether the usurpation justified their secession.
After Calhoun's death, the Congress returned to the debate regarding the admission of California in the Union as a Free State. As a consequence of the debate that had been waged between Webster, Clay, Benton and Davis, in the early months of the 1850 Senate session, the bills and amendments the senators had suggested were sent to a joint committee on the territories.
The Southern Whigs, led by Alexander Stephens and Robert Toombs, who were in the House at that time, wanted the Congress to agree that, in organizing all other territorial governments formed from the newly acquired Spanish territories, the settlers should be left alone to introduce slaves or not, and to frame their constitution as they might please. Stephen Douglas, as chairman of the committee on territories in the Senate, agreed with the Southern Whigs' plan and introduced a bill in the Senate in March 1850. Then Henry Clay was made chairman of a committee to review the series of resolutions he had offered on the Senate floor.
On May 8, 1850, Clay reported an "omnibus" bill which provided for restrictions on the introduction of slavery into the New Mexico territory. Jefferson Davis countered with an amendment which would allow slavery in Utah, and Douglas moved to strike previous amendments to his bill. In the House, the majority rejected several amendments which would have allowed slavery to exist in the western territories.
In the Senate, in June 1850, Webster spoke in favor of Douglas's latest proposed amendment, which would leave the territorial governments free to decide the slavery issue for themselves. This meant that all territorial governments formed after the admission of California in the Union would not be subject to a slavery restriction.
At this point in the debate, President Taylor died, Fillmore became president and Webster left the Senate to join the Cabinet. After these events, in August 1850, with the admission of California and the organization of the territories of Utah and New Mexico, the Congress adopted the policy of leaving the issue of slavery to the territorial legislature to decide. At this, a country lawyer in Illinois, Abraham Lincoln, perked up.
In December 1852, a bill was introduced in the House to organize Nebraska territory. This territory was part of the territory obtained by France from Spain and ceded to the United States, in 1804. The bill passed and went to the Senate, in March 1853, but was voted down. The bill as it passed the House provided for the organization of a territory bounded by the 45th parallel on the north, Missouri and Iowa on the east, in the south by the 36th parallel and on the west by the Rocky Mountains.
The issue of organizing the Nebraska territory had come up in the Senate before 1853, but the Slave states rejected the organization bills because they did not want to open the territory to settlement under the restriction imposed by the Missouri Compromise. In addition, by keeping settlers out of Nebraska, the proposed transcontinental railroad could not be built from either Chicago or St. Louis, leaving open the possibility that the railroad would pass through Texas.
Later, in December 1853, Dodge of Iowa introduced an organization bill for Nebraska. This bill was referred to the Committee on the Territories which was chaired by Stephen A. Douglas. In January 1854, Douglas reported favorably on Dodge's bill, but an amendment was attached to the bill which declared that, in accordance with the principles adopted in 1850, all questions relating to slavery should be left to the decision of the people who occupied the territory.
When southern senators indicated that they would introduce an amendment expressly repealing the Missouri Compromise, Douglas withdrew the Committee's report and presented it again with two amendments: one provided for two territories to be named Nebraska and Kansas and the other asserted that the Missouri Compromise had been superseded by the Compromise of 1850.
On March 3, 1854, the Senate passed Dodge's bill as reported by Douglas by a vote of 37 to 14. Slave state senators voted 23 in favor. Free state senators voted 14 to 12 in favor. The vote made plain that the Free state senators cared more about opening the Indian territory for construction of a railroad to the Pacific than they did about restricting slavery.
The House, with the general population of the Nation having shifted in favor of the restriction of slavery, experienced a bitter fight over the issue of the express repeal of the Missouri Compromise. On May 22, 1854, the Dodge bill passed the House by a vote of 113 to 100. As soon as the bill became law, the people of the border states began agitating for the opening of the Indian territory south of Kansas and west of Arkansas in order to open trade routes to Texas, New Mexico and California.
Meanwhile, Charles Sumner, freshman senator from Massachusetts, joined by Salmon Chase of Ohio, published a paper, which appealed to disaffected Whigs and Democrats to oppose the "monstrous plot" of the slave power to spread slavery further into the territories.
When Douglas introduced the revised bill for debate in the Senate, he had charged that Sumner and Chase were confederates in a conspiracy to force the abolition of slavery. The two senators, Douglas had bellowed, were the agents of "Niggerism in the Congress of the United States." Interrupting Douglas, Sumner snapped back that the policy behind the effort to repeal the Missouri Compromise, which the amended bill expressly codified, was a "soulless, eyeless monster—horrid, unshapen and vast."
For a month, Douglas, Butler, Mason and Sumner and Chase wrangled over the issue. Douglas saw the opponents to the bill as strutting down the path of abolition, "in Indian file, each treading close upon the heels of the other" avoiding ground "which did not bear the foot-print of the Abolition champion." Deep into the debate, Sumner finally gained the floor and declared that the slave power was reneging on a solemn covenant of peace after the free power had performed its side of the bargain; that it was destroying, with Douglas's revised bill, a "Landmark of Freedom." Immediately when Sumner finished his speech and sat down, Douglas took the floor and challenged Sumner's assertion that the Missouri Compromise was sacred. If one congressional act touching slavery was to be considered sacred, why not another like the Fugitive Slave Act which increasingly the Free States were repudiating. When the vituperative debate between the two antagonists finally ended, in May 1854, the bill easily passed the Senate. In the House, the debate lasted two weeks, the bill passing by a 13 vote majority. The repeal of the Missouri Compromise was history. The power of patronage proved greater than the power of principle.
Immediately after the passage of the Kansas-Nebraska organization bill, a petition signed by 3,000 Massachusetts citizens asking for the repeal of the 1850 Fugitive Slave Law was received in the Senate. Since 1850, every Free State had experienced great excitement over a "fugitive slave case." In Racine, Wisconsin, for instance, in March 1854, an African named Joshua Glover was arrested on a warrant issued by a United States District Court judge under the Fugitive Slave Law. Glover was accused of being a runaway slave from Missouri. Two United States marshals, with four other men, broke into Glover's house, arrested Glover and transported him to Milwaukee where he was placed in jail. The next morning, news of Glover's arrest by the marshals spread across Wisconsin. Soon a mob gathered in front of the jail. As the crowd in the courthouse square increased to five thousand, speakers denounced Glover's arrest and demanded the repeal of the slave catching law. Soon the temper of the mob became volatile and men gathered in a knot in front of the jailhouse door and battered it down, freeing Glover, who the crowd then lifted bodily over their heads and carried away through the streets, shouting, "No slave hunters in Wisconsin." Glover escaped across Lake Michigan to Canada in a schooner.

In Boston, the very day that Charles Sumner rose in the Senate to speak in support of the Massachusetts petition to repeal the Fugitive Slave law, Faneuil Hall was filled with citizens protesting the arrest by United States marshals of an African named Anthony Burns. Speakers soon incited the crowd to action and the citizens streamed out of Faneuil Hall and through the streets and attempted to storm the courthouse where Burns was being held. In the melee that followed, one of the police officers guarding Burns was killed. President Pierce immediately ordered Federal troops to Boston and they took Burns into their custody and returned him to his master in Virginia.
In his speech, in support of the Massachusetts petition, Sumner told the Senate that the repeal of the Missouri Compromise annulled all past compromises with the slave power and made future compromise impossible. No more would the Free States tolerate the "disgusting rites" by which the slave hunters sent their dogs, with savage jaws, howling into Massachusetts after men escaping from bondage, Sumner said.
In the course of the uproar that followed Sumner's vehement words, Senator Butler of South Carolina gained the floor and demanded that the Free State senators say whether South Carolina could expect the return of runaway slaves if the Fugitive Slave Law was repealed.
Charles Sumner sat in the desk row in front of Butler's and when Butler spoke, Sumner jerked his chair back from his desk and stood up and faced him. Speaking over Butler's head to the spectators crowded together behind him in the vestibule space and the public gallery at the back of the senate chamber, Sumner shouted out,
"Is thy servant a dog, that he should do this thing?"
Butler's face flushed and he stumbled slightly as he took a step backward.
"Dogs? Dogs?," Butler cried.
Behind Butler, Mason of Virginia leaped to his feet and, stabbing his index finger toward the domed ceiling of the Senate chamber, he hissed at Sumner, "Black Republican, you dare to tell us there are dogs in the Constitution." Other senators shouted out that Sumner should be expelled for dishonoring his solemn oath to support the Constitution which provided that a "person held to service" in a Slave State escaping to a Free State "shall be delivered up" on demand of his master.
As the verbal storm swirled around him, Sumner braced himself against his chair. He stood tight-fisted and scanned the hot, red faces around him with black burning eyes.
"How many are there here," he shouted, "who will stoop with Butler and Mason to be a slave hunter? Who is here who will hunt the bondmen down, flying from Carolina's hateful hell? "
Calls of "censure, Censure," rang out from senators seated on both sides of the aisle, but no one directly answered Sumner's challenge. Sweeping his arm in an arch around the Senate chamber, Sumner continued,
"No Sir. No Sir, I do not believe there are any dogs, however keen their scent or savage their jaws, that can bind me to return your fugitive slaves."
Senator Cass of Michigan rose to remonstrate with Sumner, labeling his outburst "the most un-American and un-patriotic that ever grated on the ears." Douglas of Illinois joined Cass to charge Sumner with uttering obscenities which should be suppressed as "unfit for decent young men to read." Mason chimed in with the rebuke that Sumner's language reeked of "vice in its most odious form."
In rebuttal, Sumner attacked Douglas directly, saying, "No person with the upright form of man can be allowed—" Sumner's voice broke off.
Douglas leaped back to his feet in a rage. "Say it," Douglas shouted.
"I will say it," Sumner retorted; "No person with the upright form of man can be allowed, without violation of all decency, to switch out from his tongue the perpetual stench of offensive personality. . . The nameless animal to which I now refer, is not the proper model for an American senator. Will the Senator from Illinois take notice?"
"I will not imitate you," Douglas shouted back.
Sumner would not stop. "Again the Senator has switched his tongue, and again he fills the Senate with its offensive odor."
When the newspapers reported Sumner's harangue, the public response from the North was highly favorable toward Sumner. The residents of Washington, generally pro-slavery in sympathy, discussed his speech on the street corners, expressing the view that somebody ought to kick the Massachusetts senator down a flight of stairs.
During the next two years, the issue of the settlement of Kansas and the recognition of a territorial government constantly occupied the attention of the Congress. The radical Democrats and Whigs, now transformed into new Republicans, actively supported the migration of people from the Free States to Kansas territory while the slave power in the Democratic Party supported the immigration of Southerners. The Pierce administration appointed a Southerner to act as Territorial Governor and he quickly held elections for a territorial legislature. Since Southern immigrants outnumbered their Northern counterparts early in the process of settling Kansas, the slave power won a majority of the seats in the legislature, which was seated at the town of LeCompton, and it promptly adopted the civil law of Missouri. As time passed, however, settlers from the Free States began to arrive in substantial number and established towns in the northwestern part of the territory. Then they met in convention and organized a shadow legislature seated at Topeka and it adopted a constitution which prohibited any Africans, whether free or slave, from residing in Kansas. In January 1856, President Pierce issued a proclamation which recognized LeCompton as the legitimate legislature and ordered the shadow legislature at Topeka to disband. When the members of the Topeka legislature refused, supporters of the LeCompton legislature sacked the free soil stronghold of Lawrence, Kansas. In retaliation, John Brown and his five sons appeared on the scene and began killing slave-holding settlers in the countryside.
As these events were debated on the floor of the Senate, Sumner continued to bitterly attack his opponents on a personal level, always returning in his arguments to Senator Butler of South Carolina and Senator Mason of Virginia, and rebuking them for swinging the "overseer's lash" in the Senate, as if it were one of their plantations stocked with slaves. During these debates various senators made motions to expel the Massachusetts senator for perjury and treason, but the motions never came to a vote.
Senator Charles Sumner Attacked
Finally, in May 1856, Sumner spoke for three hours, calling the concept of popular sovereignty a "crime against Kansas" by which the people of the Free states were swindled into accepting the repeal of the Missouri Compromise.
Several days after Sumner's "crime against Kansas" speech, Preston Brooks, a young congressman from South Carolina, came into the vestibule of the Senate chamber, carrying a walking stick. The cane had a gold head and tapered from the head down to the end, with a weight of about a pound. At 12:45 p.m. the Senate recessed and most of the senators cleared the chamber except for a scattered few. Brooks came down the center aisle and sat down at a desk several seats removed from Sumner, who was reading from a pile of documents at his desk. When all the spectators had exited the gallery above the Senate floor, Brooks got up from the desk and came down the aisle to a position in front of Sumner. When Sumner looked up at Brooks's call of his name, Brooks began furiously whacking at his head with the cane. Sumner tried to rise, but got caught up in his chair. Finally breaking free, Sumner staggered sideways and fell between the desk rows, while Brooks frantically whipped the cane back and forth across his face and shoulders.
Only when the cane splintered into pieces too small for Brooks to handle did the assault end. When Brooks backed away, Sumner lay motionless on the crimson carpet of the Senate floor. Globs of dark red blood oozed from the cuts and gashes of his face and formed a pool around his head. Slowly, Sumner rolled over on his hands and knees and struggled to rise. Stephen Douglas came into the chamber from the cloakrooms where he had been standing behind the Senate president's chair, but did not approach Sumner. Robert Toombs of Georgia and John Crittenden of Kentucky also appeared in the room but they did not offer Sumner help. By the time Sumner's few friends arrived, Sumner was alone, slumped in his chair, the blood still seeping from his head wounds down his neck, saturating the blue broadcloth coat he wore. The wounds Brooks inflicted on Sumner did not cause permanent physical damage, but they destroyed Sumner's will. Once taken from the Senate chamber, the abolition champion soon left America and traveled through Europe for two years, only returning to his Senate seat in 1859.
The Dred Scott Decision
The breakdown in political civility in the Senate was made permanent in December 1856, when the United States Supreme Court announced its decision in the matter of Dred Scott. Eight years earlier, in 1848, Dred Scott's wife, Harriet, was sued in Missouri state court by a Mr. Emmerson. Emmerson alleged that he had purchased Dred and Harriet from an army officer who had taken the Scotts as slaves from Missouri to army posts in the Free State of Illinois and the Territory of Minnesota, and then returned with them to Missouri. Emmerson's action was tried to a jury who gave verdict for Harriet, but the trial court granted Emmerson a new trial. Harriet appealed from the order granting new trial but lost in the Missouri Supreme Court. Dred Scott then instituted suit against Emmerson in St. Louis Circuit Court. Scott contended that the fact that he and Harriet had been taken voluntarily into Illinois and Minnesota Territory made them free under both Illinois law and the Missouri Compromise. The circuit court agreed with Scott and Emmerson appealed to the Missouri Supreme Court.
The Missouri Supreme Court acknowledged that, as a matter of comity between the courts of the Free and Slave States, many times in the past persons held to service had been adjudged to be free by the courts of the Slave States on the ground that the master had forfeited his chattel interest in such persons because they had been wrongfully held to service in territories or States where slavery was deemed unlawful. Similarly, prior to 1850 at least, many decisions of the Free State courts had held that, in a spirit of comity and in light of the Fugitive Slave Clause in the U.S. Constitution, slaves escaping from a Slave State to a Free State must be returned to the Slave State.
But the laws of other states, the Missouri Supreme Court held, "have no intrinsic right to be enforced beyond the limits of the State for which they were enacted." Since 1850, the Supreme Court observed, the courts of the Free State had repeatedly refused to recognize the legitimacy of the Fugitive Slave Law enacted by Congress as the controlling law of the land. Indeed, the Free State courts by 1856 persistently refused to punish persons who were known to attack federal marshals holding runaway slaves in custody. This conduct on the part of the citizens and courts of the Free states, the Missouri Supreme Court held, justified enforcing the public policy of Missouri which recognized the right of property in persons held to service as paramount.
After the Scotts lost a second time in the Missouri state courts, Emerson sold Dred Scott to John Sanford, a citizen of New York. Scott, alleging that he was a citizen of Missouri, then sued Sanford for his freedom in the Federal District Court in Missouri. Scott based his suit for freedom on the ground that, since the Missouri Compromise had prohibited slavery in that part of the Louisiana territory at the time he had been taken there, he was now free. Sanford opposed the suit on the ground that the federal court lacked jurisdiction because Scott, as an African whose ancestors were brought to America as slaves, could not be a citizen of Missouri. The district court rejected Sanford's argument regarding Scott's lack of standing, but granted judgment for Sanford against Scott's claim that his being taken to Minnesota made him free.
Scott appealed the decision to the United States Supreme Court. Led by Chief Justice Roger Taney, a majority of the Supreme Court, all southerners, refused to recognize that Scott was a citizen of the United States within the meaning of the Constitution and, therefore, the justices held, he could not sue John Sanford in federal court.
Although a free African residing in a state may be recognized by the people of that state to be, like them, a citizen, Taney wrote, he cannot be a citizen of a state, "in the sense in which the word `citizen' is used in the Constitution of the United States." Taney argued that the word "citizen" as used in the constitution is synonymous with the words "people of the United States" which describes the sovereign, the source of the supreme law. In Taney's peculiar view, Dred Scott could not possibly be included as a part of the people of the United States, in 1856, because at the time the people established the constitution as the supreme law of the land, in 1789, Scott's ancestors were considered "beings of an inferior order. . . [so] that they had no rights which the white man was bound to respect." Therefore, Taney concluded, Scott was an alien who lacked the rights, privileges and immunities guaranteed citizens of the United States, one of which was the privilege of bringing suits in its courts.
Chief Justice Taney wasn't satisfied, however, with resolving the matter of Dred Scott by narrowly interpreting the meaning of "citizen of the United States." The Chief Justice and his associates had been in secret communication with James Buchanan who had been elected President in November, 1856. Taney promised Buchanan that the Court would use Dred Scott's case to rule on the issue of whether Congress had the power to make unlawful, white persons forcibly holding black persons to service in the territories of the United States.
In 1820, the Congress had based the enactment of the Missouri Compromise on the express power granted it by the Constitution, "to make all needful rules and regulations respecting the Territory or other property belonging to the United States." Taney, with an apparent majority of the Supreme Court supporting him, rejected the reasonable notion that the Territory clause authorized Congress to enact laws which prohibited citizens of the United States from holding Africans to service in the territories of the United States. The Framers intended the express grant of power to make rules and regulations for the administration of the territories, Taney asserted, to apply only to that particular territory which, at the time of the Constitution's ratification in 1789, was claimed by the United States. The power to regulate the affairs of new territories acquired after 1789, Taney maintained, sprang solely from the express grants of power to make war and treaties, which implied the power to acquire territory.
In the exercise of the latter powers, Taney argued, Congress could make rules and regulations for the new territories acquired by the United States only in a manner which promoted "the interests of the whole people of the United States" on whose behalf the territory was acquired. As the agent of the whole people of the United States, Taney wrote, it was the duty of the general government, in organizing the territories for settlement, to not enact laws which infringed upon the "rights of property of the citizen who might go there to reside." Thus, despite the fact that the laws Congress enacts constitute the supreme law of the land, "anything in the. . . laws of any State to the contrary notwithstanding," in the strange logic of Taney's mind, Congress was powerless to prohibit a person from taking to the territories, a person held to service under only the common law of a Slave state.
Before the whole people made the Constitution the supreme law of the land, in 1789, the sole basis for recognizing a white person's right to hold a black person as property was the common law of the state in which the white person resided. Yet, as the Missouri Supreme Court explained in 1848, with Dred Scott's state court suit for freedom, the state courts had always understood that their respective common laws had "no intrinsic right to be enforced beyond the limits of the State for which they were enacted."
It is true that the whole people did recognize as the supreme law the duty of the general government to deliver up a black person escaping from a Slave State, if the right to hold the black person as property in the state was shown; but the whole people did not write anything in their Constitution that said their general government must recognize the right of white persons to hold black persons as property anywhere else. Chief Justice Taney and his associates could have easily decided, therefore, that, since the general government held the territories in trust to advance the interests of the whole people, it was reasonable for the citizen of a Slave State emigrating to the territories to expect to enjoy the right of property in pigs, cows and horses because the whole people recognized such rights; but it would be unreasonable for the same citizen to expect to enjoy the right of property in man which the whole people did not recognize.
All of the great moments in the Nation's political struggle over slavery—the Missouri Compromise, the Compromise of 1850, the emergence of the doctrine of popular sovereignty, the disintegration of the Democratic and Whig parties, the rise of the Black Republicans and the Dred Scott decision—were certainly nails in the coffin of domestic slavery, but it remained for the nation to produce Abraham Lincoln and place him in command of the executive branch of government, to undo the Union as it was, under the constitution framed by the founders, and to replace it with the constitution we live by today.

The founders used the constitution as the means to control the power of democracy, in order to protect the minority from the tyranny of the majority. If the power of the democracy attempts to usurp the supreme law, twist it into something it is not, whatever the moral ground it claims, the retribution it faces is civil war. At its barest root the American Civil War was about the human impulse not to submit willingly to the power of the majority to oppress. This is the irony of the war; the oppressor becoming the oppressed: It is this sacred heritage--the power of the people to change their government--that both black and white Americans can share; and General Lee's great battleflag, though sorely used in latter times, is the most poignant example of it.
Comments On What Caused The Civil War
♦ Buzz Queen writes:
Joe Ryan replies:
"State Rights" is an abstract legal theory that the language of the Constitution and the circumstances surrounding its ratification suggest strongly the Founders were attuned to at the time they drafted the document. Abolitionism—the favoring of freedom for the Africans—was a regional phenomenon, limited to the politicians of New England and a few outsiders like Salmon Chase of Ohio and Thaddeus Stevens of Pennsylvania. Had the men of the North, generally, been of the mind-set of Chase, Stevens, and Charles Sumner it seems highly likely that civil war would have been averted, as these men, supported by a majority of their fellows, might well have guided Congress into adopting measures that made it practical for the Southern States to change their domestic policy toward the Africans.
♦ Don from New Orleans writes:
Don, to give a lawyer's direct answer to your question, the railroad went to Canada, because the Constitution's Fugitive Slave Clause gave the slaveowner standing in federal court to reclaim the runaway slave in any State where he could be found. See Prigg v. Pennsylvania cited in What Happened in May 1862. As long as the slaveowner can prove his title to the "property" the Federal Court, by virtue of the Fugitive Slave Clause, will order the federal marshal to seize the "property" and return it to its owner.
There is no connection that I can see between the concept of racism as the cause of the war and the fact the railroad went to Canada.
♦Robert Naranda writes:
Joe Ryan replies:
The gentleman's comment is a bit too obtuse for me to grasp the meaning; perhaps some of you can shed a little light. The first sentence of the piece states the theory of the case, the text provides the argument. This is an ordinary method employed to communicate to readers a writer's point of view.
♦ Ken writes:
Secession was illegal and morally incorrect, fueled as it was by the efforts of slave-holding men to create a white male supremacist state. Your efforts to defend these folks is just sad if it wasn't so dangerous.
Joe Ryan replies:
My Gosh, Ken seems wilfully blind to the fact that the United States as a whole, in 1861, was composed of white male racists and as a whole was therefore morally responsible for causing the Civil War.
♦ Laura writes to say:
♦Theresa writes to say:
I have to disagree with your opening statement that racism caused the civil war “plain and simple.” It’s never that plain and simple. Slavery and the spilt over it caused the civil war, and racism is just a part of slavery. I blame the Dark Ages of Europe for the cause of the Civil War! That’s where lack of respect for human life fell to new lows, allowing for widespread acceptance of slavery, and then it was brought by Spain to the New World, to feed the gold lust of the kings, and then, slowly, over the next 300 years a whole society became dependent on slavery, due in no small measure to laziness on the part of whites who wouldn’t labor in hot climates (plain and simple), and because of fear. And it’s this fear that is the root of racism. The fact slavery became embedded in southern culture forced a split in our country, to the extent that some states wanted to secede, but the President at that time—the guy on the penny—was not having any of that, and that’s how the war got started: not just racism plain and simple.
Joe Ryan replies:
I think your reference to fear hits the mark: the theme—Racism caused the Civil War—hangs on the core issue of what the real fear was. If, instead of throwing insults at each other, back and forth across the aisle for ten years (1850-1860), the Free State senators and representatives in Congress had openly and earnestly debated among themselves how the Nation could absorb the Africans into society, as citizens, while, at the same time, preventing the South from descending into the ugly abyss of economic disaster and social catastrophe, the Civil War might well not have happened. But, except for Daniel Webster once, the Free State members of Congress never even broached the subject of how to move the South from an economy dependent upon African slavery (an institution that still exists today) to an economy based on free labor. The reason for this, it seems to me, was their own feelings of racism. Had the majority of Free State members of Congress been of the mind-set of Charles Sumner, Salmon Chase, and John Brown, for example, this debate most certainly would have occurred, and the White people of the South might well have been soothed with the knowledge that, as the Africans became transformed into American citizens, their world would not collapse into chaos, and they would have probably stayed the course.
♦ Ray writes to say:
I can’t disagree with you as racism is defined as thinking your race is better than another, yet I’ve always thought of the civil war as the cause of racism as we know it today. Certainly the agricultural and industrial forces in the south and parts of the north that relied on slavery were the biggest cause of the war while slavery the engine that drove their production, was a resource. Using that scenario I’d have to lay the war on greed.
Joe Ryan replies:
We appreciate your view. Whether then or now, the meaning of racism hasn’t changed: It is the human attitude that influences a class of people, who perceive themselves superior in character or intelligence, to shun social contact with another class. Infected with this attitude it was as impossible for the whites of the North to live with blacks, in 1861, as it was for the whites of the South. The new policy of the Federal Government, to restrict slavery to the existing states, meant that the economic engine of slavery was doomed to sputter out; leaving the South saddled with an alien population that would have no means of supporting itself. Rather than accept the Government’s policy, South Carolina and the Gulf States chose the option of secession in the forlorn hope of maintaining the power of their class.
♦ Wayne writes:
You are so full of .... You twist the real history to fit your own agenda.
Joe Ryan replies:
No one can read my writing and not understand that I stand on General Lee's side of the case, and on Virginia's, the Mother of States. But not Alabama's; Alabama's case requires a different lawyer. Whose side do you stand on?
What do you viewers think?
| http://www.americancivilwar.com/authors/Joseph_Ryan/Articles/Causes-Civil-War/What-Caused-the-American-Civil-War.html | 13
145 | Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body (such as a ball, tire, or wheel) rolls on a surface. It is mainly caused by non-elastic effects, that is, not all the energy that is needed for deformation (or movement) of the wheel, roadbed, etc. is recovered when the pressure is removed. Two forms of this are hysteresis losses, see below, and permanent (plastic) deformation of the object or the surface (e.g. soil). Another cause of rolling resistance lies in the slippage between the wheel and the surface, which dissipates energy. Note that only the last one of these effects involves friction, therefore the name "rolling friction" is to some extent a misnomer.
In analogy with sliding friction, rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction.
Any coasting wheeled vehicle will gradually slow down due to rolling resistance, including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac. Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter, speed, load on wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface. For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete.
Primary cause
A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber.
-- National Academy of Sciences
This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. In this case this decreases the pressure that is needed to keep the two bodies separate.
The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion.
Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica. Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction. Note that railroads also have hysteresis in the roadbed structure.
"Rolling resistance" has different definitions
In the broad sense, specific "rolling resistance" (for vehicles) is the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied. In other words, the vehicle would be coasting if it were not for the force to maintain constant speed. This broad sense includes wheel bearing resistance, the energy dissipated by vibration and oscillation of both the roadbed and the vehicle, and sliding of the wheel on the roadbed surface (pavement or a rail).
But there is an even broader sense which would include energy wasted by wheel slippage due to the torque applied from the engine. This includes the increased power required due to the increased velocity of the wheels where the tangential velocity of the driving wheel(s) becomes greater than the vehicle speed due to slippage. Since power is equal to force times velocity and the wheel velocity has increased, the power required has increased accordingly.
The pure "rolling resistance" for a train is that which happens due to deformation and possible minor sliding at the wheel-road contact. For a rubber tire, an analogous energy loss happens over the entire tire, but it is still called "rolling resistance". In the broad sense, "rolling resistance" includes wheel bearing resistance, energy loss by shaking both the roadbed (and the earth underneath) and the vehicle itself, and by sliding of the wheel, road/rail contact. Railroad textbooks seem to cover all these resistance forces but do not call their sum "rolling resistance" (broad sense) as is done in this article. They just sum up all the resistance forces (including aerodynamic drag) and call the sum basic train resistance (or the like).
Since railroad rolling resistance in the broad sense may be a few times larger than the pure rolling resistance alone, reported values may be in serious conflict, since they may be based on different definitions of "rolling resistance". The train's engines must, of course, provide the energy to overcome this broad-sense rolling resistance.
For highway motor vehicles, there is obviously some energy dissipated in shaking the roadway and the earth beneath, in shaking the vehicle itself, and in sliding of the tires. But other than the additional power required due to torque and wheel bearing friction, non-pure rolling resistance doesn't seem to have been investigated, possibly because the "pure" rolling resistance of a rubber tire is several times higher than the neglected resistances.
Rolling resistance coefficient
The "rolling resistance coefficient", is defined by the following equation:
- is the rolling resistance force (shown in figure 1),
- is the dimensionless rolling resistance coefficient or coefficient of rolling friction (CRF), and
- is the normal force, the force perpendicular to the surface on which the wheel is rolling.
Crr is the force needed to push (or tow) a wheeled vehicle forward (at constant speed on the level with no air resistance) per unit force of weight. It is assumed that all wheels are the same and bear identical weight. Thus Crr = 0.01 means that it would take only 0.01 pound to tow a vehicle weighing one pound. For a 1000 pound vehicle it would take 1000 times more tow force, or 10 pounds. One could say that Crr is in lb(tow-force)/lb(vehicle weight). Since this lb/lb is force divided by force, Crr is dimensionless. Multiply it by 100 and you get the percent (%) of the weight of the vehicle required to maintain slow steady speed. Crr is often multiplied by 1000 to get the parts per thousand, which is the same as kilograms (kg force) per metric ton (tonne = 1000 kg), which is the same as pounds of resistance per 1000 pounds of load or newtons/kilonewton, etc. For the US railroads, lb/ton has traditionally been used; since a short ton is 2000 lb, this is just 2000 × Crr. Thus these are all just measures of resistance per unit vehicle weight. While they are all "specific resistances", sometimes they are just called "resistance", although they are really a coefficient (ratio) or a multiple thereof. If using pounds or kilograms as force units, mass is equal to weight (in earth's gravity a kilogram of mass weighs a kilogram and exerts a kilogram of force), so one could claim that Crr is also the force per unit mass in such units. The SI system would use N/tonne, which is 1000 g × Crr and is force per unit mass, where g is the acceleration of gravity in SI units (meters per second squared).
The above shows the resistance force proportional to the normal force N, but it does not explicitly show any variation with speed, loads, torque, surface roughness, diameter, tire inflation/wear, etc., because Crr itself varies with those factors. It might seem from the above definition of Crr that the rolling resistance is directly proportional to vehicle weight, but it is not.
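To make the definition concrete, here is a minimal Python sketch (the helper names and example numbers are my own, not from any cited source) that computes the rolling resistance force F = Crr × N and expresses Crr in the alternative specific-resistance units described above.

```python
# Minimal sketch: rolling resistance force and unit conversions for Crr.
G = 9.81  # acceleration of gravity, m/s^2

def rolling_resistance_force(crr: float, mass_kg: float) -> float:
    """Return the rolling resistance force in newtons: F = Crr * N, with N = m * g."""
    normal_force = mass_kg * G
    return crr * normal_force

def specific_resistances(crr: float) -> dict:
    """Express a dimensionless Crr in the alternative units discussed in the text."""
    return {
        "percent of vehicle weight": crr * 100,
        "kg-force per tonne (parts per thousand)": crr * 1000,
        "lb per short ton (2000 lb)": crr * 2000,
        "N per tonne (1000*g*Crr)": crr * 1000 * G,
    }

if __name__ == "__main__":
    crr = 0.01      # assumed example value (ordinary car tire on concrete)
    mass = 1000.0   # kg, assumed example vehicle mass
    print(f"F = {rolling_resistance_force(crr, mass):.1f} N")  # about 98 N
    for unit, value in specific_resistances(crr).items():
        print(f"{unit}: {value:.2f}")
```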
There are at least two popular models for calculating rolling resistance.
- "Rolling resistance coefficient (RRC). The value of the rolling resistance force divided by the wheel load. The Society of Automotive Engineers (SAE) has developed test practices to measure the RRC of tires. These tests (SAE J1269 and SAE J2452) are usually performed on new tires. When measured by using these standard test practices, most new passenger tires have reported RRCs ranging from 0.007 to 0.014." In the case of bicycle tires, values of 0.0025 to 0.005 are achieved. These coefficients are measured on rollers, with power meters on road surfaces, or with coast-down tests. In the latter two cases, the effect of air resistance must be subtracted or the tests performed at very low speeds.
- The coefficient of rolling resistance b, which has the dimension of length, is approximately (due to the small-angle approximation) equal to the value of the rolling resistance force times the radius of the wheel divided by the wheel load.
- ISO 18164:2005 is used to test rolling resistance in Europe.
The results of these tests can be hard for the general public to obtain as manufacturers prefer to publicize "comfort" and "performance".
Physical formulas
The coefficient of rolling resistance for a slow rigid wheel on a perfectly elastic surface, not adjusted for velocity, can be estimated from the sinkage:

Crr = sqrt(z/d)

where

- z is the sinkage depth, and
- d is the diameter of the rigid wheel.
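Assuming the sinkage relation given above, a short sketch to evaluate it (the function name and example numbers are illustrative only):

```python
import math

def crr_from_sinkage(sinkage_depth: float, wheel_diameter: float) -> float:
    """Crr = sqrt(z/d) for a slow rigid wheel on a perfectly elastic surface.

    Both arguments must be in the same length unit; the result is dimensionless.
    """
    if not 0 <= sinkage_depth < wheel_diameter:
        raise ValueError("sinkage depth must be non-negative and smaller than the diameter")
    return math.sqrt(sinkage_depth / wheel_diameter)

# Example with assumed numbers: 1 mm sinkage under a 700 mm diameter wheel
print(crr_from_sinkage(0.001, 0.700))  # about 0.038
```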
An empirical formula for Crr for cast iron mine car wheels on steel rails gives the coefficient in terms of:
- D, the wheel diameter in inches, and
- W, the load on the wheel in pounds.
As an alternative to using Crr, one can use b, which is a different rolling resistance coefficient or coefficient of rolling friction with the dimension of length. It is defined by the following formula:

F = N × b / r

where

- F is the rolling resistance force (shown in figure 1),
- r is the wheel radius,
- b is the rolling resistance coefficient or coefficient of rolling friction with dimension of length, and
- N is the normal force (equal to W, not R, as shown in figure 1).
The above equation, where resistance is inversely proportional to radius r, seems to be based on the discredited "Coulomb's law" (see "Depends on diameter" below). Equating this equation with the force given by the rolling resistance coefficient Crr, and solving for b, gives b = Crr·r. Therefore, if a source gives the rolling resistance coefficient (Crr) as a dimensionless coefficient, it can be converted to b, having units of length, by multiplying Crr by the wheel radius r.
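A minimal sketch of the conversion b = Crr·r described above (the helper names and the example radius are assumptions, not from the source):

```python
def b_from_crr(crr: float, wheel_radius: float) -> float:
    """Convert the dimensionless Crr to the length-dimension coefficient b = Crr * r."""
    return crr * wheel_radius

def crr_from_b(b: float, wheel_radius: float) -> float:
    """Convert the length-dimension coefficient b back to the dimensionless Crr."""
    return b / wheel_radius

# Example: Crr = 0.0020 for a passenger rail car, with an assumed 0.46 m wheel radius
print(b_from_crr(0.0020, 0.46))   # 0.00092 m, i.e. roughly 1 mm
```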
Rolling resistance coefficient examples
Table of rolling resistance coefficient examples:
| Crr | b | Description |
|---|---|---|
| 0.0003 to 0.0004 | | "Pure rolling resistance" Railroad steel wheel on steel rail |
| 0.0010 to 0.0024 | 0.5 mm | Railroad steel wheel on steel rail. Passenger rail car about 0.0020 |
| 0.001 to 0.0015 | 0.1 mm | Hardened steel ball bearings on steel |
| 0.0019 to 0.0065 | | Mine car cast iron wheels on steel rail |
| 0.0022 to 0.005 | | Production bicycle tires at 120 psi (8.3 bar) and 50 km/h (31 mph), measured on rollers |
| 0.0025 | | Special Michelin solar car/eco-marathon tires |
| 0.005 | | Dirty tram rails (standard) with straights and curves |
| 0.0045 to 0.008 | | Large truck (Semi) tires |
| 0.0055 | | Typical BMX bicycle tires used for solar cars |
| 0.0062 to 0.015 | | Car tire measurements |
| 0.010 to 0.015 | | Ordinary car tires on concrete |
| 0.0385 to 0.073 | | Stage coach (19th century) on dirt road. Soft snow on road for worst case. |
| 0.3 | | Ordinary car tires on sand |
For example, in earth gravity, a car of 1000 kg on asphalt will need a force of around 100 newtons for rolling (1000 kg × 9.81 m/s2 × 0.01 = 98.1 N).
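The same arithmetic extends directly to other table entries; a short self-contained sketch using a few representative Crr values from the table (the choice of rows is mine):

```python
# Rolling resistance force for a 1000 kg vehicle on different surfaces,
# using representative Crr values taken from the table above.
G = 9.81          # m/s^2
MASS_KG = 1000.0  # assumed example vehicle mass

surfaces = {
    "steel wheel on steel rail (passenger car)": 0.0020,
    "ordinary car tire on concrete": 0.010,
    "ordinary car tire on sand": 0.3,
}

for surface, crr in surfaces.items():
    force_n = crr * MASS_KG * G   # F = Crr * m * g
    print(f"{surface}: {force_n:.0f} N")
# The car tire on concrete gives about 98 N, matching the figure quoted in the text.
```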
Depends on diameter
Stagecoaches and railroads (diameter)
According to Dupuit (1837), rolling resistance (of wheeled carriages with wooden wheels with iron tires) is approximately inversely proportional to the square root of wheel diameter. This rule has been experimentally verified for cast iron wheels (8" - 24" diameter) on steel rail and for 19th century carriage wheels. But there are other tests of carriage wheels that do not agree. Theory of a cylinder rolling on an elastic roadway also gives this same rule. These contradict earlier (1785) tests by Coulomb of rolling wooden cylinders where Coulomb reported that rolling resistance was inversely proportional to the diameter of the wheel (known as "Coulomb's law"). This disputed (or wrongly applied) "Coulomb's law" is still found in handbooks, however.
Pneumatic tires (diameter)
For pneumatic tires on hard pavement, it is reported that the effect of diameter on rolling resistance is negligible (within a practical range of diameters). This is another example of the inapplicability of "Coulomb's law".
Depends on applied torque
The driving torque T to overcome rolling resistance and maintain steady speed on level ground (with no air resistance) can be calculated by:

T = (Vs / Ω) × F

where

- F is the rolling resistance force defined above,
- Vs is the linear speed of the body (at the axle), and
- Ω is its rotational speed.
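A small numerical sketch of the torque relation as reconstructed above; note that Vs/Ω is simply the effective rolling radius (vehicle mass, Crr, and rolling radius below are assumed example values):

```python
# Driving torque needed to overcome rolling resistance only, at steady speed
# on level ground with no air drag: T = (Vs / omega) * F = F * r_effective.
G = 9.81

def driving_torque(crr: float, mass_kg: float, rolling_radius_m: float) -> float:
    """Total torque in N*m across the driven wheels, rolling resistance only."""
    rolling_force = crr * mass_kg * G
    return rolling_force * rolling_radius_m

# Assumed example: 1000 kg car, Crr = 0.01, 0.3 m effective rolling radius
print(driving_torque(0.01, 1000.0, 0.3))  # about 29 N*m
```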
All wheels (torque)
"Applied torque" may either be driving torque applied by a motor (often through a transmission) or a braking torque applied by brakes(including regenerative braking). Such torques results in energy dissipation (above that due to the basic rolling resistance of a freely rolling, non-driven, non-braked wheel). This additional loss is in part due to the fact that there is some slipping of the wheel, and for pneumatic tires, there is more flexing of the sidewalls due to the torque. Slip is defined such that a 2% slip means that the circumferential speed of the driving wheel exceeds the speed of the vehicle by 2%.
A small percentage slip can result in a much larger percentage increase in rolling resistance. For example, for pneumatic tires, a 5% slip can translate into a 200% increase in rolling resistance. This is partly because the tractive force applied during this slip is many times greater than the rolling resistance force and thus much more power per unit velocity is being applied (recall power = force x velocity so that power per unit of velocity is just force). So just a small percentage increase in circumferential velocity due to slip can translate into a loss of traction power which may even exceed the power loss due to basic (ordinary) rolling resistance. For railroads, this effect may be even more pronounced due to the low rolling resistance of steel wheels.
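To illustrate why a few percent of slip can rival the basic rolling loss, here is a rough sketch comparing the two power losses (all input numbers are assumed for illustration; the slip loss is estimated with the simple approximation tractive force × slip × speed):

```python
# Compare power lost to wheel slip with power lost to basic rolling resistance.
G = 9.81

def power_losses(mass_kg, crr, tractive_force_n, slip_fraction, speed_mps):
    """Return (rolling-resistance power, slip power) in watts."""
    rolling_force = crr * mass_kg * G
    p_rolling = rolling_force * speed_mps                   # basic rolling loss
    p_slip = tractive_force_n * slip_fraction * speed_mps   # loss from extra wheel speed
    return p_rolling, p_slip

# Assumed example: 1000 kg car, Crr = 0.01, 2000 N tractive force, 5% slip, 20 m/s
p_roll, p_slip = power_losses(1000, 0.01, 2000, 0.05, 20)
print(f"rolling: {p_roll:.0f} W, slip: {p_slip:.0f} W")
# With these numbers the slip loss (~2000 W) is on the same order as the
# basic rolling loss (~1960 W), even though the slip is only 5%.
```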
Railroad steel wheels (torque)
Slip (also known as creep) is normally roughly directly proportional to tractive effort. An exception is if the tractive effort is so high that the wheel is close to substantial slipping (more than just a few percent as discussed above), then slip rapidly increases with tractive effort and is no longer linear. With a little higher applied tractive effort the wheel spins out of control and the adhesion drops resulting in the wheel spinning even faster. This is the type of slipping that is observable by eye—the slip of say 2% for traction is only observed by instruments. Such rapid slip may result in excessive wear or damage.
Pneumatic tires (torque)
Rolling resistance greatly increases with applied torque. At high torques, which apply a tangential force to the road of about half the weight of the vehicle, the rolling resistance may triple (a 200% increase). This is in part due to a slip of about 5%. See "All wheels (torque)" above for an explanation of why this is reasonable. The rolling resistance increase with applied torque is not linear, but increases at a faster rate as the torque becomes higher.
Depends on wheel load
Railroad steel wheels (load)
The rolling resistance coefficient, Crr, decreases significantly as the weight of the rail car per wheel increases. For example, an empty Russian freight car had about twice the Crr of a loaded car (Crr = 0.002 vs. Crr = 0.001). This same "economy of scale" shows up in testing of mine rail cars. The theoretical Crr for a rigid wheel rolling on an elastic roadbed is inversely proportional to the square root of the load.
If Crr is itself inversely proportional to the square root of the wheel load, then an increase in load of 2% produces only about a 1% increase in rolling resistance force.
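This follows from the square-root rule; a minimal numerical check, assuming Crr = k/√N so that the resistance force F = Crr·N grows like √N (k is an arbitrary constant chosen for illustration):

```python
import math

# Sketch of the inverse-square-root load rule: Crr ~ k / sqrt(N),
# so the resistance force F = Crr * N ~ k * sqrt(N).
# 'k' is an arbitrary constant chosen only for illustration.

def resistance_force(load_n, k=0.06):
    crr = k / math.sqrt(load_n)
    return crr * load_n            # equivalently k * sqrt(load_n)

base_load = 100_000.0              # N per wheel, assumed
f0 = resistance_force(base_load)
f1 = resistance_force(base_load * 1.02)   # 2% more load

print(f"load +2.0% -> resistance +{100 * (f1 / f0 - 1):.2f}%")  # ~ +1.0%
```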
Pneumatic tires (load)
For pneumatic tires, the direction of change in Crr (the rolling resistance coefficient) depends on whether or not tire inflation pressure is increased with increasing load. It is reported that if inflation pressure is increased with load according to an (undefined) "schedule", then a 20% increase in load decreases Crr by 3%. But if the inflation pressure is not changed, then a 20% increase in load results in a 4% increase in Crr. The rolling resistance force then increases by the 20% load increase compounded with the 4% Crr increase (1.20 × 1.04), for a 24.8% overall increase in rolling resistance.
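The combined effect on the resistance force F = Crr × N can be checked directly from the percentage figures quoted above (a sketch, not a general model):

```python
# Combined effect of a load increase on the rolling resistance force F = Crr * N,
# using the percentage figures quoted above.

def combined_increase(load_factor, crr_factor):
    return load_factor * crr_factor - 1.0

# Constant inflation pressure: +20% load, +4% Crr
print(f"constant pressure: +{100 * combined_increase(1.20, 1.04):.1f}%")  # +24.8%

# Pressure raised per the "schedule": +20% load, -3% Crr
print(f"adjusted pressure: +{100 * combined_increase(1.20, 0.97):.1f}%")  # +16.4%
```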
Depends on curvature of roadway
When a vehicle (motor vehicle or railroad train) goes around a curve, rolling resistance usually increases. If the curve is not banked so as to exactly counter the centrifugal force with an equal and opposite centripetal force due to the banking, there will be a net unbalanced sideways force on the vehicle, which results in increased rolling resistance. Banking is also known as "superelevation" or "cant" (not to be confused with the rail cant of a rail). For railroads this is called curve resistance, but for roads it has (at least once) been called rolling resistance due to cornering.
Sound effects
Rolling friction generates sound (vibrational) energy, as mechanical energy is converted to this form of energy due to the friction. One of the most common examples of rolling friction is the movement of motor vehicle tires on a roadway, a process which generates sound as a by-product. The sound generated by automobile and truck tires as they roll (especially noticeable at highway speeds) is mostly due to the percussion of the tire treads, and compression (and subsequent decompression) of air temporarily captured within the treads.
Factors that contribute in tires
Several factors affect the magnitude of rolling resistance a tire generates:
- As mentioned in the introduction: wheel radius, forward speed, surface adhesion, and relative micro-sliding.
- Material - different fillers and polymers in tire composition can improve traction while reducing hysteresis. The replacement of some carbon black with higher-priced silica–silane is one common way of reducing rolling resistance. The use of exotic materials including nano-clay has been shown to reduce rolling resistance in high performance rubber tires. Solvents may also be used to swell solid tires, decreasing the rolling resistance.
- Dimensions - rolling resistance in tires is related to the flex of the sidewalls and the contact area of the tire. For example, at the same pressure, wider bicycle tires flex less in the sidewalls as they roll and thus have lower rolling resistance (although higher air resistance).
- Extent of inflation - Lower pressure in tires results in more flexing of sidewalls and higher rolling resistance. This energy conversion in the sidewalls increases resistance and can also lead to overheating and may have played a part in the infamous Ford Explorer rollover accidents.
- Over-inflating tires (such as bicycle tires) may not lower the overall rolling resistance, as the tire may skip and hop over the road surface. Traction is sacrificed, and overall rolling friction may not be reduced as the wheel rotational speed changes and slippage increases.
- Sidewall deflection is not a direct measurement of rolling friction. A high quality tire with a high quality (and supple) casing will allow for more flex per energy loss than a cheap tire with a stiff sidewall. Again, on a bicycle, a quality tire with a supple casing will still roll easier than a cheap tire with a stiff casing. Similarly, as noted by Goodyear truck tires, a tire with a "fuel saving" casing will benefit the fuel economy through many tread lives (i.e. retreading), while a tire with a "fuel saving" tread design will only benefit until the tread wears down.
- In tires, tread thickness and shape have much to do with rolling resistance. The thicker and more contoured the tread, the higher the rolling resistance. Thus, the "fastest" bicycle tires have very little tread, and heavy-duty trucks get the best fuel economy as the tire tread wears out.
- Diameter effects seem to be negligible, provided the pavement is hard and the range of diameters is limited. See the "Depends on diameter" section above.
- Virtually all world speed records have been set on relatively narrow wheels, probably because of their aerodynamic advantage at high speed, which is much less important at normal speeds.
- Temperature: with both solid and pneumatic tires, rolling resistance has been found to decrease as temperature increases (within a range of temperatures; i.e., there is an upper limit to this effect). For a rise in temperature from 30 °C to 70 °C, the rolling resistance decreased by 20-25%. It is claimed that racers heat their tires before racing.
Railroads: Components of rolling resistance
One may define rolling resistance in the broad sense as the sum of the following components:
1. Wheel bearing resistance
2. Pure rolling resistance
3. Sliding of the wheel on the rail
4. Loss of energy to the roadbed (and earth)
5. Loss of energy to oscillation of railway rolling stock
The wheel bearing resistance may be reported as a specific resistance at the wheel rim, for example as a Crr. Railroads normally use roller bearings, which are either cylindrical (Russia) or tapered (United States). The specific rolling resistance of Russian bearings varies with both wheel loading and speed. It is lowest at high axle loads and intermediate speeds of 60–80 km/h, with a Crr of 0.00013 (axle load of 21 tonnes). For empty freight cars with axle loads of 5.5 tonnes, Crr rises to 0.00020 at 60 km/h; at a low speed of 20 km/h it increases to 0.00024, and at a high speed (for freight trains) of 120 km/h it is 0.00028. The Crr obtained above is added to the Crr of the other components to obtain the total Crr for the wheels.
Comparing rolling resistance of highway vehicles and trains
While the specific rolling resistance of a train is far less than an automobile or truck in terms of resistance force per ton, this does not necessarily mean that the resistance force per passenger or per net ton of freight is less. It all depends on the vehicle weight per passenger or per net ton transported. Thus one needs to know the rolling resistance per passenger (or per net ton) to make such comparisons.
For 1975, Amtrak passenger trains weighed a little over 7 tons per passenger, while automobiles weighed only a little over one ton per passenger. To find the rolling resistance per passenger, one multiplies the pounds (force) per ton (2000 times the rolling resistance coefficient) by the tons per passenger. This means that even if the rolling resistance coefficient is several times greater for the auto than for the train, after multiplication to get pounds per passenger there is not a lot of difference between the two values. Thus there may not be a large difference in the rolling resistance energy used to transport a person by rail as compared to auto.
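A back-of-the-envelope version of this comparison is sketched below. The tons-per-passenger figures follow the text; the Crr values are assumed order-of-magnitude numbers, not data from this article:

```python
# Back-of-the-envelope comparison of rolling resistance per passenger.
# Weights per passenger follow the text; the Crr values are assumed
# order-of-magnitude figures, not data from the source.

def lbf_per_passenger(crr, tons_per_passenger):
    # pounds (force) per ton = 2000 * Crr; multiply by tons per passenger
    return 2000.0 * crr * tons_per_passenger

train = lbf_per_passenger(crr=0.0015, tons_per_passenger=7.0)  # steel wheel on rail
auto = lbf_per_passenger(crr=0.0100, tons_per_passenger=1.2)   # pneumatic tire

print(f"train: {train:.1f} lbf/passenger")  # ~21 lbf
print(f"auto:  {auto:.1f} lbf/passenger")   # ~24 lbf
```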
See also
- Coefficient of friction
- Low-rolling resistance tires
- Maglev (Magnetic Levitation, the elimination of rolling and thus rolling resistance)
- Rolling element bearing
References
- Peck, William Guy (1859). Elements of Mechanics: For the Use of Colleges, Academies, and High Schools. A.S. Barnes & Burr: New York. p. 135. Retrieved 2007-10-09.
- Hibbeler, R.C. (2007). Engineering Mechanics: Statics & Dynamics (Eleventh ed.). Pearson, Prentice Hall. pp. 441–442.
- "User guide of CONTACT, Vollebregt & Kalker's rolling and sliding contact model. Technical report TR09-03 version v12.2. VORtech, 2012.". Retrieved 2012-06-02.
- A handbook for the rolling resistance of pneumatic tires Clark, Samuel Kelly; Dodge, Richard N. 1979 (http://deepblue.lib.umich.edu/handle/2027.42/4274)
- "Tires and Passenger Vehicle Fuel Economy: Informing Consumers, Improving Performance -- Special Report 286. National Academy of Sciences, Transportation Research Board, 2006". Retrieved 2007-08-11.
- Tyres-Online: The Benefits of Silica in Tyre Design
- Астахов, p.85
- Деев, p. 79. Hay, p.68
- Астахов, Chapt. IV, p. 73+; Деев, Sect. 5.2 p. 78+; Hay, Chapt. 6 "Train Resistance" p. 67+
- Астахов, Fig. 4.14, p. 107
- If one were to assume that the resistance coefficients (Crr) for motor vehicles were the same as for trains, then for trains the neglected resistances taken together have a Crr of about 0.0004 (see Астахов, Fig. 4.14, p.107 at 20km/hr and assume a total Crr =0.0010 based on Fig. 3.8, p.50 (plain bearings) and adjust for roller bearings based on a delta Crr of 0.00035 as read from Figs. 4.2 and 4.4 on pp. 74, 76). Compare this Crr of 0.0004 to motor vehicle tire Crr's of at least 10 times higher per "Rolling resistance coefficient examples" in this article
- kgf/tonne is used by Астахов throughout his book
- Деев uses N/T notation. See pp. 78-84.
- Hersey, equation (2), p. 83
- Астахов, p. 81.
- Hay, Fig. 6-2 p.72(worst case shown of 0.0036 not used since it is likely erroneous)
- Астахов, Figs. 3.8, 3.9, 3.11, pp. 50-55; Figs. 2.3, 2.4 pp. 35-36. (Worst case is 0.0024 for an axle load of 5.95 tonnes with obsolete plain (friction --not roller) bearings
- Астахов, Fig. 2.1, p.22
- "Coefficients of Friction in Bearing". Coefficients of Friction. Retrieved 7 February 2012.
- Hersey, Table 6, p.267
- Roche, Schinkel, Storey, Humphris & Guelden, "Speed of Light." ISBN 0-7334-1527-X
- Crr for large truck tires per Michelin
- Green Seal 2003 Report
- Gillespie ISBN 1-56091-199-9 p117
- Baker, Ira O., "Treatise on roads and pavements". New York, John Wiley, 1914. Stagecoach: Table 7, p. 28. Diameter: pp. 22-23. This book reports a few hundred values of rolling resistance for various animal-powered vehicles under various condition, mostly from 19th century data.
- Hersey, subsection: "End of dark ages", p.261
- Hersey, subsection: "Static rolling friction", p.266.
- Williams, 1994, Ch. "Rolling contacts", eq. 11.1, p. 409.
- Hersey, subsection: "Coulomb on wooden cylinders", p. 260
- U.S. National Bureau of Standards, Fig. 1.13
- Some[who?] think that smaller tire wheels, all else being equal, tend to have higher rolling resistance than larger wheels. In some laboratory tests, however, such as Greenspeed test results (accessdate = 2007-10-27), smaller wheels appeared to have similar or lower losses than large wheels, but these tests were done rolling the wheels against a small-diameter drum, which would theoretically remove the advantage of large-diameter wheels, thus making the tests irrelevant for resolving this issue. Another counter example to the claim of smaller wheels having higher rolling resistance can be found in the area of ultimate speed soap box derby racing. In this race, the speeds have increased as wheel diameters have decreased by up to 50%. This might suggest that rolling resistance may not be increasing significantly with smaller diameter within a practical range, if any other of the many variables involved have been controlled for. See talk page.
- Gérard-Philippe Zéhil, Henri P. Gavin, Three-dimensional boundary element formulation of an incompressible viscoelastic layer of finite thickness applied to the rolling resistance of a rigid sphere, International Journal of Solids and Structures, Volume 50, Issue 6, 15 March 2013, Pages 833-842, ISSN 0020-7683, 10.1016/j.ijsolstr.2012.11.020.(journal article;author's page (1);author's page (2))
- Gérard-Philippe Zéhil, Henri P. Gavin, Simple algorithms for solving steady-state frictional rolling contact problems in two and three dimensions, International Journal of Solids and Structures, Volume 50, Issue 6, 15 March 2013, Pages 843-852, ISSN 0020-7683, 10.1016/j.ijsolstr.2012.11.021.(journal article;author's page (1);author's page (2))
- Gérard-Philippe Zéhil, Henri P. Gavin, Simplified approaches to viscoelastic rolling resistance, International Journal of Solids and Structures, Volume 50, Issue 6, 15 March 2013, Pages 853-862, ISSN 0020-7683, 10.1016/j.ijsolstr.2012.09.025.(journal article;author's page (1);author's page (2))
- Roberts, Fig. 17: "Effect of torque transmission on rolling resistance", p. 71
- Деев, p.30 including eq. (2.7) and Fig. 2.3
- Астахов, Figs. 3.8, 3.9, 3.11, pp. 50-55. Hay, Fig. 60-2, p. 72 shows the same phenomena but has higher values for Crr, not reported here since the railroads in 2011 were claiming about the same value as Астахов
- Hersey, Table 6., p. 267
- Per this assumption, the rolling resistance force F equals Crr·N, where N is the normal load force on the wheel due to vehicle weight; if Crr is inversely proportional to √N, then F = k√N for some constant k. It can be readily shown by differentiating F with respect to N using this rule that dF/F = (1/2)(dN/N), i.e. a small percentage increase in load produces half that percentage increase in resistance force.
- Roberts, pp. 60-61.
- C. Michael Hogan, Analysis of Highway Noise, Journal of Soil, Air and Water Pollution, Springer Verlag Publishers, Netherlands, Volume 2, Number 3 / September, 1973
- Gwidon W. Stachowiak, Andrew William Batchelor, Engineering Tribology, Elsevier Publisher, 750 pages (2000) ISBN 0-7506-7304-4
- "Schwalbe Tires: Rolling Resistance".
- The Recumbent Bicycle and Human Powered Vehicle Information Center
- U.S National Bureau of Standards p.? and Williams p.?
- Roberts, "Effect of temperature", p.59
- Астахов, p. 74, Although Астахов list these components, he doesn't give the sum a name.
- Шадур. Л. А. (editor). Вагоны (Russian)(Railway cars). Москва, Транспорт, 1980. pp. 122 and figs. VI.1 p. 123 VI.2 p. 125
- Association of American Railroads, Mechanical Division "Car and Locomotive Encyclopedia", New York, Simmons-Boardman, 1974. Section 14: "Axle journals and bearings". Almost all of the ads in this section are for the tapered type of bearing.
- Астахов, Fig 4.2, p. 76
- Statistics of railroads of class I in the United States, Years 1965 to 1975: Statistical summary. Washington DC, Association of American Railroads, Economics and Finance Dept. See table for Amtrak, p.16. To get the tons per passenger divide ton-miles (including locomotives) by passenger-miles. To get tons-gross/tons-net, divide gross ton-mi (including locomotives) (in the "operating statistics" table by the revenue ton-miles (from the "Freight traffic" table)
- Астахов П.Н. (Russian) "Сопротивление движению железнодорожного подвижного состава" (Resistance to motion of railway rolling stock) Труды ЦНИИ МПС (ISSN 0372-3305). Выпуск 311 (Vol. 311). - Москва: Транспорт, 1966. – 178 pp. perm. record at UC Berkeley (In 2012, full text was on the Internet but the U.S. was blocked)
- Деев В.В., Ильин Г.А., Афонин Г.С. (Russian) "Тяга поездов" (Traction of trains)
Учебное пособие. - М.: Транспорт, 1987. - 264 pp.
- Hay, William W. "Railroad Engineering" New York, Wiley 1953
- Hersey, Mayo D., "Rolling Friction" Transactions of the ASME, April 1969 pp. 260–275 and Journal of Lubrication Technology, January 1970, pp. 83–88 (one article split between 2 journals) Except for the "Historical Introduction" and a survey of the literature, it's mainly about lab. testing of mine railroad cast iron wheels of diameters 8" to 24" done in the 1920s (almost a half century delay between experiment and publication).
- Hoerner, Sighard F., "Fluid dynamic drag", published by the author, 1965. (Chapt. 12 is "Land-Borne Vehicles" and includes rolling resistance (trains, autos, trucks).
- Roberts, G. B., "Power wastage in tires", International Rubber Conference, Washington, D.C. 1959.
- U.S National Bureau of Standards, "Mechanics of Pneumatic Tires", Monograph #132, 1969-1970.
- Williams J A, "Engineering tribology" Oxford University Press, 1994.
| http://en.wikipedia.org/wiki/Rolling_resistance | 13
92 | A wave is the motion of a disturbance in a medium. The medium for ocean waves is water, for example. When a string, fixed at both ends, is given a vertical hit by a stick, a dent appears in it that travels along the string. When it reaches an end point, it reflects and reverses and travels toward the other end. The following figure shows the motion of a single disturbance.
If you hold end A of the above string and try to give it a continuous up-and-down motion, with a little adjustment of the pace of oscillations, you can make at least the following waveforms:
Each wave travels from A to B and gets reflected at B. When each reflected wave reaches point A, it gets reflected again and the process repeats. Of course the hand motion keeps putting energy into the system by constantly generating waves that are in phase with the returned waves creating the above waveforms. Such waves are called "standing waves." The subject of waves is lengthy and mathematically very involved. For now, the above is sufficient to give you an idea of wavelength and standing waves.
Types of Waves:
There are two classifications: one classification is: mechanical and electromagnetic.
Mechanical waves require matter for their transmission. Sound waves, ocean waves, and waves on a guitar string are examples.
Electromagnetic waves can travel both in vacuum and matter. If light could not travel in vacuum, we would not see the Sun. Light is an electromagnetic wave. Radio waves, Ultraviolet waves, and infrared waves are all electromagnetic waves.
Waves can also be classified as transverse and longitudinal. (See Figure)
For a transverse wave the disturbance direction is perpendicular to the propagation direction. Water waves are transverse. Waves on guitar strings are also transverse.
For a longitudinal wave the disturbance direction is parallel to the propagation direction. Waves on a slinky as well as sound waves are longitudinal.
Frequency ( f ):
The frequency ( f ) of a wave is the number of full waveforms generated per second. This is the same as the number of repetitions per second or the number of oscillations per second. The unit is (1/s), or (s⁻¹), or (Hz). If the repeated waveforms have a sine-function shape, the waves are called harmonic waves.
Period ( T ):
Period is the number of seconds per waveform, or the number of seconds per oscillation. It is clear that frequency and period are reciprocals.
T = 1 / f
Also recall the useful relation between the frequency ( f ) and the angular speed ( ω ): ω = 2πf. Here ω is the number of radians per second, while f is the number of turns (cycles) per second, and each cycle is 2π radians.
Wavelength ( λ ):
Wavelength ( λ ) is the distance between two successive points on a wave that are in the same state of oscillation. The distance from A to B, in the following figure, is equal to one wavelength ( λ ) because B is the first point after A that is in the same state of oscillation as A is.
Wavelength may also be defined as the distance between a peak to the next peak, or the distance between a trough to the next trough, as shown.
Wave Speed ( v ):
The wave speed is the distance a wave travels per second. Since each wave source generates ( f ) wavelengths per second and each wavelength is ( λ ) units of length long; therefore the wave speed formula is:
v = f λ.
Example 1: The speed of sound waves at STP conditions is 331 m/s. Calculate the wavelength of a sound wave whose frequency is 1324 Hz at STP conditions.
Solution: Using v = f λ, & solving for λ, yields: λ = v / f ; λ = (331m/s) / (1324/s) = 0.250m
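For readers who prefer to check such numbers in code, here is Example 1 redone as a short Python sketch:

```python
# Example 1 in code: wavelength from wave speed and frequency (v = f * lambda).
v = 331.0      # m/s, speed of sound at STP
f = 1324.0     # Hz

wavelength = v / f
print(f"wavelength = {wavelength:.3f} m")   # 0.250 m
```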
A Good Link to Try: http://surendranath.tripod.com/Applets.html .
A Vibrating String:
A stretched string fixed at two points and brought into oscillations such as a violin string has waves on it that keep going back-and-forth between the two fixed points. If a string is observed closely or by a magnifying glass, at times it appears as shown:
The higher the pitch of the note it is playing, the higher the frequency of oscillations and the shorter the wavelength or the sine-waves that appear on it. The waveforms appear to be stationary, but in reality they are not. They are called "standing waves."
Nodes are points of the medium (the string) that are at the zero-oscillation state, and antinodes are points that are at the maximum-oscillation state.
Now, look at the following example:
Example 2: In a 60.0-cm long violin string, three antinodes are observed. Find the wavelength of the waves on it.
Solution: Each loop has a length of (60.0cm / 3) = 20.0cm
Each wavelength contains two such loops; therefore,
λ = 40.0cm
Speed of waves in a string depends on the tension in the string as well as the mass per unit length of it as explained below:
Speed of Waves in a Stretched String:
The more a string is stretched, the faster waves travel in it. The formula that relates the tension ( F ) in the string and the wave speed v is:
v = (F/μ)^(1/2)
where μ is the mass per unit length of the string.
Proof: The proof of this formula is as follows:
If we model the peak of a wave as it passes through the medium (the string) at speed v as shown, we may treat the peak segment as being under a tensile force F that pulls it in opposite directions. The hump can be viewed as a portion of a circle from A to B with its center at C. The hump is pulled down by a force of 2Fsinθ. This pulling-down force passes through the center and therefore acts as the centripetal force on the segment, which equals Mv²/R; therefore, 2Fsinθ = Mv²/R, and since sinθ ≈ θ for small angles in radians, the formula becomes:
2Fθ = Mv²/R (1)
where M = the mass of the string segment
If we calculate the mass of the hump segment M, the result is M = 2μRθ. This is easy to understand because the length of the hump is 2Rθ and the mass per unit length of the string is defined as μ. In other words, μ = mass / length. Equation (1) takes the form
2Fθ = 2μRθv²/R (2)
and solving for v results in
v = (F/μ)^(1/2)
Answer the following:
What is F and what is v ?
Why are the marked angles equal?
What happens to the (Fcosθ)'s?
What is the total downward force that is trying to bring the string to normal as the wave passes through?
What is the length of arc AB that has a mass of M?
Example 3: A 120.-cm guitar string is under a tension of 400. N. The mass of the string is 0.480 grams. Calculate (a) the mass per unit length of the string and (b) the speed of waves in it. (c) In a diagram, show the number of half-wavelengths (λ/2) that appear in this string if it is oscillating at a frequency of 2083 Hz.
Solution: (a) μ = M / L ; μ = (0.480×10⁻³ kg) / 1.20 m = 4.00×10⁻⁴ kg/m
(b) v = (F/μ)^(1/2) ; v = (400. N / 4.00×10⁻⁴ kg/m)^(1/2) = 1000 m/s (3 sig. fig.)
(c) v = f λ ; λ = v / f ; λ = (1000. m/s) / (2083 /s) = 0.480 m
(1/2)λ = 0.480m / 2 = 0.240m
In a length of 1.20m , 5.00 (λ/2)s do fit, as shown:
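The numbers of Example 3 can be verified with a short Python sketch:

```python
import math

# Example 3 checked in code: wave speed on a stretched string, v = sqrt(F / mu).
L = 1.20           # m, string length
m = 0.480e-3       # kg, string mass
F = 400.0          # N, tension
f = 2083.0         # Hz, oscillation frequency

mu = m / L                      # 4.00e-4 kg/m
v = math.sqrt(F / mu)           # 1000 m/s
wavelength = v / f              # ~0.480 m
half_waves = L / (wavelength / 2)

print(f"mu = {mu:.2e} kg/m, v = {v:.0f} m/s")
print(f"lambda = {wavelength:.3f} m -> {half_waves:.2f} half-wavelengths fit in {L} m")
```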
Traveling Harmonic Waves:
We are interested in finding a formula that calculates the y-value of any point in a one dimensional medium as harmonic waves are traveling in that 1-D medium at speed v. This means that at any point x and at any instant t, we want y(x,t). For harmonic waves, such equation has the general form:
y(x,t) = A sin(kx - ωt + φ)
k is called the wave number and its SI unit is m⁻¹. The above equation is for 1-D harmonic waves traveling along the (+x) axis. If the waves are moving along the (-x) axis, the appropriate equation is:
y(x,t) = A sin(kx + ωt + φ)
If y = 0 at t = 0 and x = 0, then φ = 0. It is important to distinguish between the wave propagation velocity v (along the x-axis) and the velocity vy of the medium's particles (along the y-axis) as transverse waves pass by them. The wave propagation velocity is v = f λ, but the particle velocity in the y-direction is vy = ∂y / ∂t.
Example 4: The equation of certain traveling waves is y(x,t) = 0.0450 sin(25.12x - 37.68t - 0.523) where x and y are in meters, and t in seconds. Determine the following: (a) Amplitude, (b) wave number, (c) wavelength, (d) angular frequency, (e) frequency, (f) phase angle, (g) the wave propagation speed, (h) the expression for the medium's particles velocity as the waves pass by them, and (i) the velocity of a particle that is at x=3.50m from the origin at t=21.0s.
Solution: Comparing this equation with the general form, results in
(a) A = 0.0450 m (b) k = 25.12 m⁻¹ (c) λ = (2π / k) = 0.250 m (d) ω = 37.68 rd/s
(e) f = (ω / 2π) = 6.00 Hz (f) φ = -0.523 rd (g) v = fλ = (6.00 Hz)(0.250 m) = 1.50 m/s
(h) vy = ∂y / ∂ t = 0.0450(- 37.68) cos (25.12x - 37.68t - 0.523)
( i ) vy(3.50m , 21.0s) = 0.0450(- 37.68) cos (25.12*3.50 - 37.68*21.0 - 0.523) = -1.67 m/s
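The same quantities can be extracted from the wave parameters in code; a sketch of Example 4:

```python
import math

# Example 4 in code: y(x, t) = A sin(k x - omega t + phi)
A, k, omega, phi = 0.0450, 25.12, 37.68, -0.523

wavelength = 2 * math.pi / k          # ~0.250 m
frequency = omega / (2 * math.pi)     # ~6.00 Hz
speed = frequency * wavelength        # ~1.50 m/s

def particle_velocity(x, t):
    # v_y = dy/dt = -A * omega * cos(k x - omega t + phi)
    return -A * omega * math.cos(k * x - omega * t + phi)

print(f"lambda = {wavelength:.3f} m, f = {frequency:.2f} Hz, v = {speed:.2f} m/s")
print(f"v_y(3.50 m, 21.0 s) = {particle_velocity(3.50, 21.0):.2f} m/s")  # ~ -1.67 m/s
```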
Standing Harmonic Waves:
When two harmonic waves of equal frequency and amplitude travel through a medium in opposite directions, they combine and the result is a standing wave.
If the equation of the wave going to the right is
A sin(kx - ωt)
and that of one going to the left is
A sin(kx + ωt),
we may add the two to obtain the equation of the combination wave (Gray) as
y(x,t) = A sin(kx - ωt) + A sin(kx + ωt)
Using the trigonometric identity:
sin A + sin B = 2 sin[(A + B)/2] cos[(A - B)/2],
y ( x, t ) = 2A cos(ωt) * sin kx
In this equation, sinkx determines the shape of the standing wave and
2A cos(ωt) determines how its amplitude varies with time.
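A quick numerical spot-check of this superposition identity (the values of A, k, and ω below are arbitrary illustrative choices):

```python
import math

# Numerical spot-check that two opposite-traveling waves add up to a standing wave:
# A sin(kx - wt) + A sin(kx + wt) == 2A cos(wt) sin(kx)
A, k, omega = 0.03, 12.0, 40.0     # arbitrary illustrative values

for x, t in [(0.1, 0.2), (0.37, 1.5), (2.0, 3.3)]:
    left = A * math.sin(k * x - omega * t) + A * math.sin(k * x + omega * t)
    right = 2 * A * math.cos(omega * t) * math.sin(k * x)
    print(f"x={x}, t={t}: {left:.6f} vs {right:.6f}")   # the two columns agree
```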
Resonant Standing Waves on A String:
If the medium in which standing waves are formed is infinite, there is no restriction on the wavelength or frequency of the waves. In the case of a bound medium, such as a string fixed at both ends, standing waves can only be formed for a set of discrete frequencies or wavelengths.
If you hold one end of a rope, say 19ft long, and tie the other end of it to a wall 16ft away from you, there will be a slack of 3ft in it allowing you to swing it up and down and make waves. By adjusting the frequency of the oscillatory motion you give to the end you are holding, you can generate a sequence of waves in the rope that will have an integer number of full loops in it. For any frequency (f), there is a corresponding wavelength (λ ) such that v = f λ .
It is very clear from this equation that, since the waves speed, v , in a given medium is constant, the product f λ is also constant, and if you increase the frequency, the wavelength of the waves in the rope has to decrease. Of course, for resonance, the values of such frequencies, as was mentioned, are discrete, and so are their corresponding wavelengths. All you need to do is to adjust your hand's oscillations for each case to observe a full number of loops in the rope between you and the wall. It is also clear from Example 2 that each loop is one half of the wavelength in each case. When the entire length of the rope is accommodating one loop only, it is called the fundamental frequency and that is the lowest possible frequency for that rope under that particular tension.
The subsequent 2-loop, 3-loop, 4-loop, and ... cases are called the 2nd, 3rd, 4th, and .... harmonics of that fundamental frequency. They are shown on the right.
From the above figure, at resonance, the length L of the string holds a whole number of loops, each λ/2 long:
L = n (λ/2), n = 1, 2, 3, ...
so the resonant frequencies are
fn = n v / (2L) = (n / 2L) (F/μ)^(1/2)
Example 5: Find the frequency of the 4th harmonic waves on a violin string that is 48.0cm long with a mass of 0.300grams and is under a tension of 4.00N.
Solution: Using the above formula, f4 = (4/0.96 m) · [4.00 N / (0.000300 kg / 0.480 m)]^(1/2) = 333 Hz (Verify)
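A small helper function reproduces Example 5 (a sketch; the function name is just for illustration):

```python
import math

# Resonant frequencies of a string fixed at both ends: f_n = (n / 2L) * sqrt(F / mu)
def harmonic_frequency(n, length_m, tension_n, mass_kg):
    mu = mass_kg / length_m
    return (n / (2 * length_m)) * math.sqrt(tension_n / mu)

# Example 5: 4th harmonic of a 48.0 cm violin string, 0.300 g, under 4.00 N
f4 = harmonic_frequency(n=4, length_m=0.480, tension_n=4.00, mass_kg=0.300e-3)
print(f"f_4 = {f4:.0f} Hz")   # ~333 Hz
```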
The Wave Equation:
The one-dimensional wave equation for mechanical waves applied to traveling waves has the following form:
∂²y/∂t² = v² ∂²y/∂x², where v is the speed of waves in the medium such that v = ω / k.
The solution to this equation is y(x,t) = A sin(kx - ωt + φ) .
Example 6: Show that the equation y(x,t) = A sin(kx - ωt + φ) satisfies the wave equation.
Solution: Take the appropriate partial derivatives and verify by substitution.
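One way to carry out Example 6 is symbolically; a sketch using the SymPy library, assuming it is installed:

```python
import sympy as sp

# Verify that y(x, t) = A sin(k x - omega t + phi) satisfies
# the 1-D wave equation  d^2y/dt^2 = v^2 * d^2y/dx^2  with v = omega / k.
x, t, A, k, omega, phi = sp.symbols('x t A k omega phi', positive=True)

y = A * sp.sin(k * x - omega * t + phi)
v = omega / k

lhs = sp.diff(y, t, 2)
rhs = v**2 * sp.diff(y, x, 2)

print(sp.simplify(lhs - rhs))   # prints 0, so the equation is satisfied
```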
Energy Transport on a String:
As a wave travels along a string, it transports energy by being flexed point by point, or dx by dx. By dx, we of course mean differential length. It is easy to calculate the K.E. and P.E. of a differential element as shown.
The conclusion is that the average power transmitted by a wave on a string is Pavg = (1/2)μω²A²v; it is proportional to the squares of the angular speed and the amplitude, and linearly proportional to the wave speed (v) in the string.
Example 7: A 1.00m-long string has a mass of 2.5 grams and is forced to oscillate at 400Hz while under a tensile force of 49N. If the maximum displacement of the string in the direction perpendicular to the waves propagation is 8.00mm, find its average power transmission.
Solution: We need to apply the formula Pavg = 0.5μ(ωA)²v. First μ = M/L, ω = 2πf, A, and v = (F/μ)^(0.5) must be calculated.
μ = 2.5×10⁻³ kg/m (Verify), ω = 2512 rd/s (Verify), v = 140. m/s (Verify), A = 4.00×10⁻³ m, and finally, Pavg = 17.7 watts
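The arithmetic of Example 7 in code, following the solution's use of A = 4.00 mm (half of the stated 8.00 mm displacement):

```python
import math

# Example 7: average power carried by a wave on a string,
# P_avg = 0.5 * mu * (omega * A)^2 * v.
L = 1.00        # m
m = 2.5e-3      # kg
f = 400.0       # Hz
F = 49.0        # N, tension
A = 4.00e-3     # m, amplitude (the solution takes half of the 8.00 mm displacement)

mu = m / L
omega = 2 * math.pi * f
v = math.sqrt(F / mu)
p_avg = 0.5 * mu * (omega * A) ** 2 * v

print(f"mu = {mu:.1e} kg/m, omega = {omega:.0f} rad/s, v = {v:.0f} m/s")
print(f"P_avg = {p_avg:.1f} W")   # ~17.7 W
```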
Chapter 16 Test Yourself 1:
1) A wave is the motion of (a) a particle along a straight line in a back-and-forth manner. (b) a disturbance in a medium. (c) a disturbance in vacuum. (d) both b & c. click here
2) A mechanical wave (a) can travel in vacuum. (b) requires matter for its transmission. (c) both a & b.
3) An electromagnetic wave (a) can travel in vacuum. (b) can travel in matter. (c) both a & b.
4) A longitudinal wave travels (a) perpendicular to the disturbance direction. (b) parallel to the disturbance direction. (c) in one direction only. click here
5) A transverse wave (a) travels perpendicular to the disturbance direction. (b) can also travel parallel to the disturbance direction. (c) travels sidewise.
6) A stick that gives a downward hit to a horizontally stretched string, generates a trough that travels along the string. When the moving trough reaches a fixed end point, it returns not as a trough, but a hump. The reason is that (a) the moving disturbance is not capable of pulling that end point down. (b) conservation of momentum requires the string to be pulled upward by the fixed point and hence the wave reflects. (c) the conservation of gravitational potential energy must be met. (d) both a & b. click here
7) Wavelength is (a) the distance between two crests. (b) the distance between a crest and the next one. (c) the distance between a trough and the next one. (d) b & c.
8) The frequency of a wave is the number per second of (a) full wavelengths generated by a source. (b) full waves passing by a point. (c) a & b. click here
9) The formula for wave speed, in general, is (a) V = λ / f (b) V = f λ (c) V = f / λ.
10) The power transmission by a wave on a string is proportional to the (a) amplitude. (b) square root of the amplitude. (c) square of the amplitude.
11) The power transmission by a wave on a string is proportional to the (a) frequency. (b) square of frequency. (c) square root of frequency. click here
12) The energy transmission by a wave on a string is proportional to the (a) wave speed. (b) square of wave speed. (c) square root of wave speed.
13) In y(x.t) = A sin ( kx - ωt) for a transverse wave, dy/dt gives us the (a) wave speed. (b) wave acceleration. (c) medium's particles speed.
14) For the resonance of a string fixed at both ends, it is possible to generate (a) all (b) only odd (c) only even multiples of λ/2 in its entire length.
15) If the tension in a stretched string is quadrupled, then a disturbance made in it travels (a) 4 times faster. (b) 1/2 times slower. (c) 2 times faster. click here
16) If the tension in a stretched string is increased by a factor of 9, then a disturbance made in it travels (a) 3 times faster. (b) 1/3 times slower. (c) 9 times faster.
17) The speed of waves in a stretched string is proportional to (a) F, the tension. (b) F1/2 . (c) F2. click here
18) The quantity μ = M/L, mass of a string divided by its length, is called (a) mass per unit length. (b) mass length. (c) length per unit mass.
19) The Metric unit for μ is (a) kg/m. (b) slug/ft. (c) gr/cm.
20) Mechanical waves travel faster in a string that is (a) thicker and therefore less flexible. (b) thinner and therefore more flexible. (c) neither a nor b.
21) The speed of waves in a stretched string is (a) directly proportional to μ. (b) inversely proportional to μ. (c) inversely proportional to 1/μ. (d) directly proportional (1/μ)0. 5. click here
22) The tension in a guitar string is 576N and its 1.00m length has a mass of 0.100gram. The waves speed in this stretched string is (a) 2400m/s. (b) 3600m/s. (c) 1800m/s.
23) If two full wavelengths can be observed in this string, the wavelength of the waves is (a) 1.00m (b) 0.500m. (c) 2.00m.
24) What frequency the guitar string in the previous questions is playing for a wavelength of 0.500m? (a) 1200Hz. (b) 400Hz. (c) 4800Hz. click here
25) The distance between a node and the next antinode on a wave is (a) 1/2 λ. (b) 1/4 λ. (c) 1/8 λ. click here
1) Find the corresponding wavelengths for each of the following radio waves: (a) The AM band ranging from 550kHz to 1600kHz. (b) The FM band ranging from 88MHz to 108MHz. The speed of E&M waves is 3.00x108 m/s.
2) A snapshot of a traveling wave taken at t = 0.40s is shown on the right. If the wavelength is 8.0cm and the amplitude 2.4cm, and at t = 0, crest C occurred at x = 0, write the equation of the wave.
3) For a tension of 36.0N in a string, waves travel at 42.0m/s, At what tension do waves travel at a speed of 21.0m/s?
4) Use the equation y( x, t ) = A sin ( kx - ωt ), for traveling waves on a string, to find (a) the slope of the string at any position x and time t. (b) How is the maximum slope related to the wave speed and the maximum particle speed?
5) The equation of a traveling harmonic wave is y = 0.06sin( x/5 - t ) where x and y are in (m) and t in (s). Find the (a) wavelength, (b) period, and (c) wave speed. Assume 3 sig. fig. on all numbers.
6) The amplitude of the standing waves on a stretched string is 3.00mm, and the distance between a node and its nearest antinode is 12.5 cm. If the liner mass density of the string is 3.60 grams per meter, and the tension in the string is 9.00N, write the equation of the standing waves in the string.
7) Two strings S1 and S2 that have the same linear mass density are under tensions F1 and F2 such that F1 = 2F2 ; but have different lengths (L1 = 0.333L2). Find the ratio of their fundamental frequencies.
8) A mechanical oscillator imparts 5.00 watts of power at a frequency of 60.0Hz to a thin metal wire of length 16.0m that weighs 0.4905N. For a tension of 48.0N in the wire, find the amplitude of the generated waves. g = 9.81m/s2.
9) The tensile stress in a steel wire is 2.2x108 Pa. The density of steel is 7.8 grams/cm3. Find the speed of transverse waves in the wire.
10) The differential energy of the nth standing wave in a string that is fixed at both ends is dE=μ(ωA)2dx where μ, ω, and A are the linear mass density, angular frequency, and amplitude of the waves, respectively. Calculate (a) the total energy of the wave along the entire length ( from 0 to L ) of the string, and (b) show that the energy per loop of the standing wave is E = 2π2 μ A2 f v. | http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter016.htm | 13 |
84 | The Physics Philes, lesson 20: Here’s Your Sine
In which graphs are drawn, I radiate radians, and periodic functions are defined.
Welp, it looks like I’ve successfully made it through the algebra portion of my review. At least as far as learning calculus is concerned. However, I still have one more topic to review until I can get my teeth into calc: trigonometry.
I don’t have great memories of high school trigonometry. Just a lot of cursing and tears. But don’t worry. I’m marginally more mature now. I think I can handle it after all these years.
So let’s jump right in.
The past couple of weeks we've been reviewing functions and how they work. But there is a special type of function called a periodic function. In a periodic function, the values repeat over and over at the same rate and at the same time intervals. In other words, the graph will repeat itself after a fixed period. Trigonometric functions are periodic functions. The fact that periodic functions just go on and on forever is an important one. It means that an infinite number of inputs share the same function value. Angles related in this way are called coterminal angles. These angles have the same function value because the difference between them is a multiple of the function's period.
That’s about as clear as mud, isn’t it? Maybe a sample problem will help clear it up.
Find two angles (one positive and one negative) that have the same sine value as Π/4.
This is really pretty easy. The period value of sine is 2Π. (More on sine and the other trigonometric functions later.) So, to find one positive coterminal angle and one negative coterminal angle, we just have to add and subtract 2Π to Π/4, respectively. Let’s add first:
Π/4 + 2Π = Π/4 + 8Π/4 = 9Π/4
Remember, in order to add fractions, we need a common denominator. In this case, we needed to change 2Π to the equivalent 8Π/4. To find the negative coterminal angle, we just need to subtract 8Π/4:
Π/4 – 2Π = Π/4 – 8Π/4 = -7Π/4
See? It’s that easy. We found a positive and a negative angle that is coterminal to Π/4, and thus have the same sine value.
I mentioned earlier that trigonometric functions are periodic functions. That’s all well and good, but what are trigonometric functions? I’m glad you asked.
There are six trigonometric functions. Three are pretty familiar. The other three are a little more esoteric.
This is the graph for sine. Do you see how, in this illustration, the graph starts at the origin and starts over again at 2Π? That means its period is 2Π radians. The graph starts over after it has gone a period of 2Π. The range of sine is -1 ≤ y ≤ 1. That just means that it doesn't go any higher up the y-axis than 1, and it goes no lower than -1. The domain for sine is unrestricted. The sine function has a value of 0 whenever the input is a multiple of Π. So if the input is Π, 2Π, 3Π, etc., the value of sine is 0.
The graph for cosine is actually really similar to the graph for sine; it’s just shifted Π/2 to the right:
In this graph, the blue is cosine and the red is sine. You can see how similar they are. In fact, cosine is a cofunction of sine. That means that sine and cosine have the same domain, range, and period as sine. Cosine has a value of 0 at all “half-Πs.” (For example, Π/2, 3Π/2, etc.)
Tangent is really just the quotient of sin x and cos x. Cosine is in the denominator, which means that the tangent will be undefined when cos x = 0 (which, as you'll remember, is at the half-Πs). Because there are, at times, zeros in the denominator, the tangent graph is riddled with asymptotes. Asymptotes are lines representing an unattainable value that shape a graph. Vertical asymptotes, which are found in tangent graphs, usually indicate the presence of a zero in the denominator. The tangent graph crosses the x-axis at the midpoint between the asymptotes.
Because the graph is undefined at the half-Πs, the domain of tangent does not include those numbers. Its range, however, is all real numbers and its period is Π.
Now we get into the less well-known trigonometric functions. Cotangent is the cofunction of tangent; it’s the quotient of cos x and sin x. In addition, cotangent is the reciprocal of tangent, which means it can be expressed as 1 / tan x.
Just as with tangent, cotangent includes asymptotes when sine (which is in the denominator) is equal to zero, at which point the graph is undefined. This occurs at multiples of Π, which makes sense when you look at the sine graph. It crosses the x-axis at multiples of Π. The range includes all real numbers, domain includes all real numbers except multiples of Π, and the period is Π.
This trigonometric function is the reciprocal of cosine, so the graph is undefined when cos x = 0. Which means – you guessed it! – this function has vertical asymptotes. Secant is different from the graphs we’ve looked at so far because it has no x-intercepts at all.
The range of secant is y ≤ -1 and y ≥ 1, which means that the closest the graph will get to the x-axis is -1 and 1. The period of secant is 2Π.
Finally, the last of the trigonometric functions. The graph of cosecant looks bit like the graph for secant, just shifted over Π/2.
As you can see, the range of cosecant is the same as the range for secant. The closest it will get to the x-axis is -1 and 1. Cosecant is also the reciprocal of sine, so when sin x = 0, you’ll find a vertical asymptote. The domain of cosecant is all real numbers except for those half-Πs, and the period is 2Π.
That’s a lot of graphs, and they are graphs I’ll need to be familiar with if I’m going to take on calculus. But look, four of these functions are based on the sine and cosine. So as long as I know the sine and cosine in a problem, I can derive the values for the rest. In fact, let’s try it.
If cos Θ = 1/3 and sin Θ = √8/3, evaluate tan Θ and sec Θ.
If you’ll remember, tan x = sin x / cos x and sec x = 1 / cos x. All we have to do is plug in those values.
tan Θ = √8/3 / 1/3 = √8/3(3/1) = √8
sec Θ = 1 / 1/3 = 1/1(3/1) = 3
There, see? Easy. No problem at all.
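The same derivation can be checked in a few lines of code (a sketch using the example's values):

```python
import math

# Deriving the other trig values from sine and cosine, as in the example.
cos_theta = 1 / 3
sin_theta = math.sqrt(8) / 3

tan_theta = sin_theta / cos_theta   # sqrt(8) ~ 2.828
sec_theta = 1 / cos_theta           # 3

print(f"tan = {tan_theta:.4f} (sqrt(8) = {math.sqrt(8):.4f})")
print(f"sec = {sec_theta:.4f}")
```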
There is only one more week of review before I dive into the totally unfamiliar world of calculus. That means it’s only one more week until you get to see me flail around and pull my hair out. Should be a good time.
Featured image credit: jonoakley | http://teenskepchick.org/2012/10/29/the-physics-philes-lesson-20-heres-your-sine/ | 13 |
89 | In astronomy, axial precession is a gravity-induced, slow, and continuous change in the orientation of an astronomical body's rotational axis. In particular, it refers to the gradual shift in the orientation of Earth's axis of rotation, which, similar to a wobbling top, traces out a pair of cones joined at their apices in a cycle of approximately 26,000 years (called a Great or Platonic Year in astrology). The term "precession" typically refers only to this largest secular motion; other changes in the alignment of Earth's axis – nutation and polar motion – are much smaller in magnitude.
Earth's precession was historically called the precession of the equinoxes, because the equinoxes moved westward along the ecliptic relative to the fixed stars, opposite to the motion of the Sun along the ecliptic. This term is still used in non-technical discussions, that is, when detailed mathematics are absent. Historically, Hipparchus has been credited with discovering precession of the equinoxes, although evidence from cuneiform tablets suggest that his statements and mathematics relied heavily on Babylonian astronomical materials that had existed for many centuries prior. The exact dates of his life are not known, but astronomical observations attributed to him by Ptolemy date from 147 BC to 127 BC.
With improvements in the ability to calculate the gravitational force between and among planets during the first half of the nineteenth century, it was recognized that the ecliptic itself moved slightly, which was named planetary precession, as early as 1863, while the dominant component was named lunisolar precession. Their combination was named general precession, instead of precession of the equinoxes.
Lunisolar precession is caused by the gravitational forces of the Moon and Sun on Earth's equatorial bulge, causing Earth's axis to move with respect to inertial space. Planetary precession (an advance) is due to the small angle between the gravitational force of the other planets on Earth and its orbital plane (the ecliptic), causing the plane of the ecliptic to shift slightly relative to inertial space. Lunisolar precession is about 500 times greater than planetary precession. In addition to the Moon and Sun, the other planets also cause a small movement of Earth's axis in inertial space, making the contrast in the terms lunisolar versus planetary misleading, so in 2006 the International Astronomical Union recommended that the dominant component be renamed, the precession of the equator, and the minor component be renamed, precession of the ecliptic, but their combination is still named general precession. Many references to the old terms exist in publications predating the change.
Etymologically, precession and procession are terms that relate to motion (derived from the Latin processio, “a marching forward, an advance”). Generally the term procession is used to describe a group of objects moving forward, whereas, the term precession is used to describe a group of objects moving backward. The stars viewed from earth are seen to proceed in a procession from east to west on a daily basis, due to the earth’s diurnal motion, and on a yearly basis, due to the earth’s revolution around the Sun. At the same time the stars can be observed to move slightly retrograde, at the rate of approximately 50 arc seconds per year, a phenomenon known as the “precession of the equinox".
In describing this motion astronomers generally have shortened the term to simply “precession”. And in describing the cause of the motion physicists have also used the term “precession”, which has led to some confusion between the observable phenomenon and its cause, which matters because in astronomy, some precessions are real and others are apparent. This issue is further obfuscated by the fact that many astronomers are physicists or astrophysicists.
It should be noted that the term "precession" used in astronomy generally describes the observable precession of the equinox (the stars moving retrograde across the sky), whereas the term "precession" as used in physics, generally describes a mechanical process.
The precession of the Earth's axis has a number of observable effects. First, the positions of the south and north celestial poles appear to move in circles against the space-fixed backdrop of stars, completing one circuit in 25,772 Julian years (2000 rate). Thus, while today the star Polaris lies approximately at the north celestial pole, this will change over time, and other stars will become the "north star". In approximately 3200 years, the star Gamma Cephei in the Cepheus constellation will succeed Polaris for this position. The south celestial pole currently lacks a bright star to mark its position, but over time precession also will cause bright stars to become south stars. As the celestial poles shift, there is a corresponding gradual shift in the apparent orientation of the whole star field, as viewed from a particular position on Earth.
Secondly, the position of the Earth in its orbit around the Sun at the solstices, equinoxes, or other time defined relative to the seasons, slowly changes. For example, suppose that the Earth's orbital position is marked at the summer solstice, when the Earth's axial tilt is pointing directly toward the Sun. One full orbit later, when the Sun has returned to the same apparent position relative to the background stars, the Earth's axial tilt is not now directly toward the Sun: because of the effects of precession, it is a little way "beyond" this. In other words, the solstice occurred a little earlier in the orbit. Thus, the tropical year, measuring the cycle of seasons (for example, the time from solstice to solstice, or equinox to equinox), is about 20 minutes shorter than the sidereal year, which is measured by the Sun's apparent position relative to the stars. Note that 20 minutes per year is approximately equivalent to one year per 25,772 years, so after one full cycle of 25,772 years the positions of the seasons relative to the orbit are "back where they started". (Other effects also slowly change the shape and orientation of the Earth's orbit, and these, in combination with precession, create various cycles of differing periods; see also Milankovitch cycles. The magnitude of the Earth's tilt, as opposed to merely its orientation, also changes slowly over time, but this effect is not attributed directly to precession.)
For identical reasons, the apparent position of the Sun relative to the backdrop of the stars at some seasonally fixed time, say the vernal equinox, slowly regresses a full 360° through all twelve traditional constellations of the zodiac, at the rate of about 50.3 seconds of arc per year (approximately 360 degrees divided by 25,772), or 1 degree every 71.6 years. This is described as "the age of (a zodiac sign or house)" and historical records became associated with that position of the Sun, and mythology related to the zodiac signs.
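The quoted rates follow directly from the length of the cycle; a quick check using the 25,772-year figure given above:

```python
# Precession rate implied by a 25,772-year cycle.
cycle_years = 25_772

arcsec_per_year = 360 * 3600 / cycle_years   # ~50.3 arcseconds per year
years_per_degree = cycle_years / 360         # ~71.6 years per degree

print(f"{arcsec_per_year:.1f} arcsec/yr, 1 degree every {years_per_degree:.1f} yr")
```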
Though there is still-controversial evidence that Aristarchus of Samos possessed distinct values for the sidereal and tropical years as early as c. 280 BC, the discovery of precession usually is attributed to Hipparchus (190–120 BC) of Rhodes or Nicaea, a Greek astronomer. According to Ptolemy's Almagest, Hipparchus measured the longitude of Spica and other bright stars. Comparing his measurements with data from his predecessors, Timocharis (320–260 BC) and Aristillus (~280 BC), he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century, in other words, completing a full cycle in no more than 36000 years.
Virtually all of the writings of Hipparchus are lost, including his work on precession. They are mentioned by Ptolemy, who explains precession as the rotation of the celestial sphere around a motionless Earth. It is reasonable to presume that Hipparchus, similarly to Ptolemy, thought of precession in geocentric terms as a motion of the heavens, rather than of the Earth.
The first astronomer known to have continued Hipparchus' work on precession is Ptolemy in the second century. Ptolemy measured the longitudes of Regulus, Spica, and other bright stars with a variation of Hipparchus' lunar method that did not require eclipses. Before sunset, he measured the longitudinal arc separating the Moon from the Sun. Then, after sunset, he measured the arc from the Moon to the star. He used Hipparchus' model to calculate the Sun's longitude, and made corrections for the Moon's motion and its parallax (Evans 1998, pp. 251–255). Ptolemy compared his own observations with those made by Hipparchus, Menelaus of Alexandria, Timocharis, and Agrippa. He found that between Hipparchus' time and his own (about 265 years), the stars had moved 2°40', or 1° in 100 years (36" per year; the rate accepted today is about 50" per year or 1° in 72 years). He also confirmed that precession affected all fixed stars, not just those near the ecliptic, and his cycle had the same period of 36000 years as found by Hipparchus.
Most ancient authors did not mention precession and, perhaps, did not know of it. Besides Ptolemy, the list includes Proclus, who rejected precession, and Theon of Alexandria, a commentator on Ptolemy in the fourth century, who accepted Ptolemy's explanation. Theon also reports an alternate theory:
- According to certain opinions ancient astrologers believe that from a certain epoch the solstitial signs have a motion of 8° in the order of the signs, after which they go back the same amount. . . . (Dreyer 1958, p. 204)
Instead of proceeding through the entire sequence of the zodiac, the equinoxes "trepidated" back and forth over an arc of 8°. The theory of trepidation is presented by Theon as an alternative to precession.
Alternative discovery theories
Various assertions have been made that other cultures discovered precession independently of Hipparchus. According to Al-Battani, the Chaldean astronomers had distinguished the tropical and sidereal year, so that by approximately 330 BC they would have been in a position to describe precession, if inaccurately, but such claims generally are regarded as unsupported.
Similar claims have been made that precession was known in Ancient Egypt prior to the time of Hipparchus, but these claims remain controversial. Some buildings in the Karnak temple complex, for instance, allegedly were oriented toward the point on the horizon where certain stars rose or set at key times of the year. A few centuries later, when precession made the orientations obsolete, the temples would be rebuilt. The observation that a stellar alignment has grown wrong, however, does not mean that the Egyptians understood that the stars moved across the sky at the specific rate of about one degree per 72 years. Nonetheless, they kept accurate calendars and if they recorded the date of the temple reconstructions it would be a fairly simple matter to plot the rough precession rate. The Dendera Zodiac, a star-map from the Hathor temple at Dendera from a late (Ptolemaic) age, supposedly records precession of the equinoxes (Tompkins 1971). In any case, if the ancient Egyptians knew of precession, their knowledge is not recorded as such in surviving astronomical texts.
Michael Rice wrote in his Egypt's Legacy, "Whether or not the ancients knew of the mechanics of the Precession before its definition by Hipparchos the Bithynian in the second century BC is uncertain, but as dedicated watchers of the night sky they could not fail to be aware of its effects." (p. 128) Rice believes that "the Precession is fundamental to an understanding of what powered the development of Egypt" (p. 10), to the extent that "in a sense Egypt as a nation-state and the king of Egypt as a living god are the products of the realisation by the Egyptians of the astronomical changes effected by the immense apparent movement of the heavenly bodies which the Precession implies." (p. 56) Following Carl Gustav Jung, Rice says that "the evidence that the most refined astronomical observation was practised in Egypt in the third millennium BC (and probably even before that date) is clear from the precision with which the Pyramids at Giza are aligned to the cardinal points, a precision which could only have been achieved by their alignment with the stars. This fact alone makes Jung's belief in the Egyptians' knowledge of the Precession a good deal less speculative than once it seemed." (p. 31) The Egyptians also, says Rice, were "to alter the orientation of a temple when the star on whose position it had originally been set moved its position as a consequence of the Precession, something which seems to have happened several times during the New Kingdom." (p. 170)
The notion that an ancient Egyptian priestly elite tracked the precessional cycle over many thousands of years plays a central role in the theories expounded by Robert Bauval and Graham Hancock in their 1996 book Keeper of Genesis. The authors claim that monumental building projects of the ancient Egyptians functioned as a map of the heavens, and that associated rituals were an elaborate earthly acting-out of celestial events. In particular, the rituals symbolised the "turning back" of the precessional cycle to a remote ancestral time known as Zep Tepi ("first time") which, the authors calculate, dates to around 10,500 BC. The presumption is, that this is when they believed their records began. Evidence from studies of prehistoric Egypt supports a major change at that time in the culture of the residents of Egypt from being hunter-gatherers that led to settled and agriculturally-based economies with centralized societies. The change seems to have been brought about by climate changes and over-grazing.
There has been speculation that the Mesoamerican Long Count calendar is somehow calibrated against the precession, but this view is not held by professional scholars of Mayan civilization. Milbrath states, however, that "a long cycle 30,000 years involving the Pleiades ... may have been an effort to calculate the precession of the equinox."
A twelfth century text by Bhāskara II says: "sampāt revolves negatively 30000 times in a Kalpa of 4320 million years according to Suryasiddhanta, while Munjāla and others say ayana moves forward 199669 in a Kalpa, and one should combine the two, before ascertaining declension, ascensional difference, etc." Lancelot Wilkinson translated the last of these three verses in a too concise manner to convey the full meaning, and skipped the portion combine the two which the modern Hindi commentary has brought to the fore. According to the Hindi commentary, the final value of period of precession should be obtained by combining +199669 revolutions of ayana with −30000 revolutions of sampaat, to get +169669 per Kalpa, i.e. one revolution in 25461 years, which is near the modern value of 25771 years.
Moreover, Munjāla's value gives a period of 21636 years for ayana's motion, which is the modern value of precession when anomalistic precession is also taken into account. The latter has a period of 136000 years now, but Bhāskara II gives its value at 144000 years (30000 in a Kalpa), calling it sampāt. Bhāskara II did not give any name for the final term after combining the negative sampāt with the positive ayana. The value he gave indicates, however, that by ayana he meant precession on account of the combined influence of orbital and anomalistic precessions, and by sampāt he meant the anomalistic period, but defined it as the equinox. His language is somewhat confused, a point he clarified in his own Vāsanābhāshya commentary on the Siddhānta Shiromani by saying that Suryasiddhanta was not available and he was writing on the basis of hearsay. Bhāskara II did not give his own opinion; he merely cited Suryasiddhanta, Munjāla, and unnamed "others".
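These period figures are simple quotients of the Kalpa length and the stated revolution counts; a quick check, using only the numbers quoted above, is sketched below.

```python
# Check of the periods quoted above; a Kalpa is taken as 4,320 million years.
KALPA_YEARS = 4_320_000_000

def period_years(revolutions_per_kalpa):
    """Years per revolution for a motion that completes the given count in one Kalpa."""
    return KALPA_YEARS / revolutions_per_kalpa

print(period_years(30_000))            # sampat: 144,000 years
print(period_years(199_669))           # ayana alone: about 21,636 years
print(period_years(199_669 - 30_000))  # combined +169,669 per Kalpa: about 25,461 years
```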
The extant Suryasiddhanta supports the notion of trepidation within a range of ±27° at the rate of 54" per year according to traditional commentators, but Burgess opined that the original meaning must have been that of a cyclical motion, for which he quoted the Suryasiddhanta as cited by Bhāskara II.
Middle Ages and Renaissance
In medieval Islamic astronomy, the Zij-i Ilkhani compiled at the Maragheh observatory set the precession of the equinoxes at 51 arc seconds per annum, which is very close to the modern value of 50.2 arc seconds.
In the Middle Ages, Islamic and Latin Christian astronomers treated "trepidation" as a motion of the fixed stars to be added to precession. This theory is commonly attributed to the Arab astronomer Thabit ibn Qurra, but the attribution has been contested in modern times. Nicolaus Copernicus published a different account of trepidation in De revolutionibus orbium coelestium (1543). This work makes the first definite reference to precession as the result of a motion of the Earth's axis. Copernicus characterized precession as the third motion of the earth.
Over a century later, precession was shown in Isaac Newton's Philosophiae Naturalis Principia Mathematica (1687) to be a consequence of gravitation (Evans 1998, p. 246). Newton's original precession equations did not work well, however, and were revised considerably by Jean le Rond d'Alembert and subsequent scientists.
Hipparchus gave an account of his discovery in On the Displacement of the Solsticial and Equinoctial Points (described in Almagest III.1 and VII.2). He measured the ecliptic longitude of the star Spica during lunar eclipses and found that it was about 6° west of the autumnal equinox. By comparing his own measurements with those of Timocharis of Alexandria (a contemporary of Euclid, who worked with Aristillus early in the 3rd century BC), he found that Spica's longitude had decreased by about 2° in about 150 years. He also noticed this motion in other stars. He speculated that only the stars near the zodiac shifted over time. Ptolemy called this his "first hypothesis" (Almagest VII.1), but did not report any later hypothesis Hipparchus might have devised. Hipparchus apparently limited his speculations, because he had only a few older observations, which were not very reliable.
Why did Hipparchus need a lunar eclipse to measure the position of a star? The equinoctial points are not marked in the sky, so he needed the Moon as a reference point. Hipparchus already had developed a way to calculate the longitude of the Sun at any moment. A lunar eclipse happens during full Moon, when the Moon is in opposition. At the midpoint of the eclipse, the Moon is precisely 180° from the Sun. Hipparchus is thought to have measured the longitudinal arc separating Spica from the Moon. To this value, he added the calculated longitude of the Sun, plus 180° for the longitude of the Moon. He did the same procedure with Timocharis' data (Evans 1998, p. 251). Observations such as these eclipses, incidentally, are the main source of data about when Hipparchus worked, since other biographical information about him is minimal. The lunar eclipses he observed, for instance, took place on April 21, 146 BC, and March 21, 135 BC (Toomer 1984, p. 135 n. 14).
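The bookkeeping behind this procedure is just an addition of arcs modulo 360°. The sketch below states it explicitly; the input numbers are illustrative placeholders, not Hipparchus' actual measurements.

```python
# Sketch of Hipparchus' eclipse method for the ecliptic longitude of a star.
def star_longitude(sun_longitude_deg, arc_moon_to_star_deg):
    """At mid-eclipse the Moon sits 180 deg from the Sun, so the star's longitude is
    the computed solar longitude, plus 180 deg, plus the measured arc from the Moon."""
    return (sun_longitude_deg + 180.0 + arc_moon_to_star_deg) % 360.0

# Illustrative values only: Sun computed at 26 deg, star measured 8 deg east of the Moon.
print(star_longitude(26.0, 8.0))   # -> 214.0 deg
```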
Hipparchus also studied precession in On the Length of the Year. Two kinds of year are relevant to understanding his work. The tropical year is the length of time that the Sun, as viewed from the Earth, takes to return to the same position along the ecliptic (its path among the stars on the celestial sphere). The sidereal year is the length of time that the Sun takes to return to the same position with respect to the stars of the celestial sphere. Precession causes the stars to change their longitude slightly each year, so the sidereal year is longer than the tropical year. Using observations of the equinoxes and solstices, Hipparchus found that the length of the tropical year was 365+1/4−1/300 days, or 365.24667 days (Evans 1998, p. 209). Comparing this with the length of the sidereal year, he calculated that the rate of precession was not less than 1° in a century. From this information, it is possible to calculate that his value for the sidereal year was 365+1/4+1/144 days (Toomer 1978, p. 218). By giving a minimum rate he may have been allowing for errors in observation.
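The connection between the two year lengths and the quoted precession rate can be checked directly: the sidereal year exceeds the tropical year by the time the Sun needs to cover the annual precessional shift. A rough check, assuming a mean solar motion of 360° per tropical year:

```python
# Consistency check of Hipparchus' year lengths and his minimum precession rate.
tropical = 365 + 1/4 - 1/300    # 365.24667 days
sidereal = 365 + 1/4 + 1/144    # 365.25694 days (Toomer's reconstruction)

extra_days = sidereal - tropical           # about 0.0103 days per year
mean_solar_motion = 360.0 / tropical       # degrees per day
shift_per_century = 100 * extra_days * mean_solar_motion
print(shift_per_century)                   # about 1.01 degrees, i.e. "not less than 1 deg in a century"
```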
To approximate his tropical year Hipparchus created his own lunisolar calendar by modifying those of Meton and Callippus in On Intercalary Months and Days (now lost), as described by Ptolemy in the Almagest III.1 (Toomer 1984, p. 139). The Babylonian calendar used a cycle of 235 lunar months in 19 years since 499 BC (with only three exceptions before 380 BC), but it did not use a specified number of days. The Metonic cycle (432 BC) assigned 6,940 days to these 19 years producing an average year of 365+1/4+1/76 or 365.26316 days. The Callippic cycle (330 BC) dropped one day from four Metonic cycles (76 years) for an average year of 365+1/4 or 365.25 days. Hipparchus dropped one more day from four Callippic cycles (304 years), creating the Hipparchic cycle with an average year of 365+1/4−1/304 or 365.24671 days, which was close to his tropical year of 365+1/4−1/300 or 365.24667 days. The three Greek cycles were never used to regulate any civil calendar – they only appear in the Almagest, in an astronomical context.
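The quoted average year lengths follow directly from the day and year counts of each cycle; a short check using exact fractions:

```python
from fractions import Fraction

# Average year length (days per year) of each Greek luni-solar cycle.
metonic    = Fraction(6940, 19)                         # 365 + 1/4 + 1/76, about 365.26316
callippic  = Fraction(4 * 6940 - 1, 4 * 19)             # 365 + 1/4, exactly 365.25
hipparchic = Fraction(4 * (4 * 6940 - 1) - 1, 16 * 19)  # 365 + 1/4 - 1/304, about 365.24671

for name, year in [("Metonic", metonic), ("Callippic", callippic), ("Hipparchic", hipparchic)]:
    print(name, float(year))
```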
We find Hipparchus' mathematical signatures in the Antikythera Mechanism, an ancient astronomical computer of the second century BC. The mechanism is based on the solar year, the Metonic cycle (the period after which the Moon returns to the same phase at nearly the same place among the stars; a full Moon recurs at approximately the same position in the sky every 19 years), the Callippic cycle (four Metonic cycles, and more accurate), the Saros cycle, and the Exeligmos cycle (three Saros cycles, used for accurate eclipse prediction). The study of the Antikythera Mechanism shows that the ancients had been using very accurate calendars based on all the aspects of solar and lunar motion in the sky. In fact, the Lunar Mechanism, which is part of the Antikythera Mechanism, depicts the motion of the Moon and its phase, for a given time, using a train of four gears with a pin-and-slot device that gives a variable lunar velocity very close to that required by Kepler's second law, i.e. it takes into account the faster motion of the Moon at perigee and its slower motion at apogee. This discovery suggests that Hipparchus' mathematics was much more advanced than Ptolemy describes in his books, as it appears that he developed a good approximation of Kepler's second law.
Mithraism was a mystery religion or school based on the worship of the god Mithras. Many underground temples were built in the Roman Empire from about the 1st century BC to the 5th century AD. Understanding Mithraism has been made difficult by the near-total lack of written descriptions or scripture; the teachings must be reconstructed from iconography found in mithraea (a mithraeum was a cave or underground meeting place that often contained bas reliefs of Mithras, the zodiac and associated symbols). Until the 1970s most scholars followed Franz Cumont in identifying Mithras with the Persian god Mithra. Cumont's thesis was re-examined in 1971, and Mithras is now believed to be a syncretic deity only slightly influenced by Persian religion.
Mithraism is recognized as having pronounced astrological elements, but the details are debated. One scholar of Mithraism, David Ulansey, has interpreted Mithras (Mithras Sol Invictus – the unconquerable sun) as a second sun or star that is responsible for precession. He suggests the cult may have been inspired by Hipparchus' discovery of precession. Part of his analysis is based on the tauroctony, an image of Mithras sacrificing a bull, found in most of the temples. According to Ulansey, the tauroctony is a star chart. Mithras is a second sun or hyper-cosmic sun and/or the constellation Perseus, and the bull is Taurus, a constellation of the zodiac. In an earlier astrological age, the vernal equinox had taken place when the Sun was in Taurus. The tauroctony, by this reasoning, commemorated Mithras-Perseus ending the "Age of Taurus" (about 2000 BC based on the Vernal Equinox – or about 11,500 BC based on the Autumnal Equinox).
The iconography also contains two torch-bearing boys (Cautes and Cautopates) on each side of the zodiac. Ulansey, and Walter Cruttenden in his book Lost Star of Myth and Time, interpret these to mean ages of growth and decay, or enlightenment and darkness: primal elements of the cosmic progression. Thus Mithraism is thought to have something to do with the changing ages within the precession cycle or Great Year (Plato's term for one complete precession of the equinox).
Changing pole stars
A consequence of the precession is a changing pole star. Currently Polaris is extremely well suited to mark the position of the north celestial pole, as Polaris is a moderately bright star with a visual magnitude of 2.1 (variable), and it is located about one degree from the pole.
On the other hand, Thuban in the constellation Draco, which was the pole star in 3000 BC, is much less conspicuous at magnitude 3.67 (one-fifth as bright as Polaris); today it is invisible in light-polluted urban skies.
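The "one-fifth as bright" comparison follows from the standard stellar magnitude scale (a factor of 100 in brightness for every 5 magnitudes), a relation the text does not spell out; the sketch below makes it explicit.

```python
# Brightness ratio implied by a difference in visual magnitude (standard Pogson scale).
def brightness_ratio(m_faint, m_bright):
    return 10 ** (0.4 * (m_faint - m_bright))

print(brightness_ratio(3.67, 2.1))  # about 4.2, so Thuban is roughly one-fifth as bright as Polaris
```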
The brilliant Vega in the constellation Lyra is often touted as the best north star (it fulfilled that role around 12,000 BC and will do so again around the year 14,000); however, it never comes closer than 5° to the pole.
When Polaris becomes the north star again around AD 27,800, its proper motion means that it will then be farther from the pole than it is now, whereas in 23,600 BC it came closer to the pole.
It is more difficult to find the south celestial pole in the sky at this moment, as that area is a particularly bland portion of the sky, and the nominal south pole star is Sigma Octantis, which with magnitude 5.5 is barely visible to the naked eye even under ideal conditions. That will change from the 80th to the 90th centuries, however, when the south celestial pole travels through the False Cross.
The same situation is seen on a star map: the south celestial pole is moving toward the constellation of the Southern Cross. For the last 2,000 years or so, the Southern Cross has pointed nearly to the south celestial pole. As a consequence of precession, however, the constellation is no longer visible from subtropical northern latitudes, as it was in the time of the ancient Greeks.
Polar shift and equinoxes shift
The images above attempt to explain the relation between the precession of the Earth's axis and the shift in the equinoxes. These images show the position of the Earth's axis on the celestial sphere, a fictitious sphere which places the stars according to their position as seen from Earth, regardless of their actual distance. The first image shows the celestial sphere from the outside, with the constellations in mirror image. The second image shows the perspective of a near-Earth position as seen through a very wide angle lens (from which the apparent distortion arises).
The rotation axis of the Earth describes, over a period of 25,700 years, a small circle (blue) among the stars, centered on the ecliptic north pole (the blue E) and with an angular radius of about 23.4°, an angle known as the obliquity of the ecliptic. The direction of precession is opposite to the daily rotation of the Earth on its axis. The orange axis was the Earth's rotation axis 5,000 years ago, when it pointed to the star Thuban. The yellow axis, pointing to Polaris, marks the axis now.
The equinoxes occur where the celestial equator intersects the ecliptic (red line), that is, where the Earth's axis is perpendicular to the line connecting the centers of the Sun and Earth. (Note that the term "equinox" here refers to a point on the celestial sphere so defined, rather than the moment in time when the Sun is overhead at the Equator, though the two meanings are related.) When the axis precesses from one orientation to another, the equatorial plane of the Earth (indicated by the circular grid around the equator) moves. The celestial equator is just the Earth's equator projected onto the celestial sphere, so it moves as the Earth's equatorial plane moves, and the intersection with the ecliptic moves with it. The positions of the poles and equator on Earth do not change, only the orientation of the Earth against the fixed stars.
As seen from the orange grid, 5,000 years ago, the vernal equinox was close to the star Aldebaran of Taurus. Now, as seen from the yellow grid, it has shifted (indicated by the red arrow) to somewhere in the constellation of Pisces.
Still pictures like these are only first approximations, as they do not take into account the variable speed of the precession, the variable obliquity of the ecliptic, the planetary precession (which is a slow rotation of the ecliptic plane itself, presently around an axis located on the plane, with longitude 174.8764°) and the proper motions of the stars.
The precessional eras of each constellation, often known as Great Months, are approximately:
| Constellation | Year entering | Year exiting |
|---------------|---------------|--------------|
| Taurus        | 4500 BC       | 2000 BC      |
| Aries         | 2000 BC       | 100 BC       |
| Pisces        | 100 BC        | 2700 AD      |
Axial precession is similar to the precession of a spinning top. In both cases, the applied force is due to gravity. For a spinning top, this force tends to be almost parallel to the rotation axis. For the Earth, however, the applied forces of the Sun and the Moon are nearly perpendicular to the axis of rotation.
The Earth is not a perfect sphere but an oblate spheroid, with an equatorial diameter about 43 kilometers larger than its polar diameter. Because of the Earth's axial tilt, during most of the year the half of this bulge that is closest to the Sun is off-center, either to the north or to the south, and the far half is off-center on the opposite side. The gravitational pull on the closer half is stronger, since gravity decreases with distance, so this creates a small torque on the Earth as the Sun pulls harder on one side of the Earth than the other. The axis of this torque is roughly perpendicular to the axis of the Earth's rotation so the axis of rotation precesses. If the Earth were a perfect sphere, there would be no precession.
This average torque is perpendicular to the direction in which the rotation axis is tilted away from the ecliptic pole, so that it does not change the axial tilt itself. The magnitude of the torque from the Sun (or the Moon) varies with the gravitational object's alignment with the Earth's spin axis and approaches zero when it is orthogonal.
Although the above explanation involved the Sun, the same explanation holds true for any object moving around the Earth, along or close to the ecliptic, notably, the Moon. The combined action of the Sun and the Moon is called the lunisolar precession. In addition to the steady progressive motion (resulting in a full circle in about 25,700 years) the Sun and Moon also cause small periodic variations, due to their changing positions. These oscillations, in both precessional speed and axial tilt, are known as the nutation. The most important term has a period of 18.6 years and an amplitude of less than 20 seconds of arc.
In addition to lunisolar precession, the actions of the other planets of the Solar System cause the whole ecliptic to rotate slowly around an axis which has an ecliptic longitude of about 174° measured on the instantaneous ecliptic. This so-called planetary precession shift amounts to a rotation of the ecliptic plane of 0.47 seconds of arc per year (more than a hundred times smaller than lunisolar precession). The sum of the two precessions is known as the general precession.
The tidal force on Earth due to a perturbing body (Sun, Moon or planet) is the result of the inverse-square law of gravity, whereby the gravitational force of the perturbing body on the side of Earth nearest it is greater than the gravitational force on the far side. If the gravitational force of the perturbing body at the center of Earth (equal to the centrifugal force) is subtracted from the gravitational force of the perturbing body everywhere on the surface of Earth, only the tidal force remains. For precession, this tidal force takes the form of two forces which only act on the equatorial bulge outside of a pole-to-pole sphere. This couple can be decomposed into two pairs of components, one pair parallel to Earth's equatorial plane toward and away from the perturbing body which cancel each other, and another pair parallel to Earth's rotational axis, both toward the ecliptic plane. The latter pair of forces creates the following torque vector on Earth's equatorial bulge, with components along the x, y, z directions defined below:
- Tx = (3Gm/r^3) (C − A) sinδ cosδ sinα
- Ty = (3Gm/r^3) (C − A) sinδ cosδ (−cosα)
- Tz = 0
where:
- Gm = standard gravitational parameter of the perturbing body
- r = geocentric distance to the perturbing body
- C = moment of inertia around Earth's axis of rotation
- A = moment of inertia around any equatorial diameter of Earth
- C − A = moment of inertia of Earth's equatorial bulge (C > A)
- δ = declination of the perturbing body (north or south of equator)
- α = right ascension of the perturbing body (east from vernal equinox).
The three unit vectors of the torque at the center of the Earth (top to bottom) are x on a line within the ecliptic plane (the intersection of Earth's equatorial plane with the ecliptic plane) directed toward the vernal equinox, y on a line in the ecliptic plane directed toward the summer solstice (90° east of x), and z on a line directed toward the north pole of the ecliptic.
The value of the three sinusoidal terms in the direction of x (sinδ cosδ sinα) for the Sun is a sine squared waveform varying from zero at the equinoxes (0°, 180°) to 0.36495 at the solstices (90°, 270°). The value in the direction of y (sinδ cosδ (−cosα)) for the Sun is a sine wave varying from zero at the four equinoxes and solstices to ±0.19364 (slightly more than half of the sine squared peak) halfway between each equinox and solstice with peaks slightly skewed toward the equinoxes (43.37°(−), 136.63°(+), 223.37°(−), 316.63°(+)). Both solar waveforms have about the same peak-to-peak amplitude and the same period, half of a revolution or half of a year. The value in the direction of z is zero.
The average torque of the sine wave in the direction of y is zero for the Sun or Moon, so this component of the torque does not affect precession. The average torque of the sine squared waveform in the direction of x for the Sun or Moon is:
- Tx(average) = (3Gm / (2 a^3 (1 − e^2)^(3/2))) (C − A) sinε cosε
where:
- a = semimajor axis of Earth's (Sun's) orbit or Moon's orbit
- e = eccentricity of Earth's (Sun's) orbit or Moon's orbit
and 1/2 accounts for the average of the sine squared waveform, a^3 (1 − e^2)^(3/2) accounts for the average distance cubed of the Sun or Moon from Earth over the entire elliptical orbit, and ε (the angle between the equatorial plane and the ecliptic plane) is the maximum value of δ for the Sun and the average maximum value for the Moon over an entire 18.6 year cycle.
Dividing this average torque by the Earth's spin angular momentum Cω gives the precession speed. The contribution due to the Sun is:
- dψS/dt = [3Gm / (2 a^3 (1 − e^2)^(3/2))]S [(C − A) cosε / (C ω)]E
whereas that due to the Moon is:
- dψL/dt = [3Gm (1 − (3/2) sin^2 i) / (2 a^3 (1 − e^2)^(3/2))]L [(C − A) cosε / (C ω)]E
where i is the angle between the plane of the Moon's orbit and the ecliptic plane. In these two equations, the Sun's parameters are within square brackets labeled S, the Moon's parameters are within square brackets labeled L, and the Earth's parameters are within square brackets labeled E. The term (1 − (3/2) sin^2 i) accounts for the inclination of the Moon's orbit relative to the ecliptic. The term (C − A)/C is Earth's dynamical ellipticity or flattening, which is adjusted to the observed precession because Earth's internal structure is not known with sufficient detail. If Earth were homogeneous the term would equal its third eccentricity squared,
- e''^2 = (a^2 − c^2) / (a^2 + c^2),
where a is the equatorial radius (6378137 m) and c is the polar radius (6356752 m), so e''^2 = 0.003358481.
| Sun | Moon | Earth |
|-----|------|-------|
| Gm = 1.3271244×10^20 m^3/s^2 | Gm = 4.902799×10^12 m^3/s^2 | (C − A)/C = 0.003273763 |
| a = 1.4959802×10^11 m | a = 3.833978×10^8 m | ω = 7.292115×10^−5 rad/s |
| e = 0.016708634 | e = 0.05554553 | ε = 23.43928° |
- dψS/dt = 2.450183×10^−12 /s
- dψL/dt = 5.334529×10^−12 /s
- dψS/dt = 15.948788"/a vs 15.948870"/a from Williams
- dψL/dt = 34.723638"/a vs 34.457698"/a from Williams.
The solar equation is a good representation of precession due to the Sun because Earth's orbit is close to an ellipse, being only slightly perturbed by the other planets. The lunar equation is not as good a representation of precession due to the Moon because the Moon's orbit is greatly distorted by the Sun.
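As a numerical check, the sketch below evaluates the first-order lunisolar expressions given above with the constants from the table and reproduces the quoted rates. The Moon's orbital inclination, i ≈ 5.1567°, is an assumed value, since it is not listed in the table above.

```python
import math

# Constants from the table above (SI units).
GM_SUN,  A_SUN,  E_SUN  = 1.3271244e20, 1.4959802e11, 0.016708634
GM_MOON, A_MOON, E_MOON = 4.902799e12,  3.833978e8,   0.05554553
DYN_ELL = 0.003273763              # (C - A)/C, Earth's dynamical ellipticity
OMEGA   = 7.292115e-5              # Earth's spin rate, rad/s
EPS     = math.radians(23.43928)   # obliquity of the ecliptic
I_MOON  = math.radians(5.1567)     # Moon's orbital inclination to the ecliptic (assumed value)

def precession_rate(gm, a, e, inclination=0.0):
    """First-order precession speed (rad/s) driven by one perturbing body."""
    orbit_factor = 1.5 * gm / (a**3 * (1.0 - e**2) ** 1.5)
    incl_factor  = 1.0 - 1.5 * math.sin(inclination) ** 2
    return orbit_factor * incl_factor * DYN_ELL * math.cos(EPS) / OMEGA

ARCSEC_PER_RAD = 206264.806
SEC_PER_JULIAN_YEAR = 365.25 * 86400

for name, rate in (("Sun", precession_rate(GM_SUN, A_SUN, E_SUN)),
                   ("Moon", precession_rate(GM_MOON, A_MOON, E_MOON, I_MOON))):
    print(name, rate, "rad/s =", rate * SEC_PER_JULIAN_YEAR * ARCSEC_PER_RAD, "arcsec/yr")
# Expected: about 2.45e-12 rad/s (15.95"/a) for the Sun and 5.33e-12 rad/s (34.7"/a) for the Moon.
```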
Simon Newcomb's calculation at the end of the 19th century for general precession (p) in longitude gave a value of 5,025.64 arcseconds per tropical century, which was the generally accepted value until artificial satellites delivered more accurate observations and electronic computers allowed more elaborate models to be calculated. Lieske developed an updated theory in 1976, where p equals 5,029.0966 arcseconds per Julian century. Modern techniques such as VLBI and LLR allowed further refinements, and the International Astronomical Union adopted a new constant value in 2000, and new computation methods and polynomial expressions in 2003 and 2006; the accumulated precession is:
- pA = 5,028.796195×T + 1.1054348×T^2 + higher order terms,
in arcseconds, with T, the time in Julian centuries (that is, 36,525 days) since the epoch of 2000.
The rate of precession is the derivative of that:
- p = 5,028.796195 + 2.2108696×T + higher order terms.
The constant term of this speed (5,028.796195 arcseconds per century in the above equation) corresponds to one full precession circle in 25,771.57534 years (one full circle of 360 degrees divided by 5,028.796195 arcseconds per century), although some other sources put the value at 25,771.4 years, leaving a small uncertainty.
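The conversion from the constant rate term to the full period is a single division of the 1,296,000 arcseconds in a circle by that rate:

```python
# Full precession period implied by the IAU constant term (arcsec per Julian century).
RATE_ARCSEC_PER_CENTURY = 5028.796195
print(360 * 3600 / RATE_ARCSEC_PER_CENTURY * 100)   # about 25,771.575 years per full circle
```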
The precession rate is not a constant, but is (at the moment) slowly increasing over time, as indicated by the linear (and higher order) terms in T. In any case it must be stressed that this formula is only valid over a limited time period. It is clear that if T gets large enough (far in the future or far in the past), the T² term will dominate and p will go to very large values. In reality, more elaborate calculations on the numerical model of the Solar System show that the precessional constants have a period of about 41,000 years, the same as the obliquity of the ecliptic. Note that the constants mentioned here are the linear and all higher terms of the formula above, not the precession itself. That is,
- p = A + BT + CT^2 + …
is an approximation of
- p = a + b sin (2πT/P), where P is the 410-century period.
Theoretical models may calculate the proper constants (coefficients) corresponding to the higher powers of T, but since it is impossible for a (finite) polynomial to match a periodic function over all numbers, the error in all such approximations will grow without bound as T increases. In that respect, the International Astronomical Union chose the best-developed available theory. For up to a few centuries in the past and the future, none of the formulas diverges very much. For up to a few thousand years in the past and the future, most agree to some accuracy. For eras farther out, discrepancies become too large – the exact rate and period of precession may not be computed using these polynomials even for a single whole precession period.
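The divergence can be illustrated with any truncated polynomial matched to a sinusoid near T = 0; the amplitude used below is purely illustrative and is not one of the fitted precession constants.

```python
import math

P = 410.0              # period in centuries (about 41,000 years)
a, b = 5028.8, 500.0   # mean rate, and an illustrative amplitude only

def periodic(T):       # the bounded, periodic behaviour in this toy model
    return a + b * math.sin(2 * math.pi * T / P)

def poly(T):           # low-order polynomial matched to the sinusoid near T = 0
    return a + (2 * math.pi * b / P) * T

for T in (1, 10, 100, 500, 2000):       # Julian centuries from J2000
    print(T, round(periodic(T), 1), round(poly(T), 1))
# The two agree near T = 0, but the polynomial grows without bound for large |T|.
```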
The precession of Earth's axis is a very slow effect, but at the level of accuracy at which astronomers work, it does need to be taken into account on a daily basis. Note that although the precession and the tilt of Earth's axis (the obliquity of the ecliptic) are calculated from the same theory and thus, are related to each other, the two movements act independently of each other, moving in mutually perpendicular directions.
Precession exhibits a secular decrease due to tidal dissipation from 59"/a to 45"/a (a = annum = Julian year) during the 500 million year period centered on the present. After short-term fluctuations (tens of thousands of years) are averaged out, the long-term trend can be approximated by the following polynomials for negative and positive time from the present in "/a, where T is in milliards of Julian years (Ga):
- p− = 50.475838 − 26.368583T + 21.890862T^2
- p+ = 50.475838 − 27.000654T + 15.603265T^2
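Evaluating the two fits at ±250 million years reproduces the quoted decrease from roughly 59"/a to 45"/a; the sketch assumes that negative T (the past) goes with p− and positive T (the future) with p+.

```python
# Long-term precession-rate fits ("/a); T is in billions (milliards) of Julian years.
def p_past(T):     # use with T <= 0 (the past)
    return 50.475838 - 26.368583 * T + 21.890862 * T**2

def p_future(T):   # use with T >= 0 (the future)
    return 50.475838 - 27.000654 * T + 15.603265 * T**2

print(p_past(-0.25))    # about 58.4 "/a, 250 Ma in the past
print(p_future(0.0))    # about 50.5 "/a at present
print(p_future(0.25))   # about 44.7 "/a, 250 Ma in the future
```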
Precession will be greater than p+ by the small amount of +0.135052"/a between +30 Ma and +130 Ma. The jump to this excess over p+ will occur in only 20 Ma beginning now because the secular decrease in precession is beginning to cross a resonance in Earth's orbit caused by the other planets.
According to Ward, in about 1,500 million years, when the distance of the Moon (which is continuously increasing because of tidal effects) has grown from the current 60.3 Earth radii to approximately 66.5, resonances from planetary effects will push the precession period first to 49,000 years and then, when the Moon reaches 68 Earth radii in about 2,000 million years, to 69,000 years. This will be associated with wild swings in the obliquity of the ecliptic as well. Ward, however, used the abnormally large modern value for tidal dissipation. Using the 620-million-year average provided by tidal rhythmites, about half the modern value, these resonances will not be reached until about 3,000 and 4,000 million years, respectively. However, owing to the gradually increasing luminosity of the Sun, the oceans of the Earth will have vaporized long before that time (about 2,100 million years from now).
- Hohenkerk, C.Y., Yallop, B.D., Smith, C.A., & Sinclair, A.T. "Celestial Reference Systems" in Seidelmann, P.K. (ed.) Explanatory Supplement to the Astronomical Almanac. Sausalito: University Science Books. p. 99.
- Astro 101 – Precession of the Equinox, Western Washington University Planetarium, accessed 30 December 2008
- Robert Main, Practical and Spherical Astronomy (Cambridge: 1863) pp.203–4.
- James G. Williams, "Contributions to the Earth's obliquity rate, precession, and nutation", Astronomical Journal 108 (1994) 711–724, pp.712&716. All equations are from Williams.
- IAU 2006 Resolution B1: Adoption of the P03 Precession Theory and Definition of the Ecliptic
- Dennis Rawlins, Continued fraction decipherment: the Aristarchan ancestry of Hipparchos' yearlength & precession DIO (1999) 30–42.
- Neugebauer, O. "The Alleged Babylonian Discovery of the Precession of the Equinoxes", Journal of the American Oriental Society, Vol. 70, No. 1. (Jan. – Mar., 1950), pp. 1–8.
- Midant-Reynes, Béatrix. The Prehistory of Egypt: From the First Egyptians to the First Kings. Oxford: Blackwell Publishers.
- Milbrath, S. "Just How Precise is Maya Astronomy?", Institute of Maya Studies newsletter, December 2007.
- Siddhānta-shiromani, Golādhyāya, section-VI, verses 17–19
- Translation of the Surya Siddhānta by Pundit Bāpu Deva Sāstri and of the Siddhānta Siromani by the Late Lancelot Wilkinson revised by Pundit Bāpu Deva Sāstri, printed by C B Lewis at Baptist Mission Press, Calcutta, 1861; Siddhānta Shiromani Hindi commentary by Pt Satyadeva Sharmā, Chowkhambā Surbhārati Prakāshan, Varanasi, India.
- Vāsanābhāshya commentary Siddhānta Shiromani (published by Chowkhamba)
- cf. Suryasiddhanta, commentary by E. Burgess, ch.iii, verses 9-12.
- Rufus, W. C. (May 1939). "The Influence of Islamic Astronomy in Europe and the Far East". Popular Astronomy 47 (5): 233–238. Bibcode:1939PA.....47..233R.
- van Leeuwen, F. (2007). "HIP 11767". Hipparcos, the New Reduction. Retrieved 2011-03-01.
- Kaler, James B. (Reprint 2002). The ever-changing sky: a guide to the celestial sphere. Cambridge University Press. p. 152. ISBN 978-0521499187.
- The Columbia Electronic Encyclopedia, 6th ed., 2007
- Ivan I. Mueller, Spherical and practical astronomy as applied to geodesy (New York: Frederick Unger, 1969) 59.
- G. Boué & J. Laskar, "Precession of a planet with a satellite", Icarus 185 (2006) 312–330, p.329.
- George Biddel Airy, Mathematical tracts on the lunar and planetary theories, the figure of the earth, precession and nutation, the calculus of variations, and the undulatory theory of optics (third edition, 1842) 200.
- J.L. Simon et al., "Numerical expressions for precession formulae and mean elements for the Moon and the planets", Astronomy and Astrophysics 282 (1994) 663–683.
- Dennis D. McCarthy, IERS Technical Note 13 – IERS Standards (1992) (Postscript, use PS2PDF).
- N. Capitaine et al. (2003), "Expressions for IAU 2000 precession quantities" 685KB, Astronomy & Astrophysics 412, 567–586.
- J. Laskar et al., "A long-term numerical solution for the insolation quantities of the Earth", Astronomy and Astrophysics 428 (2004) 261–285, pp.276 & 278.
- Explanatory supplement to the Astronomical ephemeris and the American ephemeris and nautical almanac
- Precession and the Obliquity of the Ecliptic has a comparison of values predicted by different theories
- A.L. Berger (1976), "Obliquity & precession for the last 5 million years", Astronomy & astrophysics 51, 127
- J.H. Lieske et al. (1977), "Expressions for the Precession Quantities Based upon the IAU (1976) System of Astronomical Constants". Astronomy & Astrophysics 58, 1–16
- W.R. Ward (1982), "Comments on the long-term stability of the earth's obliquity", Icarus 50, 444
- J.L. Simon et al. (1994), "Numerical expressions for precession formulae and mean elements for the Moon and the planets", Astronomy & Astrophysics 282, 663–683
- J.L. Hilton et al. (2006), "Report of the International Astronomical Union Division I Working Group on Precession and the Ecliptic" (pdf, 174KB). Celestial Mechanics and Dynamical Astronomy (2006) 94: 351–367
- Rice, Michael (1997), Egypt's Legacy: The archetypes of Western civilization, 3000–30 BC, London and New York.
- Dreyer, J. L. E.. A History of Astronomy from Thales to Kepler. 2nd ed. New York: Dover, 1953.
- Evans, James. The History and Practice of Ancient Astronomy. New York: Oxford University Press, 1998.
- Pannekoek, A. A History of Astronomy. New York: Dover, 1961.
- Parker, Richard A. "Egyptian Astronomy, Astrology, and Calendrical Reckoning." Dictionary of Scientific Biography 15:706–727.
- Tomkins, Peter. Secrets of the Great Pyramid. With an appendix by Livio Catullo Stecchini. New York: Harper Colophon Books, 1971.
- Toomer, G. J. "Hipparchus." Dictionary of Scientific Biography. Vol. 15:207–224. New York: Charles Scribner's Sons, 1978.
- Toomer, G. J. Ptolemy's Almagest. London: Duckworth, 1984.
- Ulansey, David. The Origins of the Mithraic Mysteries: Cosmology and Salvation in the Ancient World. New York: Oxford University Press, 1989.
- Schütz, Michael: Hipparch und die Entdeckung der Präzession. Bemerkungen zu David Ulansey, Die Ursprünge des Mithraskultes, in: ejms = Electronic Journal of Mithraic Studies, www.uhu.es/ejms/Papers/Volume1Papers/ulansey.doc
- D'Alembert and Euler's Debate on the Solution of the Precession of the Equinoxes
- Bowley, Roger; Merrifield, Michael. "Axial Precession". Sixty Symbols. Brady Haran for the University of Nottingham. | http://en.wikipedia.org/wiki/Axial_precession | 13 |
57 | Geometry is the branch of mathematics that deals with points, lines, planes, and solid bodies and examines their properties, measurements, and mutual relationships in space.
Shapes are said to have dimension. One-dimensional shapes are lines, and two-dimensional shapes are flat figures like circles and squares. Three-dimensional shapes are shapes with depth to them, like cubes and spheres.
Polygons are a type of two-dimensional shape: closed figures with three or more straight edges, or sides. Examples of polygons are triangles, squares, pentagons, and octagons. Triangles have 3 sides, squares have 4, pentagons have 5, and octagons have 8. A figure is not considered a polygon if it has rounded sides or if its sides intersect anywhere other than at the ends of each side. Therefore a circle is not a polygon, and an hourglass shape is not a single polygon; it is two polygons, two triangles. Sometimes, though, a circle is treated as a polygon with an infinite number of sides.
A regular polygon is a polygon in which all of the sides and all of the angles are the same. Therefore, a square is a regular polygon with four sides, while a non-square rectangle is not. The sum of the interior angles in a polygon of n sides is 180° × (n − 2), as the short check below illustrates.
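```python
# Sum of the interior angles of an n-sided polygon: 180 * (n - 2) degrees.
def interior_angle_sum(n):
    return 180 * (n - 2)

for name, n in [("triangle", 3), ("square", 4), ("pentagon", 5), ("octagon", 8)]:
    print(name, interior_angle_sum(n), "degrees")   # 180, 360, 540, 1080
```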
The triangle, a type of polygon, has three special types. An equilateral triangle is a regular polygon: all of its sides are the same length and all of its angles are equal. An isosceles triangle has two equal sides and two equal angles. A scalene triangle has all different sides and angles.
Another type of triangle, which can be either isosceles or scalene, is a right triangle. A right triangle is a triangle where one of the angles is 90°. A right triangle cannot be an equilateral triangle because all of the angles in an equilateral triangle are 60°. There are also two other designations of triangles, acute and obtuse. Acute triangles are triangles where all three angles are less than 90°. An equilateral triangle is an acute triangle. Obtuse triangles contain one angle that is larger than 90°.
Quadrilaterals are shapes with four sides and internal angles adding up to 360°. There are several special types of quadrilaterals. Quadrilaterals with all of their angles equal to 90° are called rectangles. Rectangles with all of their sides equal are called squares. Parallelograms are a kind of quadrilateral with opposite sides equal in length and parallel. Rectangles are considered parallelograms. A rhombus is a parallelogram with all of its sides equal in length. Squares can be considered rhombuses. Trapezoids contain one pair of parallel sides of unequal length.
Shapes have certain measurable properties. Aside from angles and side lengths, which have already been discussed, there are also area and perimeter. Area is the amount of surface enclosed by a shape. The perimeter is the length of all of the sides added together. The perimeter of a circle is also called the circumference, and the circumference of a circle is equal to π times the diameter of the circle. The standard area formulas for the main types of shapes are sketched below.
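A compact summary of the usual formulas for the shapes discussed on this page, stated as code for convenience (r is a radius, l a length or slant height, w a width, b a base, h a height, s a side); the sphere, cylinder, and cone surface areas mentioned in the next paragraph are included as well.

```python
import math

# Area and perimeter of the common flat shapes.
def circle(r):                 return {"area": math.pi * r**2, "circumference": 2 * math.pi * r}
def rectangle(l, w):           return {"area": l * w, "perimeter": 2 * (l + w)}
def triangle(b, h, sides):     return {"area": 0.5 * b * h, "perimeter": sum(sides)}
def parallelogram(b, h, s):    return {"area": b * h, "perimeter": 2 * (b + s)}
def trapezoid(a, b, h, sides): return {"area": 0.5 * (a + b) * h, "perimeter": sum(sides)}

# Surface areas of the rounded solids mentioned in the next paragraph.
def sphere_surface(r):          return 4 * math.pi * r**2
def cylinder_surface(r, h):     return 2 * math.pi * r**2 + 2 * math.pi * r * h
def cone_surface(r, slant):     return math.pi * r**2 + math.pi * r * slant

print(circle(5))            # circumference 31.4..., i.e. pi times the diameter of 10
print(sphere_surface(1.0))  # 4*pi, about 12.57
```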
Three-dimensional shapes have measurable properties as well. Instead of area and perimeter, there are volume and surface area. For shapes with no rounded surfaces, only flat sides, the surface area is the sum of the areas of the faces of the shape. The surface area of rounded shapes is slightly more complicated: a sphere of radius r has surface area 4πr², a cylinder of radius r and height h has surface area 2πr² + 2πrh, and a cone of radius r and slant height l has surface area πr² + πrl (the standard formulas listed in the sketch above). | http://www.nde-ed.org/EducationResources/Math/Math-Geometry.htm | 13
58 | V. Theory of Distribution of Variables in the Sea
It cannot be too strongly emphasized that the ocean is three-dimensional and that the distribution of properties or the type of motion must be represented in space. For this purpose a convenient system of coordinates is needed. Any point in the ocean can be designated by means of its geographic latitude and longitude and its depth below sea level, but if one deals with a small area one may consider the surface of the earth within that area as flat, and can introduce ordinary rectangular coordinates with the horizontal axes at sea level and the vertical axis positive downward. By “sea level” is meant not the actual sea level but an ideal sea level, which is defined as a surface along which no component of gravity acts. The difference between the actual and the ideal sea level will be further explained when dealing with the distribution of pressure (p. 406).
The location in the ocean space of any given surface is completely determined if in every latitude and longitude one knows the depth of the surface below the ideal sea level. In a chart this surface can be represented by means of lines of equal depth below sea level (isobaths), which together render it picture of the topography of the surface. Thus, the topography of the sea bottom is shown by isobaths drawn at selected intervals of depth.
The quantities that must be considered when dealing with the sea are either scalars or vectors. A scalar quantity is a physical quantity whose measure is completely described by a number that depends on the selected system of units. Pressure, temperature, salinity, density, and oxygen content can be mentioned as examples of scalar quantities. A vector is a physical quantity that is completely described by magnitude and direction. The velocity of a particle, the acceleration of a particle, and the forces acting on a particle are examples of vectors.
The magnitude of a vector, such as the numerical value of the velocity of a particle, is a scalar quantity. A vector can be represented by means of its components along the axes of a coordinate system, and these components are scalar quantities.
A continuous fluid is characterized, at every point in the space which it occupies, by a number of different properties. The space distribution of one particular property is called the field of that property. The field is called a scalar field if the property is a scalar quantity, and a vector field if the quantity is a vector. In the ocean there are scalar fields such as the pressure field, the temperature field, and the density field, and there are vector fields such as the velocity field, the acceleration field, and so on.
The term field was first applied to a vector field in order to describe the distribution of electromagnetic forces. Every student of physics has seen the magnetic field of force demonstrated by means of iron filings placed on a card above a magnet, but this experiment brings out only certain characteristics of the field. It shows the direction of the magnetic forces in one single plane, but it does not show the space distribution or the magnitude of the force of the field.
A scalar field is completely represented by means of equiscalar surfaces—that is, surfaces along which the scalar quantity has the same numerical value. The temperature field in the ocean, for instance, would be completely described if one knew exactly the form of the isothermal surfaces, and, similarly, the pressure field would be fully represented if one knew the form of the isobaric surfaces. However, it is impracticable to prepare space models that show the actual configuration of isothermal surfaces or other equiscalar surfaces in the ocean, and it would be impossible to publish such representations. For practical purposes one must select other forms of representation. One widely used method is to show the lines of intersection between equiscalar surfaces and the coordinate surfaces. A chart showing the distribution of temperature at sea level is an example of such representation. In this case the sea level represents one of the principal coordinate surfaces, and the isotherms represent the lines at which the surfaces of equal temperature in the sea intersect the sea surface. Similarly, a chart showing the distribution of temperature at a depth of 1000 m shows the lines along which the isothermal surfaces intersect the 1000-m surface, whereas the temperature distribution in a vertical section shows the lines along which the isothermal surfaces intersect the vertical plane that is under consideration.
A series of horizontal charts of isotherms in surfaces at different distances below sea level give a representation of the temperature field in the ocean, and a series of vertical sections showing isotherms give another representation of the same field.
On the other hand, one can make use of an entirely different method of representation. Instead of showing the lines along which the isothermal surfaces intersect a coordinate surface, one can represent the isothermal surface itself and can show the lines along which the coordinate surfaces at different distances below sea level intersect that surface. Such charts show the topography of the surface in question.
These topographic charts would represent charts of absolute topography, because it is assumed that the depths below the ideal sea level are known. This ideal sea level, however, is a fictitious level that cannot be determined by observations, and all measurements have to be made from the actual sea level. In practice, therefore, the topography of a surface in the ocean will not represent the absolute topography but a relative topography referred to the unknown shape of the actual sea surface. In many instances one need not take the difference between absolute and relative topography into account, because it generally amounts to less than 1 m. For instance, it can be neglected when one deals with isothermal surfaces, because the change in temperature on a fraction of a meter is generally negligible. When dealing with the isobaric surfaces, on the other hand, as will be explained in detail when discussing the field of pressure, one must discriminate sharply between absolute and relative topographies.
These matters have been set forth explicitly, because it is essential to bear in mind that one must always consider distribution in space, which can be fully described by means of equiscalar surfaces. These, however, may have highly complicated forms.
The mathematical definition of an equiscalar surface can be written s = f(x, y, z) = constant.
In a vertical section in the x-z plane the equiscalar curves are similarly defined by s = f(x, z) = constant.
So far, the discussion has dealt with equiscalar surfaces in general. In practice, one may select these surfaces so that there is a constant difference between the value of the variable at any two surfaces. These surfaces are called standard equiscalar surfaces. In the case of temperature, the isothermal surfaces might be selected for every one degree of temperature; in the case of salinity, the isohaline surfaces might be selected for every 0.1 ‰, and so on. These surfaces would divide the space into thin layers characterized by a constant difference of the quantity at the two boundary surfaces of every layer. Such layers are called equiscalar sheets. It should be noted that the scalar is not constant within this sheet but has a constant average value. It is evident that the thickness of these sheets represents the rate at which the scalar varies in a direction at right angles to the equiscalar surfaces. Where the sheets are thin the variation is great, but where the sheets are thick the variation is small. The rate of variation can be represented by means of a vector whose direction is normal to the equiscalar surface and whose magnitude is inversely proportional to the thickness of the sheet. The vector representing the rate of decrease is generally called the gradient (temperature gradient, pressure gradient), and the vector representing the rate of increase is called the ascendant. If the scalar is called s, then the gradient, G, and the ascendant, A, are defined by the equations A = ∇s (the vector with components ∂s/∂x, ∂s/∂y, ∂s/∂z) and G = −∇s = −A.
If the field is represented by means of a sufficient number of surfaces, these surfaces will completely define the gradients and ascendants that are characteristic of the distribution. Thus, the special vector fields of gradients and ascendants are entirely described by means of systems of equiscalar surfaces, but other vector fields cannot be described in that manner. Vector fields will be dealt with in chapter XII.
Relation between the Distribution of Properties and the Currents in the Sea
Consider any scalar quantity, s (temperature, salinity, pressure, oxygen content, and so on), the distribution of which is continuous in space and time, so that it can be represented as a function of time and the three space coordinates, s = f(t,x,y,z). Let us assume that this scalar quantity can be considered a property of the individual particles of the fluid. A particle in motion after a time dt will be in a new locality, x + dx, y + dy, z + dz, where the quantity has the value s + ds, with ds = (∂s/∂t)dt + (∂s/∂x)dx + (∂s/∂y)dy + (∂s/∂z)dz.
Dividing by dt and considering that dx/dt, dy/dt and dz/dt represent the components of the velocity, one obtains ds/dt = ∂s/∂t + vx(∂s/∂x) + vy(∂s/∂y) + vz(∂s/∂z).   (V, 4)
A few important points can be brought out by means of the above equation: (1) the distribution of any scalar quantity is stationary—that is, independent of time if the local change is zero (∂s/∂t = 0); (2) the advection terms disappear if there is no motion or if the field is uniform—that is, if either vx = vy = vz = 0 or ∂s/∂x = ∂s/∂y = ∂s/∂z = 0; (3) when the individual change is zero (ds/dt = 0), the local change is equal to the advection but is of opposite sign; (4) if the field of a property is stationary (∂s/∂t = 0) and if, further, the individual time change is zero (ds/dt = 0), equation (V, 4) is reduced to vx(∂s/∂x) + vy(∂s/∂y) + vz(∂s/∂z) = 0, which means that the motion must be directed along the equiscalar surfaces, as can be seen from equation (V, 1) or by examination of the two-dimensional case.
Distribution of Conservative Concentrations in the Sea
The discussion has so far been of a purely formalistic nature. If one goes a step further and considers the processes that maintain or tend to alter the existing distributions, it becomes necessary to distinguish between different kinds of processes and of concentrations.
The processes that tend to modify the concentrations can be divided into two groups: external processes, which are active only at the boundary surfaces of the fluid, and internal processes, which are active anywhere in the fluid. The external processes are of importance in determining the concentrations at the boundaries, and the internal processes, together with the boundary values, determine the distribution throughout the fluid.
By conservative concentrations are meant concentrations that are altered locally, except at the boundaries, by processes of diffusion and advection only. Heat content and salinity are two outstanding examples of conservative concentrations. Consider a cube the surfaces of which are of unit area and are normal to the coordinate axes. Through the two surfaces that are normal to the x axis, diffusion leads to a transport in unit time of (Ax/ρ)1(∂s/∂x)1 and (Ax/ρ)2(∂s/∂x)2, respectively, where both the coefficient, Ax, and the derivative, ∂s/∂x, may vary in the x direction. The coefficient of diffusion enters here in the "kinematic" form (p. 470) as A/ρ, where A is the eddy diffusivity, because concentrations have been defined as amounts per unit volume. The difference per unit length of these transports, ∂/∂x[(Ax/ρ)(∂s/∂x)], represents the net change of concentration in the unit volume due to diffusion. In the presence of a current in the x direction, there will also be a net change of concentration due to advection. The concentration that a current of velocity, vx, transports through a unit surface in unit time is equal to svx, and, if this transport changes in the direction of flow, the concentration per unit volume is altered by −∂(svx)/∂x. Similar considerations are applicable to transport through the other surfaces of the cube, and the combined local change of concentration, therefore, is the sum of terms representing diffusion and advection: ∂s/∂t = ∂/∂x[(Ax/ρ)(∂s/∂x)] + ∂/∂y[(Ay/ρ)(∂s/∂y)] + ∂/∂z[(Az/ρ)(∂s/∂z)] − ∂(svx)/∂x − ∂(svy)/∂y − ∂(svz)/∂z.   (V, 5)
Taking equation (V, 4) into consideration, one obtains ds/dt = ∂/∂x[(Ax/ρ)(∂s/∂x)] + ∂/∂y[(Ay/ρ)(∂s/∂y)] + ∂/∂z[(Az/ρ)(∂s/∂z)] − s(∂vx/∂x + ∂vy/∂y + ∂vz/∂z).   (V, 6)
In practice these equations must be greatly simplified. Consider, for example, a two-dimensional system in which the velocity is directed along the x axis, in which diffusion in the x direction can be neglected, and in which it can be assumed that the coefficient of vertical diffusion, A/ρ, is constant. For such a system the condition for a stationary distribution of s (∂s/∂t = 0) is reduced to vx(∂s/∂x) = (A/ρ)(∂²s/∂z²). This relation has been used by Defant (1929) and Thorade (1931) for studying the character of stationary distributions and by Defant (1936) for computing the ratio A/vx from observed distributions.
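A numerical illustration of this reduced equation: treating x as a time-like marching coordinate, the balance vx(∂s/∂x) = (A/ρ)(∂²s/∂z²) can be stepped downstream with simple finite differences, and the downstream spreading of a vertical profile depends only on the ratio (A/ρ)/vx, which is what Defant estimated from observed distributions. The grid sizes and the value of the ratio below are arbitrary illustrative choices.

```python
# Finite-difference sketch of the steady balance  vx * ds/dx = (A/rho) * d2s/dz2.
# x is treated like time; only the ratio (A/rho)/vx enters the scheme.
K_over_vx = 0.1         # metres; illustrative value of (A/rho)/vx, not an observed one
dz, dx = 50.0, 1.0e4    # vertical step (m) and downstream step (m)
nz, nx = 41, 200
r = K_over_vx * dx / dz**2      # = 0.4 here; must stay below 0.5 for this explicit scheme

s = [0.0] * nz
s[nz // 2] = 1.0                # anomaly concentrated at mid-depth at x = 0

for _ in range(nx):             # march downstream; the boundary values are held at zero
    s = [s[k] + r * (s[k + 1] - 2 * s[k] + s[k - 1]) if 0 < k < nz - 1 else 0.0
         for k in range(nz)]

print(max(s))   # the profile flattens downstream; a larger (A/rho)/vx spreads it faster
```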
As another example, consider a uniform field for which ∂s/∂t = ds/dt and assume that Ax = Ay = 0. The above equations are then reduced to ∂s/∂t = ds/dt = ∂/∂z[(Az/ρ)(∂s/∂z)].
Distribution of Nonconservative Concentrations
By nonconservative concentrations are meant primarily concentrations whose distributions are influenced by biological processes besides the physical processes of diffusion and advection.
The local time change of concentration due to biological processes will be called R. Adding this quantity at the right-hand side of equation (V, 5), one can state the corresponding equation for nonconservative concentrations, (V, 9); a similar term can be added to equation (V, 6) (Seiwell, 1937, Sverdrup and Fleming, 1941).
The Principle of Dynamic Equilibrium
Experience shows that in a large body of water comparable, say, to the body of water in the Mediterranean Sea, the average conditions do not change from one year to another. The average distribution of temperature remains unaltered year after year, and the same is true as to the average salinity, oxygen content, and contents of minor constituents. If time intervals longer than a year are considered, say ten-year periods, it is probable that even the average number of different species of organisms remains unaltered, provided that the nonaquatic animal, man, does not upset conditions by exterminating certain species and depleting the stock of others. These unchanging conditions represent a state of delicate dynamic equilibrium between factors that always tend to alter the picture in different directions.
In dealing with conservative concentrations, diffusion and advection are at balance except at the sea surface, where external processes contribute toward maintaining the concentration at a certain level. This was illustrated when discussing the general distribution of surface salinity (p. 125), which was shown to depend on two terms, one that represents the external processes of evaporation and precipitation, and one that represents the internal processes of diffusion and advection. Similarly, the surface temperature depends upon heating and cooling by processes of radiation and by exchange with the atmosphere and upon conduction and advection of heat.
In a study of the subsurface distribution of temperature and salinity, it is not necessary to know the processes that maintain the surface values, but it is sufficient to determine these values empirically. If this could be done and if the processes of diffusion and the currents were known, the general distribution of temperature and salinity could be computed. Conversely, if these distributions were known, information as to diffusion and currents could be obtained. In oceanography only the latter method of approach has been employed.
When nonconservative concentrations are dealt with, the principle of a dynamic equilibrium implies that the effects of diffusion, advection, and biological processes cancel. Of the nonconservative concentrations, only the dissolved gases are greatly influenced by the contact with the atmosphere, and other nonconservative concentrations are practically unaltered by external processes.
Application of the principle of dynamic equilibrium can be illustrated by considering the distribution of oxygen. Below the euphotic zone, biological processes that influence the oxygen content always lead to a consumption of oxygen, and the processes of diffusion and advection therefore must lead to a replenishment that exactly balances the consumption. No further conclusions can be drawn. This obvious consideration has been overlooked, however, and some authors have interpreted a layer of minimum oxygen content as a layer of minimum replenishment (Wüst, 1935), while others have considered it a layer of maximum consumption (Wattenberg, 1938).
Conclusions as to the rapidity of consumption (and replenishment) could be drawn from the known distribution of oxygen only if the consumption depended upon the absolute content of oxygen, but the consumption appears to be independent of the oxygen content until this has been reduced to nearly nil (ZoBell, 1940). When all oxygen has been removed, consumption and replenishment must both be zero, and even this obvious conclusion should not be overlooked.
In certain instances a relation may exist between the oxygen distribution and the character of the current. Assume that a nearly horizontal internal boundary exists which separates currents flowing in opposite directions, that diffusion takes place in a vertical direction only, and that the coefficient of diffusion is independent of z. When dynamic equilibrium exists, equation (V, 9) is then reduced to
Similar reasoning is valid when dealing with compounds that are removed from the water by organisms for building up their tissues and are returned to solution as metabolic products or by decomposition of organic tissues. A balance is maintained, but in many cases it is not correct to speak of “replenishment” by advection and diffusion, as in the case of oxygen, because the biological processes may lead to a net replenishment, in which case the physical processes must take care of a corresponding removal. Thus, in the deeper layers phosphates and nitrates are added to the water by decomposition of organic matter and removed by diffusion and advection.
When dealing with populations, similar considerations enter. It must be emphasized especially that the number of organisms present in unit volume of water gives no information as to the processes that operate toward changing the number. A small population of diatoms, say, may divide very rapidly without increasing in number, owing to the presence of grazers that consume diatoms. On the other hand, a large population of diatoms may not indicate a rapid production of organic matter, because further growth may be impossible owing to lack of nutrient salts in the water. The terms "population" and "production" have to be clearly defined and kept separate. ("Population" represents concentration, whereas "production" represents one of the processes that alter the concentration.)
Another warning appears to be appropriate—namely, a warning against confusion between individual and local changes (p. 157). From the fact that a local population remains unaltered, it cannot be concluded that the population within the water which passes the locality of observation also remains constant—that is, that the individual time change is zero. Similarly, if a sudden change in population is observed in a given locality, it cannot be concluded that the processes which have been active in that locality have led to a rapid growth, because it is equally possible that a new water mass of other characteristics is passing the locality.
If the external influences were clear, if processes of diffusion and advection were known, and if biological and organic chemical processes were fully understood, the distribution of all concentrations could be accounted for. It would then be possible not only to explain the average distribution but also to account for all periodic and apparently random changes. This is the distant goal, but when working toward it one must be fully aware of the limitations of the different methods of approach.
Thus, complete description of the oxygen distribution below the euphotic zone is theoretically possible if the oxygen content in the surface layer, the processes of diffusion and advection, and the rate of consumption at all depths were known.
The dynamic equilibrium, the importance of which has been stressed, exists only insofar as average conditions within a large body of water and over a considerable length of time are concerned. During any part of the day or year the external or internal processes may be subject to periodic or random variation such that at a given moment no equilibrium exists (∂ s/∂ t ≢ 0). At the surface, heating periodically exceeds cooling, and cooling periodically exceeds heating, as a result of which the surface temperature is subjected to diurnal and annual variations that by processes of conduction are transmitted to greater depths. It is possible that longer periods exist which are related to periodic changes in the energy received from the sun, but these long-period variations are of small amplitudes. In many areas, shifts of currents lead to local changes of the temperature which are periodic in character if the shifts are associated with the seasons, or nonperiodic if they are related to apparently random events. In the discussion of the annual variation of temperature (p. 131) the effect of these different processes was illustrated. Similar reasoning is applicable to periodic and random variations of salinity and also to variations of nonconservative properties.
From what has been stated it is evident that in the discussion of the distribution of concentrations in the sea it is as yet impossible to apply a method of deduction based on knowledge of all processes involved in maintaining the distribution. Instead, one has to follow a winding course, discuss processes and their effects whenever possible, discuss actual distributions if such have been determined, and either interpret these distributions by means of knowledge gained from other sources as to acting processes or draw conclusions as to these processes from the distribution. In some instances the processes that maintain the boundary values can be dealt with at considerable length, but otherwise the observed boundary values have to be accepted without attempts at explanation. In all cases, however, it is essential to bear in mind that one is dealing with concentrations in a continuous medium and that general considerations as set forth here are always applicable. | http://publishing.cdlib.org/ucpressebooks/view?docId=kt167nb66r&doc.view=content&chunk.id=ch05&toc.depth=1&anchor.id=0&brand=eschol | 13 |
61 | Complementary and supplementary angles Basics of complementary, supplementary, adjacent and straight angles. Also touching on what it means to be perpendicular
Complementary and supplementary angles
⇐ Use this menu to view and help create subtitles for this video in many different languages. You'll probably want to hide YouTube's captions if using these subtitles.
- Let's say I have an angle ABC, and it looks something like this, so its vertex is going to be at 'B',
- Maybe 'A' sits right over here, and 'C' sits right over there.
- And then also let's say we have another angle called DAB, actually let me call it DBA,
- I want to have the vertex once again at 'B'.
- So let's say it looks like this, so this right over here is our point 'D'.
- And let's say we know the measure of angle DBA, let's say we know that that's equal to 40 degrees.
- So this angle right over here, its measure is equal to 40 degrees,
- And let's say we know that the measure of angle ABC is equal to 50 degrees.
- Right, so there's a bunch of interesting things happening over here,
- the first interesting thing that you might realize is that both of these angles
- share a side, if you view these as rays, they could be lines,
- line segments or rays, but if you view them as rays,
- then they both share the ray BA, and when you have two angles
- like this that share the same side, these are called adjacent angles
- because the word adjacent literally means 'next to'.
- Adjacent, these are adjacent angles.
- Now there's something else you might notice that's interesting here,
- we know that the measure of angle DBA is 40 degrees
- and the measure of angle ABC is 50 degrees
- and you might be able to guess what the measure of angle DBC is,
- the measure of angle DBC, if we drew a protractor over here
- I'm not going to draw it, it will make my drawing all messy,
- but if we, well I'll draw it really fast,
- So, if we had a protractor over here, clearly this is opening up to 50 degrees,
- and this is going another 40 degrees, so if you wanted to say
- what the measure of angle DBC is,
- it would be, it would essentially be the sum of 40 degrees and 50 degrees.
- And let me delete all this stuff right here, to keep things clean,
- So the measure of angle DBC would be equal to 90 degrees
- and we already know that 90 degrees is a special angle,
- this is a right angle, this is a right angle.
- There's also a word for two angles whose sum add to 90 degrees,
- and that is complementary.
- So we can also say that angle DBA and angle ABC are complementary.
- And that is because their measures add up to 90 degrees,
- So the measure of angle DBA plus the measure of angle ABC,
- is equal to 90 degrees, they form a right angle when you add them up.
- And just as another point of terminology, that's kind of related to right angles,
- when you form, when a right angle is formed, the two rays that form the right angle,
- or the two lines that form that right angle, or the two line segments,
- are called perpendicular.
- So because we know the measure of angle DBC is 90 degrees,
- or that angle DBC is a right angle, this tells us
- that DB, if I call them, maybe the line segment DB is
- perpendicular, is perpendicular to line segment BC,
- or we could even say that ray BD, is instead of using the word perpendicular
- there is sometimes this symbol right here, which just shows two perpendicular lines,
- DB is perpendicular to BC
- So all of these are true statements here,
- and these come out of the fact that the angle formed between DB and BC
- that is a 90 degree angle.
- Now we have other words when our two angles add up to other things,
- so let's say for example I have one angle over here,
- that is, I'll just make up, let's just call this angle,
- let me just put some letters here to specify, 'X', 'Y' and 'Z'.
- Let's say that the measure of angle XYZ is equal to 60 degrees,
- and let's say you have another angle, that looks like this,
- and I'll call this, maybe 'M', 'N', 'O',
- and let's say that the measure of angle MNO is 120 degrees.
- So if you were to add the two measures of these, so let me write this down,
- the measure of angle MNO plus the measure of angle XYZ,
- is equal to, this is going to be equal to 120 degrees plus 60 degrees.
- Which is equal to 180 degrees, so if you add these two things up,
- you're essentially able to go halfway around the circle.
- Or throughout the entire half circle, or a semi-circle for a protractor.
- And when you have two angles that add up to 180 degrees, we call them supplementary angles
- I know it's a little hard to remember sometimes, 90 degrees is complementary,
- there are two angles complementing each other,
- and then if you add up to 180 degrees, you have supplementary angles,
- and if you have two supplementary angles that are adjacent,
- so they share a common side, so let me draw that over here,
- So let's say you have one angle that looks like this,
- And that you have another angle, so so let me put some letters here again,
- and I'll start re-using letters,
- so this is 'A', 'B', 'C', and you have another angle that looks like this,
- that looks like this, I already used 'C', that looks like this
- notice and let's say once again that this is 50 degrees,
- and this right over here is 130 degrees,
- clearly angle DBA plus angle ABC, if you add them together,
- you get 180 degrees.
- So they are supplementary, let me write that down,
- Angle DBA and angle ABC are supplementary,
- they add up to 180 degrees, but they are also adjacent angles,
- they are also adjacent, and because they are supplementary and they're adjacent,
- if you look at the broader angle, the angle formed from the sides they don't have in common,
- if you look at angle DBC, this is going to be essentially a straight line,
- which we can call a straight angle.
- So I've introduced you to a bunch of words here and now I think
- we have all of the tools we need to start doing some interesting proofs,
- and just to review here we talked about adjacent angles, and I guess any angles
- that add up to 90 degrees are considered to be complementary,
- this is adding up to 90 degrees.
- If they happen to be adjacent then the two outside sides will form a right angle,
- when you have a right angle the two sides of a right angle are considered to be perpendicular.
- And then if you have two angles that add up to 180 degrees,
- they are considered supplementary, and then if they happen to be adjacent,
- they will form a straight angle.
- Or another way of saying it is that if you have a straight angle,
- and you have one of the angles, the other angle
- is going to be supplementary to it, they're going to add up to 180 degrees.
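As a quick check on the definitions reviewed in this video, here is a short Python sketch (not part of the original transcript); the function name and the exact tolerance are just illustrative choices.

```python
def classify_angle_pair(a_deg, b_deg, tol=1e-9):
    """Classify two angle measures (in degrees) by their sum."""
    total = a_deg + b_deg
    if abs(total - 90) < tol:
        return "complementary"   # together they form a right angle
    if abs(total - 180) < tol:
        return "supplementary"   # together they form a straight angle
    return "neither"

print(classify_angle_pair(40, 50))    # complementary, as in the DBA/ABC example
print(classify_angle_pair(50, 130))   # supplementary
print(classify_angle_pair(60, 120))   # supplementary, as in the XYZ/MNO example
```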
| http://www.khanacademy.org/math/geometry/parallel-and-perpendicular-lines/Angle_basics/v/complementary-and-supplementary-angles | 13
69 | Centuries ago, it was discovered that certain types of mineral rock possessed unusual properties of attraction to the metal iron. One particular mineral, called lodestone, or magnetite, is found mentioned in very old historical records (about 2500 years ago in Europe, and much earlier in the Far East) as a subject of curiosity. Later, it was employed in the aid of navigation, as it was found that a piece of this unusual rock would tend to orient itself in a north-south direction if left free to rotate (suspended on a string or on a float in water). A scientific study undertaken in 1269 by Peter Peregrinus revealed that steel could be similarly "charged" with this unusual property after being rubbed against one of the "poles" of a piece of lodestone.
Unlike electric charges (such as those observed when amber is rubbed against cloth), magnetic objects possessed two poles of opposite effect, denoted "north" and "south" after their self-orientation to the earth. As Peregrinus found, it was impossible to isolate one of these poles by itself by cutting a piece of lodestone in half: each resulting piece possessed its own pair of poles:
Like electric charges, there were only two types of poles to be found: north and south (by analogy, positive and negative). Just as with electric charges, same poles repel one another, while opposite poles attract. This force, like that caused by static electricity, extended itself invisibly over space, and could even pass through objects such as paper and wood with little effect upon strength.
The philosopher-scientist Rene Descartes noted that this invisible "field" could be mapped by placing a magnet underneath a flat piece of cloth or wood and sprinkling iron filings on top. The filings will align themselves with the magnetic field, "mapping" its shape. The result shows how the field continues unbroken from one pole of a magnet to the other:
As with any kind of field (electric, magnetic, gravitational), the total quantity, or effect, of the field is referred to as a flux, while the "push" causing the flux to form in space is called a force. Michael Faraday coined the term "tube" to refer to a string of magnetic flux in space (the term "line" is more commonly used now). Indeed, the measurement of magnetic field flux is often defined in terms of the number of flux lines, although it is doubtful that such fields exist in individual, discrete lines of constant value.
Modern theories of magnetism maintain that a magnetic field is produced by an electric charge in motion, and thus it is theorized that the magnetic field of so-called "permanent" magnets such as lodestone is the result of electrons within the atoms of iron spinning uniformly in the same direction. Whether or not the electrons in a material's atoms are subject to this kind of uniform spinning is dictated by the atomic structure of the material (not unlike how electrical conductivity is dictated by the electron binding in a material's atoms). Thus, only certain types of substances react with magnetic fields, and even fewer have the ability to permanently sustain a magnetic field.
Iron is one of those types of substances that readily magnetizes. If a piece of iron is brought near a permanent magnet, the electrons within the atoms in the iron orient their spins to match the magnetic field force produced by the permanent magnet, and the iron becomes "magnetized." The iron will magnetize in such a way as to incorporate the magnetic flux lines into its shape, which attracts it toward the permanent magnet, no matter which pole of the permanent magnet is offered to the iron:
The previously unmagnetized iron becomes magnetized as it is brought closer to the permanent magnet. No matter what pole of the permanent magnet is extended toward the iron, the iron will magnetize in such a way as to be attracted toward the magnet:
Referencing the natural magnetic properties of iron (Latin = "ferrum"), a ferromagnetic material is one that readily magnetizes (its constituent atoms easily orient their electron spins to conform to an external magnetic field force). All materials are magnetic to some degree, and those that are not considered ferromagnetic (easily magnetized) are classified as either paramagnetic (slightly magnetic) or diamagnetic (tend to exclude magnetic fields). Of the two, diamagnetic materials are the strangest. In the presence of an external magnetic field, they actually become slightly magnetized in the opposite direction, so as to repel the external field!
If a ferromagnetic material tends to retain its magnetization after an external field is removed, it is said to have good retentivity. This, of course, is a necessary quality for a permanent magnet.
The discovery of the relationship between magnetism and electricity was, like so many other scientific discoveries, stumbled upon almost by accident. The Danish physicist Hans Christian Oersted was lecturing one day in 1820 on the possibility of electricity and magnetism being related to one another, and in the process demonstrated it conclusively by experiment in front of his whole class! By passing an electric current through a metal wire suspended above a magnetic compass, Oersted was able to produce a definite motion of the compass needle in response to the current. What began as conjecture at the start of the class session was confirmed as fact at the end. Needless to say, Oersted had to revise his lecture notes for future classes! His serendipitous discovery paved the way for a whole new branch of science: electromagnetics.
Detailed experiments showed that the magnetic field produced by an electric current is always oriented perpendicular to the direction of flow. A simple method of showing this relationship is called the left-hand rule. Simply stated, the left-hand rule says that the magnetic flux lines produced by a current-carrying wire will be oriented the same direction as the curled fingers of a person's left hand (in the "hitchhiking" position), with the thumb pointing in the direction of electron flow:
The magnetic field encircles this straight piece of current-carrying wire, the magnetic flux lines having no definite "north" or "south" poles.
While the magnetic field surrounding a current-carrying wire is indeed interesting, it is quite weak for common amounts of current, able to deflect a compass needle and not much more. To create a stronger magnetic field force (and consequently, more field flux) with the same amount of electric current, we can wrap the wire into a coil shape, where the circling magnetic fields around the wire will join to create a larger field with a definite magnetic (north and south) polarity:
The amount of magnetic field force generated by a coiled wire is proportional to the current through the wire multiplied by the number of "turns" or "wraps" of wire in the coil. This field force is called magnetomotive force (mmf), and is very much analogous to electromotive force (E) in an electric circuit.
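To make the proportionality concrete, here is a minimal Python sketch (not from the original text, with arbitrary example values) computing magnetomotive force in ampere-turns:

```python
def magnetomotive_force(current_amps, turns):
    """mmf of a coil, proportional to current times the number of turns (ampere-turns)."""
    return current_amps * turns

# Example: 2 A flowing through a 150-turn coil
print(magnetomotive_force(2.0, 150))  # 300.0 ampere-turns
```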
An electromagnet is a piece of wire intended to generate a magnetic field with the passage of electric current through it. Though all current-carrying conductors produce magnetic fields, an electromagnet is usually constructed in such a way as to maximize the strength of the magnetic field it produces for a special purpose. Electromagnets find frequent application in research, industry, medical, and consumer products.
As an electrically-controllable magnet, electromagnets find application in a wide variety of "electromechanical" devices: machines that effect mechanical force or motion through electrical power. Perhaps the most obvious example of such a machine is the electric motor.
Another example is the relay, an electrically-controlled switch. If a switch contact mechanism is built so that it can be actuated (opened and closed) by the application of a magnetic field, and an electromagnet coil is placed in the near vicinity to produce that requisite field, it will be possible to open and close the switch by the application of a current through the coil. In effect, this gives us a device that enables electricity to control electricity:
Relays can be constructed to actuate multiple switch contacts, or operate them in "reverse" (energizing the coil will open the switch contact, and unpowering the coil will allow it to spring closed again).
If the burden of two systems of measurement for common quantities (English vs. metric) throws your mind into confusion, this is not the place for you! Due to an early lack of standardization in the science of magnetism, we have been plagued with no less than three complete systems of measurement for magnetic quantities.
First, we need to become acquainted with the various quantities associated with magnetism. There are quite a few more quantities to be dealt with in magnetic systems than for electrical systems. With electricity, the basic quantities are Voltage (E), Current (I), Resistance (R), and Power (P). The first three are related to one another by Ohm's Law (E=IR ; I=E/R ; R=E/I), while Power is related to voltage, current, and resistance by Joule's Law (P=IE ; P=I²R ; P=E²/R).
With magnetism, we have the following quantities to deal with:
Magnetomotive Force -- The quantity of magnetic field force, or "push." Analogous to electric voltage (electromotive force).
Field Flux -- The quantity of total field effect, or "substance" of the field. Analogous to electric current.
Field Intensity -- The amount of field force (mmf) distributed over the length of the electromagnet. Sometimes referred to as Magnetizing Force.
Flux Density -- The amount of magnetic field flux concentrated in a given area.
Reluctance -- The opposition to magnetic field flux through a given volume of space or material. Analogous to electrical resistance.
Permeability -- The specific measure of a material's acceptance of magnetic flux, analogous to the specific resistance of a conductive material (ρ), except inverse (greater permeability means easier passage of magnetic flux, whereas greater specific resistance means more difficult passage of electric current).
But wait . . . the fun is just beginning! Not only do we have more quantities to keep track of with magnetism than with electricity, but we have several different systems of unit measurement for each of these quantities. As with common quantities of length, weight, volume, and temperature, we have both English and metric systems. However, there is actually more than one metric system of units, and multiple metric systems are used in magnetic field measurements! One is called the cgs, which stands for Centimeter-Gram-Second, denoting the root measures upon which the whole system is based. The other was originally known as the mks system, which stood for Meter-Kilogram-Second, which was later revised into another system, called rmks, standing for Rationalized Meter-Kilogram-Second. This ended up being adopted as an international standard and renamed SI (Systeme International).
And yes, the µ symbol is really the same as the metric prefix "micro." I find this especially confusing, using the exact same alphabetical character to symbolize both a specific quantity and a general metric prefix!
As you might have guessed already, the relationship between field force, field flux, and reluctance is much the same as that between the electrical quantities of electromotive force (E), current (I), and resistance (R). This provides something akin to an Ohm's Law for magnetic circuits:
And, given that permeability is inversely analogous to specific resistance, the equation for finding the reluctance of a magnetic material is very similar to that for finding the resistance of a conductor:
In either case, a longer piece of material provides a greater opposition, all other factors being equal. Also, a larger cross-sectional area makes for less opposition, all other factors being equal.
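The two relationships just described can be sketched numerically. The following Python snippet is only an illustration of their form (SI units assumed; the permeability and dimensions are made-up placeholder values, and, as noted next, real core materials are not this linear):

```python
def reluctance(length_m, permeability, area_m2):
    """Opposition to flux: grows with path length, shrinks with permeability and area."""
    return length_m / (permeability * area_m2)

def flux(mmf_ampere_turns, reluctance_value):
    """Magnetic analog of Ohm's Law: flux = mmf / reluctance."""
    return mmf_ampere_turns / reluctance_value

R = reluctance(length_m=0.2, permeability=5e-3, area_m2=1e-4)  # placeholder core
print(flux(mmf_ampere_turns=300, reluctance_value=R))          # webers (illustrative)
```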
The major caveat here is that the reluctance of a material to magnetic flux actually changes with the concentration of flux going through it. This makes the "Ohm's Law" for magnetic circuits nonlinear and far more difficult to work with than the electrical version of Ohm's Law. It would be analogous to having a resistor that changed resistance as the current through it varied (a circuit composed of varistors instead of resistors).
The nonlinearity of material permeability may be graphed for better understanding. We'll place the quantity of field intensity (H), equal to field force (mmf) divided by the length of the material, on the horizontal axis of the graph. On the vertical axis, we'll place the quantity of flux density (B), equal to total flux divided by the cross-sectional area of the material. We will use the quantities of field intensity (H) and flux density (B) instead of field force (mmf) and total flux (Φ) so that the shape of our graph remains independent of the physical dimensions of our test material. What we're trying to do here is show a mathematical relationship between field force and flux for any chunk of a particular substance, in the same spirit as describing a material's specific resistance in ohm-cmil/ft instead of its actual resistance in ohms.
This is called the normal magnetization curve, or B-H curve, for any particular material. Notice how the flux density for any of the above materials (cast iron, cast steel, and sheet steel) levels off with increasing amounts of field intensity. This effect is known as saturation. When there is little applied magnetic force (low H), only a few atoms are in alignment, and the rest are easily aligned with additional force. However, as more flux gets crammed into the same cross-sectional area of a ferromagnetic material, fewer atoms are available within that material to align their electrons with additional force, and so it takes more and more force (H) to get less and less "help" from the material in creating more flux density (B). To put this in economic terms, we're seeing a case of diminishing returns (B) on our investment (H). Saturation is a phenomenon limited to iron-core electromagnets. Air-core electromagnets don't saturate, but on the other hand they don't produce nearly as much magnetic flux as a ferromagnetic core for the same number of wire turns and current.
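For reference, the two quantities plotted on such a curve are simple ratios of the quantities defined earlier; here is a brief sketch with placeholder numbers rather than measured data:

```python
def field_intensity(mmf_ampere_turns, path_length_m):
    """H: field force (mmf) spread over the length of the magnetic path."""
    return mmf_ampere_turns / path_length_m

def flux_density(total_flux_webers, area_m2):
    """B: total flux concentrated over a cross-sectional area (teslas)."""
    return total_flux_webers / area_m2

print(field_intensity(300, 0.2))   # ampere-turns per meter
print(flux_density(7.5e-4, 1e-4))  # teslas
```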
Another quirk to confound our analysis of magnetic flux versus force is the phenomenon of magnetic hysteresis. As a general term, hysteresis means a lag between input and output in a system upon a change in direction. Anyone who's ever driven an old automobile with "loose" steering knows what hysteresis is: to change from turning left to turning right (or vice versa), you have to rotate the steering wheel an additional amount to overcome the built-in "lag" in the mechanical linkage system between the steering wheel and the front wheels of the car. In a magnetic system, hysteresis is seen in a ferromagnetic material that tends to stay magnetized after an applied field force has been removed (see "retentivity" in the first section of this chapter), if the force is reversed in polarity.
Let's use the same graph again, only extending the axes to indicate both positive and negative quantities. First we'll apply an increasing field force (current through the coils of our electromagnet). We should see the flux density increase (go up and to the right) according to the normal magnetization curve:
Next, we'll stop the current going through the coil of the electromagnet and see what happens to the flux, leaving the first curve still on the graph:
Due to the retentivity of the material, we still have a magnetic flux with no applied force (no current through the coil). Our electromagnet core is acting as a permanent magnet at this point. Now we will slowly apply the same amount of magnetic field force in the opposite direction to our sample:
The flux density has now reached a point equivalent to what it was with a full positive value of field intensity (H), except in the negative, or opposite, direction. Let's stop the current going through the coil again and see how much flux remains:
Once again, due to the natural retentivity of the material, it will hold a magnetic flux with no power applied to the coil, except this time it's in a direction opposite to that of the last time we stopped current through the coil. If we re-apply power in a positive direction again, we should see the flux density reach its prior peak in the upper-right corner of the graph again:
The "S"-shaped curve traced by these steps form what is called the hysteresis curve of a ferromagnetic material for a given set of field intensity extremes (-H and +H). If this doesn't quite make sense, consider a hysteresis graph for the automobile steering scenario described earlier, one graph depicting a "tight" steering system and one depicting a "loose" system:
Just as in the case of automobile steering systems, hysteresis can be a problem. If you're designing a system to produce precise amounts of magnetic field flux for given amounts of current, hysteresis may hinder this design goal (due to the fact that the amount of flux density would depend on the current and how strongly it was magnetized before!). Similarly, a loose steering system is unacceptable in a race car, where precise, repeatable steering response is a necessity. Also, having to overcome prior magnetization in an electromagnet can be a waste of energy if the current used to energize the coil is alternating back and forth (AC). The area within the hysteresis curve gives a rough estimate of the amount of this wasted energy.
Other times, magnetic hysteresis is a desirable thing. Such is the case when magnetic materials are used as a means of storing information (computer disks, audio and video tapes). In these applications, it is desirable to be able to magnetize a speck of iron oxide (ferrite) and rely on that material's retentivity to "remember" its last magnetized state. Another productive application for magnetic hysteresis is in filtering high-frequency electromagnetic "noise" (rapidly alternating surges of voltage) from signal wiring by running those wires through the middle of a ferrite ring. The energy consumed in overcoming the hysteresis of ferrite attenuates the strength of the "noise" signal. Interestingly enough, the hysteresis curve of ferrite is quite extreme:
While Oersted's surprising discovery of electromagnetism paved the way for more practical applications of electricity, it was Michael Faraday who gave us the key to the practical generation of electricity: electromagnetic induction. Faraday discovered that a voltage would be generated across a length of wire if that wire was exposed to a perpendicular magnetic field flux of changing intensity.
An easy way to create a magnetic field of changing intensity is to move a permanent magnet next to a wire or coil of wire. Remember: the magnetic field must increase or decrease in intensity perpendicular to the wire (so that the lines of flux "cut across" the conductor), or else no voltage will be induced:
Faraday was able to mathematically relate the rate of change of the magnetic field flux with induced voltage (note the use of a lower-case letter "e" for voltage. This refers to instantaneous voltage, or voltage at a specific point in time, rather than a steady, stable voltage.):
The "d" terms are standard calculus notation, representing rate-of-change of flux over time. "N" stands for the number of turns, or wraps, in the wire coil (assuming that the wire is formed in the shape of a coil for maximum electromagnetic efficiency).
This phenomenon is put into obvious practical use in the construction of electrical generators, which use mechanical power to move a magnetic field past coils of wire to generate voltage. However, this is by no means the only practical use for this principle.
If we recall that the magnetic field produced by a current-carrying wire was always perpendicular to that wire, and that the flux intensity of that magnetic field varied with the amount of current through it, we can see that a wire is capable of inducing a voltage along its own length simply due to a change in current through it. This effect is called self-induction: a changing magnetic field produced by changes in current through a wire inducing voltage along the length of that same wire. If the magnetic field flux is enhanced by bending the wire into the shape of a coil, and/or wrapping that coil around a material of high permeability, this effect of self-induced voltage will be more intense. A device constructed to take advantage of this effect is called an inductor, and will be discussed in greater detail in the next chapter.
If two coils of wire are brought into close proximity with each other so the magnetic field from one links with the other, a voltage will be generated in the second coil as a result. This is called mutual inductance: when voltage impressed upon one coil induces a voltage in another.
A device specifically designed to produce the effect of mutual inductance between two or more coils is called a transformer.
The device shown in the above photograph is a kind of transformer, with two concentric wire coils. It is actually intended as a precision standard unit for mutual inductance, but for the purposes of illustrating what the essence of a transformer is, it will suffice. The two wire coils can be distinguished from each other by color: the bulk of the tube's length is wrapped in green-insulated wire (the first coil) while the second coil (wire with bronze-colored insulation) stands in the middle of the tube's length. The wire ends run down to connection terminals at the bottom of the unit. Most transformer units are not built with their wire coils exposed like this.
Because magnetically-induced voltage only happens when the magnetic field flux is changing in strength relative to the wire, mutual inductance between two coils can only happen with alternating (changing -- AC) voltage, and not with direct (steady -- DC) voltage. The only application for mutual inductance in a DC system is where some means is available to switch power on and off to the coil (thus creating a pulsing DC voltage), the induced voltage peaking at every pulse.
A very useful property of transformers is the ability to transform voltage and current levels according to a simple ratio, determined by the ratio of input and output coil turns. If the energized coil of a transformer is energized by an AC voltage, the amount of AC voltage induced in the unpowered coil will be equal to the input voltage multiplied by the ratio of output to input wire turns in the coils. Conversely, the current through the windings of the output coil compared to the input coil will follow the opposite ratio: if the voltage is increased from input coil to output coil, the current will be decreased by the same proportion. This action of the transformer is analogous to that of mechanical gear, belt sheave, or chain sprocket ratios:
A transformer designed to output more voltage than it takes in across the input coil is called a "step-up" transformer, while one designed to do the opposite is called a "step-down," in reference to the transformation of voltage that takes place. The current through each respective coil, of course, follows the exact opposite proportion.
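These ratios can be captured in a short, idealized Python sketch (a lossless transformer with made-up turn counts, not a model of any particular device):

```python
def ideal_transformer(v_in, i_in, turns_in, turns_out):
    """Ideal transformer: voltage scales with the turns ratio,
    current scales inversely, so power is conserved."""
    ratio = turns_out / turns_in
    return v_in * ratio, i_in / ratio

v_out, i_out = ideal_transformer(v_in=120.0, i_in=2.0, turns_in=100, turns_out=500)
print(v_out, i_out)  # 600.0 V and 0.4 A -- a "step-up" example
```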
Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information.
Jason Starck (June 2000): HTML document formatting, which led to a much better-looking second edition.
Lessons In Electric Circuits copyright (C) 2000-2013 Tony R. Kuphaldt, under the terms and conditions of the Design Science License. | http://www.ibiblio.org/kuphaldt/electricCircuits/DC/DC_14.html | 13 |
65 | Statics/Forces As Vectors
Vectors, Chapter 1.1 - 1.7
Scalars and Vectors and force
A scalar is a quantity possessing only a magnitude. Examples include mass, volume, and length. In this book, scalars are represented by letters in italic type. Scalar quantities may be manipulated following the rules of simple algebra.
A vector is a quantity that has both a magnitude and a direction. Examples include velocity, position, and force. In this book, vectors are represented by letters with arrows over them. Vector quantities are manipulated using vector mathematics, which is described in some detail in the following section.
Examples of Vector vs. Scalar Quantities
Velocity vs. Speed
Consider a car traveling South at a speed of 110 km per hour.
We can describe the motion of the car as a velocity with a magnitude of 110 km per hour and a direction of south. The velocity is a vector because it indicates magnitude and direction.
The motion of the car could also be described as a scalar by saying the speed is 110 km per hour and ignoring the direction. Speed is a scalar because it consists of only a magnitude.
An applied force is a vector quantity having both a magnitude and a direction.
Consider a hot air balloon suspended over a farmer's field at a constant altitude. The buoyant force on the balloon pushes the balloon up. At the same time, gravity exerts a force on the balloon pulling the balloon down.
The buoyant force and the force due to gravity act in opposite directions. If they have the same magnitude, the balloon will remain suspended at the same altitude. If the buoyant force is greater than the force due to gravity, the balloon will rise. If the force due to gravity is greater than the buoyant force, then the balloon will fall.
Vector Representation (Two Dimensions)
Forces, and any other vector, may be represented in a number of ways.
A vector may be represented graphically by an arrow. The magnitude of the vector corresponds to the length of the arrow, and the direction of the vector corresponds to the angle between the arrow and a coordinate axis. The sense of the direction is indicated by the arrowhead.
Polar Notation
In polar notation, the vector is represented by the magnitude of the vector, and its angle from the coordinate axis, in the form:
Component Notation
In component notation, a vector is represented by the magnitude of the components of the vector along the coordinate axes.
Scalar Notation
The components of the vector are represented as scalar values which are positive if their sense is in the same direction as the coordinate axis, and negative if their sense is opposite of the coordinate axis. For a vector with a positive x-component and a negative y-component:
Cartesian Notation
The components of the vector are represented as positive scalar values multiplied by cartesian unit vectors. Cartesian unit vectors are vectors with a magnitude of one that represent the direction of the coordinate axes. One unit vector represents the direction of the x-axis, and another represents the direction of the y-axis. The vector's sense is indicated by the sign of the unit vector. For a vector with a positive x-component and a negative y-component:
Forces as Vectors
In engineering statics, we often convert forces into component notation. Replacing a force with its components makes it easier to compute the resultant of a group of forces acting on a body. Conversion of a force from polar to component notation is accomplished by the following transformations:
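The transformation is the usual trigonometric resolution: the x-component is the magnitude times the cosine of the angle measured from the x-axis, and the y-component is the magnitude times the sine of that angle. A small Python sketch of this conversion (the function name is just illustrative):

```python
import math

def to_components(magnitude, angle_deg):
    """Resolve a 2-D force given in polar form into its x and y components."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

# The 100 N force at 30 degrees used in Example 1 below
fx, fy = to_components(100.0, 30.0)
print(round(fx, 1), round(fy, 1))  # 86.6 N along x, 50.0 N along y
```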
Example 1
Consider a force with a magnitude of 100 N acting in the x-y plane. This force acts at an angle of 30 degrees to the x-axis. What is this force in component notation?
We can replace this force with a pair of forces acting along the x-axis and the y-axis as follows.
Example 2
Two forces acting in the x-y plane are acting on a point. The first force is 100 N at an angle of 0 degrees. The second force is 50 N acting at an angle of 60 degrees. What is the resultant?
First, resolve the forces into their x and y components.
Sum all the forces in the x-direction.
Sum all the forces in the y-direction.
Finally, convert the resultant components back into polar notation.
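Carrying the steps out numerically (a sketch consistent with this example, not part of the original page):

```python
import math

forces = [(100.0, 0.0), (50.0, 60.0)]  # (magnitude in N, angle in degrees)

# Resolve each force into components and sum them
rx = sum(f * math.cos(math.radians(a)) for f, a in forces)
ry = sum(f * math.sin(math.radians(a)) for f, a in forces)

# Convert the resultant back into polar notation
magnitude = math.hypot(rx, ry)
angle_deg = math.degrees(math.atan2(ry, rx))
print(round(magnitude, 1), round(angle_deg, 1))  # 132.3 N at 19.1 degrees
```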
The resultant force is 132.3 N at an angle of 19.1 degrees. | http://en.wikibooks.org/wiki/Statics/Forces_As_Vectors | 13 |
83 | Modeling Word Problems
Using models is a critical step in helping students transition from concrete manipulative work with word problems to the abstract step of generating an equation to solve contextual problems. By learning to use simple models to represent key mathematical relationships in a word problem, students can more easily make sense of word problems, recognize both the number relationships in a given problem and connections among types of problems, and successfully solve problems with the assurance that their solutions are reasonable.
Why is modeling word problems important?
Mr. Alexander and teachers from his grade level team were talking during their Professional Learning Community (PLC) meeting about how students struggle with word problems. Everyone felt that only a few of their students seemed to be able to quickly generate the correct equation to solve the problem. Many students just seem to look for some numbers and do something with them, hoping that will solve the problem.
Mr. Alexander had recently learned about using modeling for word problems in a workshop he had attended. He began to share the model diagrams with his teammates and they were excited to see how students might respond to this approach. They even practiced several model diagrams among themselves as no one had ever learned to use models with word problems. Since part of their PLC work freed them up to observe lessons in each others' rooms, they decided they would watch Mr. Alexander introduce modeling to his students.
So, two days later they gathered in Mr. Alexander's room for the math lesson. Mr. Alexander presented the following problem:
Lily and her brother, Scotty, were collecting cans for the recycling drive. One weekend they collected 59 cans and the next weekend they collected 85 cans. How many cans were collected in all?
Mr. Alexander went over the problem and drew a rectangular bar divided into two parts on the board, explaining that each part of the rectangle was for the cans collected on one of the weekends and the bracket indicated how many cans were collected in all. Reviewing the problem, Mr. Alexander asked students what was not known, and where the given numbers would go and why. This resulted in the following bar model:
The class then discussed what equations made sense given the relationship of the numbers in the bar model. This time many students wrote the equation, 59 + 85 = ?, and solved the problem. In their discussion after the lesson, Mr. Alexander's teammates mentioned that they noticed a much higher degree of interest and confidence in problem solving when Mr. Alexander introduced the bar model. Everyone noticed that many more students were successful in solving problems once modeling was introduced and encouraged. As the class continued to do more word problems, the diagrams appeared to be a helpful step in scaffolding success with word problems.
Word problems require that students have the skills to read, understand, strategize, compute, and check their work. That's a lot of skills! Following a consistent step-by-step approach-and providing explicit, guided instruction in the beginning - can help our students organize their thoughts and make the problem-solving task manageable.
Forsten, 2010, p.1
Students often have regarded each word problem as a new experience, and fail to connect a given problem to past problems of a similar type. Students need to sort out the important information in a word problem, and identify the relationships among the numbers involved in the situation. A model can help students organize their thinking about a given problem, and identify an equation that would be helpful in solving the problem. Models are a kind of graphic organizer for the numbers in a word problem, and may connect to students' work with graphic organizers in other subjects.
The failure to capture the mathematics being taught with a picture that helps students visualize what is going on is one of the most serious missed opportunities I observe.
Leinwand, 2009, p.19
Modeling can begin with young learners with basic addition, subtraction, multiplication, and division problems. Modeling can be extended to ratio, rate, percent, multi-step, and other complex problems in the upper grades. Utilizing modeling on a routine basis in early grades can lay an important foundation for later work, including the transition to algebra, by stressing patterns, generalizations, and how numbers relate to each other.
Knowledge can be represented both linguistically and nonlinguistically. An increase in nonlinguistic representations allows students to better recall knowledge and has a strong impact on student achievement (Marzano, et. al., 2001, Section 5). In classic education research, Bruner (1961) identified three modes of learning: enactive (manipulating concrete objects), iconic (pictures or diagrams), and symbolic (formal equation). The iconic stage, using pictures and diagrams, is an important bridge to abstracting mathematical ideas using the symbols of an equation. Research has also validated that students need to see an idea in multiple representations in order to identify and represent the common core (Dienes, undated). For word problems involving the operation of addition, students need to experience several types of problems to generalize that when two parts are joined they result in a total or a quantity that represents the whole. Whether the items are bears, balloons, or cookies no longer matters as the students see the core idea of two subsets becoming one set. Dienes discovered that this abstraction is only an idea; therefore it is hard to represent. Diagrams can capture the similarity students notice in addition/joining problems where both addends are known and the total or whole is the unknown. Diagrams will also be useful for missing addend situations. Like Bruner, Dienes saw diagrams as an important bridge to abstracting and formalizing mathematical ideas.
Along with Bruner and Dienes, Skemp (1993) identified the critical middle step in moving from a real-life situation to the abstractness of an equation. While students need to experience many real-life situations they will get bogged down with the "noise" of the problem such as names, locations, kinds of objects, and other details. It is the teacher's role to help students sort through the noise to capture what matters most for solving the problem. A diagram can help students capture the numerical information in a problem, and as importantly, the relationship between the numbers, e.g. Do we know both the parts, or just one of the parts and the whole? Are the parts similar in size, or is one larger than the other? Once students are comfortable with one kind of diagram, they can think about how to relate it to a new situation. A student who has become proficient with using a part-part-whole bar model diagram when the total or whole is unknown (as in the collecting cans problem in Mr. Alexander's class), can not only use the model in other part-part-whole situations, but can use it in new situations, for example, a missing addend situation. Given several missing addend situations, students may eventually generalize that these will be subtractive situations, solvable by either a subtraction or adding on equation.
The work of Bruner, Dienes and Skemp informed the development of computation diagrams in some elementary mathematics curriculum materials in the United States. Interestingly, it also informed the development of curriculum in Singapore, as they developed the "Thinking Schools, Thinking Nation" era of reforming their educational model and instructional strategies (Singapore Ministry of Education, 1997). The bar model is a critical part of "Singapore Math." It is used and extended across multiple grades to capture the relationships within mathematical problems. Singapore has typically scored near the top of the world on international assessments, a possible indicator of the strong impact of including the visual diagram step to represent and solve mathematical problems.
What is modeling word problems?
Models at any level can vary from simple to complex, realistic to representational. Young students often solve beginning word problems, acting them out, and modeling them with the real objects of the problem situation, e.g. teddy bears or toy cars. Over time they expand to using representational drawings, initially drawing pictures that realistically portray the items in a problem, and progressing to multi-purpose representations such as circles or tally marks. After many concrete experiences with real-life word problems involving joining and separating, or multiplying and dividing objects, teachers can transition students to inverted-V and bar model drawings which are multi-purpose graphic organizers tied to particular types of word problems.
Modeling Basic Number Relationships
Simple diagrams, sometimes known as fact triangles, math mountains, situation diagrams, or representational diagrams have appeared sporadically in some curriculum materials. But students' problem solving and relational thinking abilities would benefit by making more routine use of these diagrams and models.
Young children can begin to see number relationships that exist within a fact family through the use of a model from which they derive equations. An inverted-V is one simple model that helps students see the addition/subtraction relationships in a fact family, and can be used with word problems requiring simple joining and separating. The inverted-V model can be adapted for multiplication and division fact families. For addition, students might think about the relationships among the numbers in the inverted-V in formal terms, addend and sum, or in simpler terms, part and total, as indicated in the diagrams below.
A specific example for a given sum of 10 would be the following, depending on which element of the problem is unknown.
6 + 4 = ? 6 + ? = 10 ? + 4 = 10
4 + 6 = ? 10 - 6 = ? 10 - 4 = ?
While often used with fact families, and the learning of basic facts, inverted-V diagrams can also work well with solving word problems. Students need to think about what they know and don't know in a word problem - are both the parts known, or just one of them? By placing the known quantities correctly into the inverted-V diagram, students are more likely to determine a useful equation for solving the problem, and see the result as reasonable for the situation. For example, consider the following problem:
Zachary had 10 train cars. Zachary gave 3 train cars to his brother. How many train cars does Zachary have now?
Students should determine they know how many Zachary started with (total or whole), and how many he gave away (part of the total). So, they need to find out how many are left (other part of the total). The following inverted-V diagram represents the relationships among the numbers of this problem:
3 + ? = 10 or 10 - 3 = ?, so Zachary had 7 train cars left.
As students move on to multiplication and division, the inverted-V model can still be utilized in either the repeated addition or multiplicative mode. Division situations do not require a new model; division is approached as the inverse of multiplication or a situation when one of the factors is unknown.
Again, the inverted-V diagram can be useful in solving multiplication and division word problems. For example, consider the following problem:
Phong planted 18 tomato plants in 3 rows. If each row had the same number of plants, how many plants were in each row?
Students can see that they know the product and the number OF rows. The number IN A row is unknown. Either diagram below may help solve this problem, convincing students that 6 in a row is a reasonable answer.
While the inverted-V diagram can be extended to multi-digit numbers, it has typically been used with problems involving basic fact families. Increasing the use of the inverted-V model diagram should heighten the relationship among numbers in a fact family making it a useful, quick visual for solving simple word problems with the added benefit of using and increasing the retention of basic facts.
Models and Problem Types for Computation
As children move to multi-digit work, teachers can transition students to bar model drawings, quick sketches that help students see the relationships among the important numbers in a word problem and identify what is known and unknown in a situation.
Although there are a number of ways that word problems can be distinguished from each other, one of the most useful ways of classifying them focuses on the types of action or relationships described in the problems. This classification corresponds to the way that children think about problems.
Carpenter, et.al, 1999, p. 7
Bar models work well with recognition of problem types. There are four basic types for addition and subtraction word problems: 1) join (addition), 2) separate (subtraction), 3) part-part-whole, and 4) comparison (Carpenter, Fennema, Franke, Levi, & Empson, 1999, Chapter 2). Within each of the first three types, either the sum (whole or total), or one of the addends (parts) can be the unknown. For a comparison problem, either the larger quantity, smaller quantity, or the difference can be unknown.
By introducing students to bar models a teacher has an important visual to facilitate student thinking about the mathematical relationships among the numbers of a given word problem.
With bar models the relationships among numbers in all these types of problems becomes more transparent, and helps bridge student thinking from work with manipulatives and drawing pictures to the symbolic stage of writing an equation for a situation. With routine use of diagrams and well-facilitated discussions by teachers, student will begin to make sense of the parts of a word problem and how the parts relate to each other.
Part-Part-Whole Problems. Part-Part-Whole problems are useful with word problems that are about sets of things, e.g. collections. They are typically more static situations involving two or more subsets of a whole set. Consider the problem,
Cole has 11 red blocks and 16 blue blocks. How many blocks does Cole have in all?
Students may construct a simple rectangle with two parts to indicate the two sets of blocks that are known (parts/addends). It is not important to have the parts of the rectangle precisely proportional to the numbers in the problem, but some attention to their relative size can aid in solving the problem. The unknown in this problem is how many there are altogether (whole/total/sum), indicated by a bracket (or an inverted-V) above the bar, indicating the total of the 2 sets of blocks. The first bar model below reflects the information in the problem about Cole's blocks.
11 + 16 = ? so Cole has 27 blocks in all.
A similar model would work for a problem where the whole amount is known, but one of the parts (a missing addend) is the unknown. For example:
Cole had 238 blocks. 100 of them were yellow. If all Cole's blocks are either blue or yellow, how many were blue?
The following bar model would be useful in solving this problem.
100 + ? = 238 or 238 - 100 = ? so Cole has 138 blue blocks.
The answer has to be a bit more than 100 because 100 + 100 is 200 but the total here is 238 so the blue blocks have to be a bit more than 100.
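The number relationship behind every part-part-whole bar model is the same: part + part = whole, so whichever quantity is missing can be recovered from the other two. For readers who find a code sketch helpful, here is a minimal illustration (not from the original article) of that single relationship:

```python
def solve_part_part_whole(part1=None, part2=None, whole=None):
    """Given any two of (part1, part2, whole), return the missing third value."""
    if whole is None:
        return part1 + part2   # both parts known: find the total
    if part1 is None:
        return whole - part2   # missing addend
    return whole - part1       # missing addend

print(solve_part_part_whole(part1=11, part2=16))    # 27 blocks in all
print(solve_part_part_whole(part1=100, whole=238))  # 138 blue blocks
```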
The part-part-whole bar model can easily be expanded to large numbers, and other number types such as fractions and decimals. Consider the problem:
Leticia read 7 ½ books for the read-a-thon. She wants to read 12 books in all. How many more books does she have to read?
The first diagram below reflects this problem. Any word problem that can be thought of as parts and wholes is responsive to bar modeling diagrams. If a problem has multiple addends, students just draw enough parts in the bar to reflect the number of addends or parts, and indicate whether one of the parts, or the whole/sum, is the unknown, as shown in the second figure below.
12 - 7 ½ = ? or 7 ½ + ? = 12 so Leticia needs to read 4 ½ more books.
Join (Addition) and Separate (Subtraction) Problems.
Students who struggle with deciding whether they need to add or subtract, or later to multiply or divide, find the organizing potential of the bar model incredibly helpful.
Leinwand, 2009, p. 23
Some addition and subtraction problems have a stated action - something is added to or separated from a beginning quantity. While often considered a different problem type from the more static part-part-whole problems, join and separate problems can also use a rectangular bar model to represent the quantities involved. Students need to think about whether something is being joined (added) to an amount, or if something is being separated (subtracted). In addition the bracket indicates the total that will result when the additive action is completed. In whole number subtraction, a starting quantity is indicated by the bracket. It is decreased by an amount that is separated or taken away, resulting in a number that indicates what is remaining.
Consider this joining problem:
Maria had $20. She got $11 more dollars for babysitting. How much money does she have now?
Students can identify that the starting amount of $20 is one of the parts, $11 is another part (the additive amount), and the unknown is the sum/whole amount, or how much money she has now. The first diagram below helps represent this problem.
Consider the related subtractive situation:
Maria had $31. She spent some of her money on a new CD. Maria now has $16 left. How much money did Maria spend on the CD?
The second diagram above represents this situation. Students could use the model to help them identify that the total or sum is now $31, one of the parts (the subtractive change) is unknown, so the other part is the $16 she has left.
Comparison Problems. Comparison problems have typically been seen as difficult for children. This may partially be due to an emphasis on subtraction as developed in word problems that involve "take away" situations rather than finding the "difference" between two numbers. Interestingly, studies in countries that frequently use bar models have determined that students do not find comparison problems to be much more difficult than part-part-whole problems (Yeap, 2010, pp. 88-89).
A double bar model can help make comparison problems less mysterious. Basically, comparison problems involve two quantities (either one quantity is greater than the other one, or they are equal), and a difference between the quantities. Two bars, one representing each quantity, can be drawn with the difference being represented by the dotted area added onto the lesser amount. For example, given the problem:
Tameka rode on 26 county fair rides. Her friend, Jackson, rode on 19 rides. How many more rides did Tameka ride on than Jackson?
Students might generate the comparison bars diagram shown below, where the greater quantity, 26, is the longer bar. The dotted section indicates the difference between Jackson's and Tameka's quantities, or how much more Tameka had than Jackson, or how many more rides Jackson would have had to have ridden to have the same number of rides as Tameka.
26 - 19 = ? or 19 + ? = 26; the difference is 7 so Tameka rode 7 more rides.
Comparison problems express several differently worded relationships. If Tameka rode 7 more rides than Jackson, Jackson rode 7 fewer rides than Tameka. Variations of the double bar model diagram can make differently worded relationships more visual for students. It is often helpful for students to recognize that at some point both quantities have the same amount, as shown in the model below by the dotted line draw up from the end of the rectangle representing the lesser quantity. But one of the quantities has more than that, as indicated by the area to the right of the dotted line in the longer bar. The difference between the quantities can be determined by subtracting 19 from 26, or adding up from 19 to 26 and getting 7, meaning 26 is 7 more than 19 or 19 is 7 less than 26.
Comparison word problems are especially problematic for English Learners as the question can be asked several ways. Modifying the comparison bars may make the questions more transparent. Some variations in asking questions about the two quantities of rides that Tameka and Jackson rode might be:
- How many more rides did Tameka ride than Jackson?
- How many fewer rides did Jackson go on than Tameka?
- How many more rides would Jackson have had to ride to have ridden the same number of rides as Tameka?
- How many fewer rides would Tameka have had to ride to have ridden the same number of rides as Jackson?
Comparisons may also be multiplicative. Consider the problem:
Juan has 36 CDs in his collection. This is 3 times the amount of CDs that his brother, Marcos, has. How many CDs does Marcos have?
In this situation, students would construct a bar model, shown below on the left, with 3 parts. Students could divide the 36 into 3 equal groups to show the amount that is to be taken 3 times to create 3 times as many CDs for Juan.
36 ÷ 3 = ? or 3 x ? = 36, so Marcos has 12 CDs.
12 + 12 + 12 = ? (or 3 x 12 = ?), so Juan has 36 CDs.
A similar model can be used if the greater quantity is unknown, but the lesser quantity, and the multiplicative relationship are both known. If the problem was:
Juan has some CDs. He has 3 times as many CDs as Marcos who has 12 CDs. How many CDs does Juan have?
As seen in the diagram above on the right, students could put 12 in a box to show the number of CDs Marcos has; then duplicate that 3 times to show that Juan has 3 times as many CDs. Then the total number that Juan has would be the sum of those 3 parts.
Multiplication and Division Problems. The same model used for multiplicative comparisons will also work for basic multiplication word problems, beginning with single digit multipliers. Consider the problem:
Alana had 6 packages of gum. Each package holds 12 pieces of gum. How many pieces of gum does Alana have in all?
The following bar model uses a repeated addition view of multiplication to visualize the problem.
12 + 12 + 12 + 12 + 12 + 12 = 72 (or 6 x 12 = 72)
so Alana has 72 pieces of gum.
As students move into multi-digit multipliers, they can use a model that incorporates an ellipsis to streamline the bar model. For example:
Sam runs 32 km a day during April to get ready for a race. If Sam runs every day of the month, how many total kilometers did he run in April?
30 x 32 km = 30 x 30 km + 30 x 2 km = 960 km
Sam ran 960 km during the 30 days of April.
Since division is the inverse of multiplication, division word problems will utilize the multiplicative bar model where the product (dividend) is known, but one of the factors (divisor or quotient) is the unknown.
Problems Involving Rates, Fractions, Percent & Multiple Steps. As students progress through the upper grades, they can apply new concepts and multi-step word problems to bar model drawings. Skemp (1993) identified the usefulness of relational thinking as critical to mathematical development. A student should be able to extend their thinking based on models they used earlier, by relating and adapting what they know to new situations.
Consider this rate and distance problem:
Phong traveled 261 miles to see her grandmother. She averaged 58 mph. How long did it take her to get to her grandmother's house?
The following model builds on the part-part-whole model, using a repeated addition format for multiplication and division. It assumes that students have experience with using the model for division problems whose quotients are not just whole numbers. As they build up to (or divide) the total of 261 miles, they calculate that four 58's will represent 4 hours of travel, and the remaining 29 miles would be represented by a half box, so it would take Phong 4½ hours of driving time to get to her grandmother's house.
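For readers who want to check the arithmetic behind the build-up-by-58s model, here is a minimal Python sketch; the variable names are illustrative and not part of the original lesson.

```python
# Build-up (or division) view of the Phong problem: how many 58-mile "boxes"
# fit into 261 miles, and what fraction of a box is left over?
distance = 261   # miles to grandmother's house
rate = 58        # miles per hour

whole_hours, leftover_miles = divmod(distance, rate)  # 4 full boxes, 29 miles left
fraction_of_hour = leftover_miles / rate              # 29/58 = 0.5, the "half box"

print(whole_hours, leftover_miles, fraction_of_hour)  # 4 29 0.5
print(whole_hours + fraction_of_hour, "hours")        # 4.5 hours
```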
Even a more complex rate problem can be captured with a combination of similar models. Consider this problem:
Sue and her friend Anne took a trip together. Sue drove the first 2/5 of the trip and Anne drove 210 miles for the last 3/5 of the trip. Sue averaged 60 mph and Anne averaged 70 mph. How long did the trip take them?
There are several ways students might combine or modify a basic bar model. One solution might be the following, where the first unknown is how many miles Sue drove. A bar divided into fifths represents how to calculate the miles Sue drove. Since we know that the 210 miles Anne drove is 3/5 of the total trip, each of Anne's boxes, representing 1/5 of the trip, is 70 miles. Therefore, Sue drove two 70-mile parts, or 140 miles, to equal 2/5 of the total trip.
The diagram now needs to be extended to show how to calculate the number of hours. Anne's 210 mile segment, divided by her 70 mph rate will take 3 hours, as recorded on the following extension of the diagram. Sue's distance of 140 miles now needs to be divided into 60 mph segments to determine her driving time of 2 1/3 hours. So, the total trip of 350 miles would take 5 1/3 hours of driving time, considering the two driving rates.
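The whole chain of reasoning can also be checked with a few lines of Python. The sketch below simply mirrors the steps above, using exact fractions so the thirds stay exact; the variable names are ours, not the authors'.

```python
from fractions import Fraction as F

# Step 1: Anne's 210 miles is 3/5 of the trip, so find the whole trip and Sue's share.
anne_miles = 210
total_miles = anne_miles / F(3, 5)   # 350 miles
sue_miles = total_miles * F(2, 5)    # 140 miles

# Step 2: divide each distance by the corresponding rate to get driving time.
sue_hours = sue_miles / 60           # 7/3 hours, i.e. 2 1/3 hours at 60 mph
anne_hours = F(anne_miles, 70)       # 3 hours at 70 mph

print(total_miles, sue_miles)        # 350 140
print(sue_hours + anne_hours)        # 16/3, i.e. 5 1/3 hours of driving
```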
Certainly, a foundation of using simple bar model drawings needs to be well developed in early grades for students to extend diagrams with understanding in later grades. The Sue and Anne rate-time-distance problem would not be the place to begin using bar models! But, by building on work in earlier grades with models, this extended model makes the mathematics of this complex problem more transparent, and helps students think through the steps.
Consider a simpler multi-step problem:
Roberto purchased 5 sports drinks at $1.25 each. Roberto gave the cashier $20. How much change did he get back?
Again, there may be student variations when they begin to extend the use of diagrams in multi-step or more complex problems. Some students might use two diagrams at once, as shown below on the left. Others may indicate computation within one diagram, as shown in the diagram on the right.
With routine experience with bar modeling, students can extend the use of the models to problems involving relationships that can be expressed with variables. Consider this simple problem that could be represented algebraically:
Callan and Avrielle collected a total of 190 bugs for a science project. Callan collected 10 more bugs than Avrielle. How many bugs did Callan collect?
Let n equal the number of bugs Avrielle collected, and n + 10 equal the number of bugs Callan collected. The following model might be created by students:
Since n + n = 180 (or 2n = 180), n = 90. Therefore, Callan collected 90 + 10 or 100 bugs and Avrielle collected 90 bugs for a total of 190 bugs collected together.
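The same bar model translates directly into a couple of lines of arithmetic; the short sketch below is ours and simply mirrors the two equal bars plus the extra 10.

```python
# n + (n + 10) = 190  ->  2n = 180  ->  n = 90
total_bugs = 190
difference = 10

avrielle = (total_bugs - difference) // 2   # 90
callan = avrielle + difference              # 100

print(avrielle, callan, avrielle + callan)  # 90 100 190
```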
In using the model method, students have to translate information and relationships in words into visual representations, which are the models. They also have to manipulate and transform the visual representations to generate information that is useful in solving given problems.
In using algebraic methods, students similarly engage in these processes . . . The model method provides a platform where students engage in such algebraic processes using the less abstract visual medium.
Yeap, 2010, p. 162
Understanding the structure of a word problem involves knowing how the mathematical information in a given word problem is related, and how to extract the components needed for solving the problem. Bar model drawings can help students become more proficient at identifying the variables involved in a problem as well as the relationship(s) between them. This ability to focus on the relationships among the numbers in a given problem, and to recognize the mathematical structure as a particular type of problem, is part of relational thinking - a critical skill for success in algebra. Building inverted-V and bar models into pre-algebraic work in grades K-7 can make students more powerfully ready for the formal study of algebra.
Planning and Instruction
How do I intentionally plan for and use modeling?
If modeling is not a way you learned to identify the important information and numerical relationships in word problems, you may want to review some of the resources on problem types (see Carpenter's book in References and Resources section below), or bar modeling (see books by Forsten, Walker, or Yeap in the References and Resources section below). You may also want to practice the different types of models. Decide which are most accessible for your students, and start with introducing one model at a time, helping students determine what is unknown in the problem, and where that unknown and the other numerical information should be placed in the bar model. A question mark, box, or a variable can be used for the unknown. As students become comfortable with that model, introduce, and compare and contrast a second model with the known model.
You might introduce bar model drawings, or inverted-V diagrams, when there is a unit in your curriculum that contains several word problems. If word problems are sporadic in your curriculum, you might introduce a "Word-Problem-of-the-Day" format where students solve a problem, or cluster of related problems, each day.
To emphasize model drawings, you might have students take a set of problems, and classify them as to which model would help them solve the problem, or do a matching activity between word problems and model drawings. Ask students to explain why a particular equation matches a model and would be useful in finding the solution. Another activity is to present a bar model with some numerical information and an unknown. Then ask students to write a word problem that could logically be solved using that model. Ask students to explain why the word problem created matches a diagram well. As students use models for solving word problems, they may generate different equations to solve a problem even though their models are the same. Plan for class discussions where students may discuss why there can be different equations from the same bar model.
Several studies have shown that students who can visualize a word problem through modeling increase their problem solving ability and accuracy. This has been particularly documented in Singapore and other high performing countries where bar modeling is used extensively across grades. Students are more likely to solve problems correctly when they incorporate bar model drawings. On difficult problems, students who have been able to easily generate equations with simple problems often find that bar model drawings are especially helpful in increasing accuracy as problems increase in difficulty or involve new concepts (Yeap, 2010, pp. 87-89).
TALK: Reflection and Discussion
- Are there particular types of word problems that your students solve more easily than others? What characterizes these problems?
- Identify some basic facts with which your students struggle. How could you incorporate those facts into word problems, and how might the use of the inverted-V model help?
- How do bar model drawings help extract and represent the mathematical components and numerical relationships of a word problem?
- With which type of word problems would you begin to show your students the use of bar model drawings?
DO: Action Plans
- Select several story problems from your curriculum, MCA sample test items, or the Forsten, Walker, or Yeap resources on bar model drawing. Practice creating a bar model for several problems. Compare your models with others in your grade level, team, or PLC group. Practice until you feel comfortable with various model drawings.
- Investigate the types of multiplication and division problems, and how bar models can be used with different types such as measurement and partitive division, arrays, equal groups, rates. The Carpenter resource may be helpful.
- Select some problems from your curriculum that are of a similar type. Which bar model would be helpful in solving this type of problem? Practice using the model yourself with several problems of this type. How will you introduce the model to your students?
- Identify some basic facts with which your students struggle. Craft some rich word problems utilizing these fact families. Introduce the inverted-V diagrams with the word problems to make sense of the information in the word problem, and discuss strategies for solving the problems.
- Initiate a "Word-Problem-of-the-Day". Students might want to keep problem solving notebooks. Begin with problems of a particular type, and show students how to use a bar model to represent the information in a problem. Cluster several problems of a given type during the week. What improvements do you see in student selection of appropriate equations, accuracy of solutions, and ability to estimate or justify their answers as they increase the use of bar models to solve the word problems? A quick way to disseminate the "Word-Problem-of-the-Day" is to duplicate the problem on each label on a sheet of address labels. Students can just peel off the daily problem, add it to their problem solving notebook or a sheet of paper and solve away.
- When your district is doing a curriculum materials review, advocate to include a criteria that requires the use of visual models in helping students make sense of mathematical problems.
- Watch some of the videos of students using models on the Powerful Practices CD (see Carpenter and Romberg in References and Resources Section).
References and Resources
Bruner, J. S. (1961). The act of discovery. Harvard Educational Review, 31, pp. 21-32, in Yeap, Ban Har. (2010). Bar modeling: A problem solving tool. Singapore: Marshall Cavendish Education.
Carpenter, T. P., Fennema, E., Franke, M. L., Levi, L. & Empson, S. B. (1999). Children's mathematics: Cognitively guided instruction. Portsmouth, NH: Heinemann. (Book and CD)
Carpenter, T. P. & Romberg. T. A. (2004). Modeling, generalization, and justification in mathematics cases, in Powerful practices in mathematics & science. Madison, WI: National Center for Improving Student Learning and Achievement in Mathematics and Science. www.wcer.wisc.edu/ncisla (Booklet and CD)
Dienes, Z. (undated). Zoltan Dienes' six-state theory of learning mathematics. Retrieved from http://www.zoltandienes.com
Forsten, C. (2009). Step-by-step model drawing: Solving math problems the Singapore way. Peterborough, NH: SDE: Crystal Spring Books. http://www.crystalspringsbooks.com
Hoven, J. & Garelick, B. (2007). Singapore math: Simple or complex? Educational Leadership, 65 (3), 28-31.
Leinwand, S. (2009). Accessible mathematics: 10 instructional shifts that raise student achievement. Portsmouth, NH: Heinemann.
Marzano, R. J., Norford, J. S., Paynter, D. E., Pickering, D. J., & Gaddy, B. B. (2001). A handbook for classroom instruction that works. Alexandria, VA: Association for Supervision and Curriculum Development.
Singapore Ministry of Education. (1997). Retrieved http://moe.gov.sg
Skemp, R. R. (January, 1993). "Theoretical foundations of problem solving: A position paper." University of Warwick. Retrieved from http://www.grahamtall.co.uk/skemp/sail/theops.html
Walker, L. (2010). Model drawing for challenging word problems: Finding solutions the Singapore way. Peterborough, NH: SDE: Crystal Spring Books. http://www.crystalspringsbooks.com
Yeap, B. H. (2010). Bar modeling: A problem solving tool. Singapore: Marshall Cavendish Education. http://www.singaporemath.com | http://scimathmn.org/stemtc/resources/mathematics-best-practices/modeling-word-problems | 13 |
50 | Sea ice is frozen seawater that floats on the ocean surface. It forms in both the Arctic and the Antarctic in each hemisphere’s winter, and it retreats, but does not completely disappear, in the summer.
The Importance of Sea Ice
Sea ice has a profound influence on the polar physical environment, including ocean circulation, weather, and regional climate. As ice crystals form, they expel salt, which increases the salinity of the underlying ocean waters. This cold, salty water is dense, and it can sink deep to the ocean floor, where it flows back toward the equator. The sea ice layer also restricts wind and wave action near coastlines, lessening coastal erosion and protecting ice shelves. And sea ice creates an insulating cap across the ocean surface, which reduces evaporation and prevents heat loss to the atmosphere from the ocean surface. As a result, ice-covered areas are colder and drier than they would be without ice.
Sea ice also has a fundamental role in polar ecosystems. When sea ice melts in the summer, it releases nutrients into the water, which stimulate the growth of phytoplankton, the base of the marine food web. As the ice melts, it exposes ocean water to sunlight, spurring photosynthesis in phytoplankton. When ice freezes, the underlying water gets saltier and sinks, mixing the water column and bringing nutrients to the surface. The ice itself is habitat for animals such as seals, Arctic foxes, polar bears, and penguins.
Sea ice’s influence on the Earth is not just regional; it’s global. The white surface of sea ice reflects far more sunlight back to space than ocean water does. (In scientific terms, ice has a high albedo.) Once sea ice begins to melt, a self-reinforcing cycle often begins. As more ice melts and exposes more dark water, the water absorbs more sunlight. The sun-warmed water then melts more ice. Over several years, this positive feedback cycle (the “ice-albedo feedback”) can influence global climate.
Sea ice plays many important roles in the Earth system, but influencing sea level is not one of them. Because it is already floating on the ocean surface, sea ice is already displacing its own weight. Melting sea ice won’t raise ocean level any more than melting ice cubes will cause a glass of iced tea to overflow.
The Sea Ice Life Cycle
When seawater begins to freeze, it forms tiny crystals just millimeters wide, called frazil. How the crystals coalesce into larger masses of ice depends on whether the seas are calm or rough. In calm seas, the crystals form thin sheets of ice, nilas, so smooth they have an oily or greasy appearance. These wafer-thin sheets of ice slide over each other forming rafts of thicker ice. In rough seas, ice crystals converge into slushy pancakes. These pancakes can slide over each other to form smooth rafts, or they can collide into each other, creating ridges on the surface and keels on the bottom.
Some sea ice is fast ice that holds fast to a coastline or the sea floor, and some sea ice is pack ice that drifts with winds and currents. Because pack ice is dynamic, pieces of ice can collide and form much thicker ice. Leads—narrow, linear openings in the ice ranging in size from meters to kilometers—continually form and disappear.
Larger and more persistent openings, polynyas, are sustained by upwelling currents of warm water or steady winds that blow the sea ice away from a spot as quickly as it forms. Polynyas often occur along coastlines where offshore winds maintain their presence.
As the water and air temperatures rise each summer, some sea ice melts. Because of differences in geography and climate, it’s normal for Antarctic sea ice to melt more completely in the summer than Arctic sea ice. Ice that escapes summer melting may last for years, often growing to a thickness of 2 to 4 meters (roughly 6.5 to 13 feet) or more in the Arctic.
For ice to thicken, the ocean must lose heat to the atmosphere. But the ice insulates the ocean like a blanket. Eventually, the ice gets so thick that no more heat can escape. Once the ice reaches this thickness—3 to 4 meters (10 to 13 feet)—further thickening isn’t possible except through collisions and ridge-building.
Ice that survives the summer melt season is called multi-year ice. Multi-year ice increasingly loses salt and hardens each year it survives the summer melt. In contrast to multi-year ice, first-year ice—ice that has grown just since the previous summer—is thinner, saltier, and more prone to melt in the subsequent summer.
Monitoring Sea Ice
Records assembled by Vikings showing the number of weeks per year that ice occurred along the north coast of Iceland date back to A.D. 870, but a more complete record exists since 1600. More extensive written records of Arctic sea ice date back to the mid-1700s. The earliest of those records relate to Northern Hemisphere shipping lanes, but records from that period are sparse. Air temperature records dating back to the 1880s can serve as a stand-in (proxy) for Arctic sea ice, but such temperature records were initially collected at only 11 locations. Russia’s Arctic and Antarctic Research Institute has compiled ice charts dating back to 1933. Today, scientists studying Arctic sea ice trends can rely on a fairly comprehensive record dating back to 1953, using a combination of satellite records, shipping records, and ice charts from several countries.
In the Antarctic, data prior to the satellite record are even more sparse. To try to extend the historical record of Southern Hemisphere sea ice extent further back in time, scientists have been investigating two types of proxies for sea ice extent. One is records kept by Antarctic whalers since the 1930s that document the location of all whales caught. Because whales tend to congregate near the sea ice edge to feed, their locations could be a proxy for the ice extent. A second possible proxy is the presence of a phytoplankton-derived organic compound in Antarctic ice cores. Since phytoplankton grow most abundantly along the edges of the ice pack, the concentration of this sulfur-containing organic compound has been proposed as an indicator of how far the ice edge extended from the continent. Currently, however, only the satellite record is considered sufficiently reliable for studying Antarctic sea ice trends.
Since 1979, satellites have provided a continuous, nearly complete record of Earth’s sea ice. The most valuable data sets come from satellite sensors that observe microwaves emitted by the ice surface because, unlike visible light, the microwave energy radiated by the sea ice surface passes through clouds and can be measured even at night. The continuous sea ice record began with the Nimbus-7 Scanning Multichannel Microwave Radiometer (October 1978-August 1987) and continued with the Defense Meteorological Satellite Program Special Sensor Microwave Imager (1987 to present). The Advanced Microwave Scanning Radiometer–for EOS on NASA’s Aqua satellite has been observing sea ice since 2002.
Ice Area Versus Ice Extent
Satellite images of sea ice are made from observations of microwave energy radiated from the Earth’s surface. Because ocean water emits microwaves differently than sea ice, ice “looks” different to the satellite sensor. The observations are processed into digital picture elements, or pixels. Each pixel represents a square surface area on Earth, often 25 kilometers by 25 kilometers. Scientists estimate the amount of sea ice in each pixel.
There are two ways to express the total polar ice cover: ice area and ice extent. To estimate ice area, scientists calculate the percentage of sea ice in each pixel, multiply by the pixel area, and total the amounts. To estimate ice extent, scientists set a threshold percentage, and count every pixel meeting or exceeding that threshold as “ice-covered.” The National Snow and Ice Data Center, one of NASA’s Distributed Active Archive Centers, monitors sea ice extent using a threshold of 15 percent.
The threshold-based approach may seem less accurate, but it has the advantage of being more consistent. When scientists are analyzing satellite data, it is easier to say whether there is or isn't at least 15 percent ice cover in a pixel than it is to say, for example, whether the ice cover is 70 percent or 75 percent. By reducing the uncertainty in the amount of ice, scientists can be more certain that changes in sea ice cover over time are real.
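As an illustration of the difference between the two measures, the sketch below computes both from a small made-up grid of ice concentrations. The 25-kilometer pixels and the 15 percent threshold come from the text, but the concentration values and the code itself are only a simplified example of the calculation, not the processing actually used for the satellite data.

```python
import numpy as np

pixel_area = 25 * 25        # square kilometers per pixel
concentration = np.array([  # fraction of each pixel covered by ice (made-up values)
    [0.00, 0.05, 0.40, 0.80, 0.95],
    [0.00, 0.10, 0.55, 0.90, 1.00],
    [0.05, 0.20, 0.70, 0.95, 1.00],
    [0.05, 0.30, 0.85, 1.00, 1.00],
    [0.10, 0.50, 0.90, 1.00, 1.00],
])

# Ice area: weight each pixel by the fraction of it that is ice-covered.
ice_area = (concentration * pixel_area).sum()

# Ice extent: count the full area of every pixel at or above the 15 percent threshold.
ice_extent = ((concentration >= 0.15) * pixel_area).sum()

print(f"area:   {ice_area:.0f} km^2")
print(f"extent: {ice_extent:.0f} km^2")
```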
Arctic Sea Ice
Most Arctic sea ice occupies an ocean basin largely enclosed by land. Because there is no landmass at the North Pole, sea ice extends all the way to the pole, making the ice subject to the most extreme oscillations between wintertime darkness and summertime sunlight. Likewise, because the ocean basin is surrounded by land, ice has less freedom of movement to drift into lower latitudes and melt. Sea ice also forms in areas south of the Arctic Ocean in winter, including the Sea of Okhotsk, the Bering Sea, Baffin Bay, Hudson Bay, the Greenland Sea, and the Labrador Sea.
Arctic sea ice reaches its maximum extent each March and its minimum extent each September. This ice has historically ranged from roughly 16 million square kilometers (about 6 million square miles) each March to roughly 7 million square kilometers (about 2.7 million square miles) each September.
On time scales of years to decades, the dominant cause of atmospheric variability in the northern polar region is the Arctic Oscillation (AO). (There is still debate among scientists whether the North Atlantic Oscillation and the Arctic Oscillation are the same phenomenon or different but related patterns.) The Arctic Oscillation is an atmospheric seesaw in which atmospheric mass shifts between the polar regions and the mid-latitudes. The shifting can intensify, weaken, or shift the location of semi-permanent low and high-pressure systems. These changes influence the strength of the prevailing westerly winds and the track that storms tend to follow.
During the “positive” phase of the Arctic Oscillation, winds intensify, which increases the size of leads in the ice pack. The thin, young ice that forms in these leads is more likely to melt in the summer. The strong winds also tend to flush ice out of the Arctic through the Fram Strait. During “negative” phases of the oscillation, winds are weaker. Multiyear ice is less likely to be swept out of the Arctic basin and into the warmer waters of the Atlantic. The Arctic Oscillation was in a strong positive phase between 1989 and 1995, but since the late 1990s, it has been in a neutral state.
Current Status and Trends
In September 2008, Arctic sea ice dropped to its second-lowest extent since satellite records began in 1979: 4.67 million square kilometers (1.8 million square miles). Between 1979 and 2006, the annual average decline was 45,100 square kilometers per year, which is about 3.7 percent per decade. But the September minimum ice extent dropped by an average of nearly 57,000 square kilometers per year, which is just over 7.5 percent per decade. In every geographic area, in every month, and every season, current ice extent is lower today than it was during the 1980s and 1990s.
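The per-year and per-decade figures are related through the long-term average extent. The small sketch below shows the conversion; the baseline averages are approximate values inferred from the percentages quoted above, not numbers taken from the underlying data.

```python
# Converting a decline in km^2 per year into percent per decade.
annual_decline = 45_100          # km^2 per year, annual average extent
september_decline = 57_000       # km^2 per year, September minimum extent

annual_baseline = 12_200_000     # approx. 1979-2000 average annual extent, km^2 (assumed)
september_baseline = 7_500_000   # approx. 1979-2000 average September extent, km^2 (assumed)

print(f"{annual_decline * 10 / annual_baseline:.1%} per decade")        # about 3.7%
print(f"{september_decline * 10 / september_baseline:.1%} per decade")  # about 7.6%
```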
Natural variability and rising temperatures linked to global warming both appear to have played a role in this decline. The Arctic Oscillation’s strongly positive mode through the mid-1990s flushed thicker, older ice out of the Arctic, replacing multiyear ice with first-year ice that is more prone to melting. After the mid-1990s, the AO assumed a more neutral phase, but sea ice failed to recover. Instead, a pattern of steep Arctic sea ice decline began in 2002. The AO likely triggered a phase of accelerated melt that continued into the next decade thanks to unusually warm Arctic air temperatures.
[Table: average minimum Arctic sea ice extent by year, in million square kilometers, compared to the 1979-2000 average in million square kilometers and in percent]
The sea ice minimum was especially dramatic in 2007, when Arctic sea ice extent broke all previous records by mid-August, more than a month before the end of melt season. Both the southern and northern routes through the Northwest Passage opened in mid-September. Ice also became particularly prone to melting in the Beaufort Gyre that summer. The Beaufort Gyre is a clockwise-moving ocean and ice circulation pattern in the Beaufort Sea, and starting in the late 1990s, ice began to melt in the southernmost stretch of the gyre. In the summer of 2007, sea ice retreat was especially pronounced in the region encompassing the Beaufort, Chukchi, East Siberian, Laptev, and Kara Seas.
Many global climate models predict that the Arctic will be ice free for at least part of the year before the end of the century. Some models predict an ice-free Arctic by mid-century, and some even sooner. Depending on how much Arctic sea ice continues to melt, the ice could become extremely vulnerable to natural variability. In the future, the ice might respond even more dramatically than it has in the past to natural cycles such as the Arctic Oscillation.
Impacts of Arctic Sea Ice Loss
Projected effects of declining sea ice include loss of habitat for seals and polar bears, as well as movement of polar bears onto land where bear-human encounters may increase. Indigenous peoples in the Arctic who rely on Arctic animals for food have already described changes in the health and numbers of polar bears.
As sea ice retreats from coastlines, wind-driven waves—combined with permafrost thaw—can lead to rapid coastal erosion. Alaskan and Siberian coastlines have already experienced coastal erosion.
Other potential impacts of Arctic sea ice loss include changed weather patterns: less precipitation in the American West, a weaker storm track that has shifted south over the Atlantic, or (according to one simulation) an intensified storm track.
Some researchers have hypothesized that melting sea ice could interfere with ocean circulation. In the Arctic, ocean circulation is driven by the sinking of dense, salty water. A cap of freshwater resulting from rapid, extensive sea ice melt could interfere with ocean circulation at high latitudes. Although a study published in 2005 suggested that the Atlantic meridional (north-south) overturning circulation had slowed by about 30 percent between 1957 and 2004, that conclusion was not based on comprehensive measurements. Subsequent modeling analyses indicated that the freshwater from melting sea ice was not likely to affect ocean circulation for decades.
Antarctic Sea Ice
The Antarctic is in some ways the precise opposite of the Arctic. The Arctic is an ocean basin surrounded by land, which means that the sea ice is corralled in the coldest, darkest part of the Northern Hemisphere. The Antarctic is land surrounded by ocean. Whereas Northern Hemisphere sea ice can extend to roughly 40 degrees north, Southern Hemisphere sea ice can extend to roughly 50 degrees south. Moreover, Antarctic sea ice does not extend southward to the pole; it can only fringe the continent.
Because of this geography, the Antarctic’s sea ice coverage is larger than the Arctic’s in winter, but smaller in the summer. Total Antarctic sea ice peaks in September—the end of Southern Hemisphere winter—historically rising to an extent of roughly 18 million square kilometers (about 6.9 million square miles). Ice extent reaches its minimum in February, when it dips to roughly 3 million square kilometers (about 1.2 million square miles).
To study patterns and trends in Antarctic sea ice, scientists commonly divide the sea ice pack into five sectors: the Weddell Sea, the Indian Ocean, the western Pacific Ocean, the Ross Sea, and the Bellingshausen/Amundsen seas. In some sectors, it is common for nearly all the sea ice to melt in the summer.
Antarctic sea ice is distributed around the entire fringe of the continent—a much broader area than the Arctic—and it is exposed to a broader range of land, ocean, and atmospheric influences. Because of the geographic and climatic diversity, Antarctic sea ice is more variable from year to year than Arctic sea ice. In addition, climate oscillations don’t affect ice in all sectors the same way, so it is more difficult to generalize the influence of climate patterns to the entire Southern Hemisphere ice pack.
Similar to the Arctic, the Antarctic experiences atmospheric oscillations and recurring weather patterns that influence sea ice extent. The primary variation in atmospheric circulation in the Antarctic is the Antarctic Oscillation, also called the Southern Annular Mode. Like the Arctic Oscillation, the Antarctic Oscillation involves a large-scale seesawing of atmospheric mass between the pole and the mid-latitudes. This oscillation can intensify, weaken, or shift the location of semi-permanent low- and high-pressure weather systems. These changes influence wind speeds, temperature, and the track that storms follow, any of which may influence sea ice extent.
For example, during positive phases of the Antarctic Oscillation, the prevailing westerly winds that circle Antarctica strengthen and move southward. The change in winds can change the way ice is distributed among the various sectors. In addition, the strengthening of the westerlies isolates much of the continent and tends to have an overall cooling effect, but it causes dramatic warming on the Antarctic Peninsula, as warmer air from over the oceans to the north is drawn southward. The winds may drive the ice away from the coast in some areas and toward the coast in others. Thus, the same climate influence may lessen sea ice in some sectors and increase it in others.
Changes in the El Niño-Southern Oscillation Index (ENSO), an oscillation of ocean temperatures and surface air pressure in the tropical Pacific, can lead to a delayed response (three to four seasons later) in Antarctic sea ice extent. In general, El Niño leads to more ice in the Weddell Sea and less ice on the other side of the Antarctic Peninsula, while La Niña causes the opposite conditions.
Another atmospheric pattern of natural variability that appears to influence Antarctic sea ice is the periodic strengthening and weakening of something that meteorologists call “zonal wave three,” or ZW3. This pattern alternately strengthens winds that blow cold air away from Antarctica (toward the equator) and winds that bring warmer air from lower latitudes toward Antarctica. When southerly winds intensify, more cold air is pushed to lower latitudes, and sea ice tends to increase. The effect is most apparent in the Ross and Weddell Seas and near the Amery Ice Shelf.
As in the Arctic, the interaction of natural cycles is complex, and researchers continue to study how these forces work together to control the Antarctic sea ice extent.
Current Status and Trends
In September 2008, Antarctic sea ice peaked at 18.5 million square kilometers (7.14 million square miles), slightly below the monthly average for 1979-2000. The February 2009 minimum of Antarctic sea ice was also slightly below average, at 2.9 million square kilometers (about 1.1 million square miles).
Since 1979, the total annual Antarctic sea ice extent has increased about 1 percent per decade. Compared to the Arctic, the signal has been a "noisy" one, with wide year-to-year fluctuations relative to the trend. The largest summer minimum in the satellite record occurred in February 2003. The largest winter maximum occurred in September 2006. The 2006 maximum was interesting because it followed a February minimum that was the third lowest on record.
Unlike the Arctic, where the downward trend is consistent in all sectors, in all months, and in all seasons, the Antarctic picture is more complex. Based on data from 1979-2006, the annual trend for four of the five individual sectors was a very small positive one, but only in the Ross Sea was the increase statistically significant (greater than the natural year-to-year variability). On the other hand, ice extent decreased in the Bellingshausen/Amundsen Sea sector during the same period.
The variability in Antarctic sea ice patterns in different sectors and from year to year makes it difficult to predict how Antarctic sea ice extent could change as global warming from greenhouse gases continues to warm the Earth. Climate models predict that Antarctic sea ice will respond more slowly than Arctic sea ice to warming, but as temperatures continue to rise, a long-term decline is expected.
You might wonder why the negative trends in Arctic sea ice seem to be more important to climate scientists than the smaller increase in Antarctic ice. Part of the reason, of course, is simply that the size of the increase is much smaller and slightly less certain than the Arctic trend. Another reason, however, is that the complete summertime disappearance of Northern Hemisphere ice would be a dramatic departure from what has occurred throughout the satellite record and likely throughout recorded history. In the Antarctic, however, sea ice already melts almost completely each summer. Even if it completely disappeared in the summer, the impact on the Earth’s climate system would likely be much smaller than a similar disappearance of Arctic ice.
You might also wonder how Antarctic sea ice could be increasing, even a little bit, while global warming from greenhouse gases is raising the planet’s average surface temperature. It’s a question scientists are asking, too. One reason may be that other atmospheric changes are softening the influence of global warming on Antarctica. For example, the ozone hole that develops over Antarctica each spring actually intensifies a perpetual vortex of winds that circles the South Pole. The stronger this vortex becomes, the more isolated the Antarctic atmosphere becomes from the rest of the planet. In addition, ocean circulation in the Antarctic behaves differently than it does in the Arctic. Around Antarctica, warm water moves downward in the ocean’s water column, making sea ice melt from warm water less likely.
Impacts of Antarctic Sea Ice Loss
A study of warming in West Antarctica since the 1957 International Geophysical Year correlates widespread warming in West Antarctica with sea ice decline. Whether sea ice decline has led to warming temperatures on the continent, or whether both phenomena are caused by something else is not currently known.
One concern related to potential Antarctic sea ice loss is that sea ice may stabilize Antarctic ice shelves. Ice shelves are slabs of ice that partly rest on land and partly float. Ice shelves frequently calve icebergs, and this is a natural process, not necessarily a sign of climate change. But the rapid disintegration and retreat of an ice shelf (such as the collapse of the Larsen B shelf in 2001) is a warming signal. Although sea ice is too thin to physically buttress an ice shelf, intact sea ice may preserve cool conditions that stabilize an ice shelf because air currents passing over sea ice are cooler than air currents passing over open ocean. Sea ice may also suppress ocean waves that would otherwise flex the shelf and speed ice shelf breakup.
The interaction between sea ice loss and ice shelf retreat merits careful study because many ice shelves are fed by glaciers. When an ice shelf disintegrates, the glacier feeding it often accelerates. Because glacier acceleration introduces a new ice mass into the ocean, it can raise ocean level. So while sea ice melt does not directly lead to sea level rise, it could contribute to other processes that do, both in the Arctic and the Antarctic. Glacier acceleration has already been observed on the Antarctic Peninsula, although the accelerating glaciers in that region have so far had a negligible effect on ocean level.
Because of differences in geography and climate, the amount, location, and natural variability of sea ice in the Arctic and the Antarctic are different. Global warming and natural climate patterns may affect each hemisphere’s sea ice in different ways or at different rates. Within each hemisphere, sea ice can change substantially from day to day, month to month, and even over the course of a few years. Comparing conditions at only two points in time or examining trends over a short period is not sufficient to understand the impact of long-term climate change on sea ice. Scientists can only understand how sea ice is changing by comparing current conditions to long-term averages.
Since 1979, satellites have provided a consistent continuous record of sea ice. Through 2008, annual average sea ice extent in the Arctic fell by about 4.1 percent per decade relative to the 1979–2000 average. The amount of ice remaining at the end of summer declined even more dramatically—over 11.1 percent per decade. Declines are occurring in every geographic area, in every month, and every season. Natural variability and rising temperatures linked to global warming appear to have played a role in this decline. The Arctic may be ice-free in summer before the end of this century.
Antarctic sea ice trends are smaller and more complex. Through 2008, the total annual Antarctic sea ice extent increased about 1 percent per decade, but the trends were not consistent for all areas or all seasons. The variability in Antarctic sea ice patterns makes it harder for scientists to explain Antarctic sea ice trends and to predict how Southern Hemisphere sea ice may change as greenhouse gases continue to warm the Earth. Climate models do predict that Antarctic sea ice will respond more slowly than Arctic sea ice to warming, but as temperatures continue to rise, a long-term decline is expected.
- Cavalieri, D. J., and C. L. Parkinson (2008). Antarctic sea ice variability and trends, 1979–2006, Journal of Geophysical Research Oceans. 113, C07004.
- Comiso, J.C., Parkinson, C.L., Gersten, R., Stock, L. (2008). Accelerated decline in the Arctic sea ice cover. Geophysical Research Letters. 35, L01703.
- de la Mare, W.K. (1997). Abrupt mid-twentieth-century decline in Antarctic sea-ice extent from whaling records. Nature. 389, 57-60.
- Goosse, H., Lefebvre, W., de Montety, A., Crespin, E., and Orsi, A.H. (2008). Consistent past half-century trends in the atmosphere, the sea ice and the ocean at high southern latitudes. Climate Dynamics.
- Intergovernmental Panel on Climate Change. (2007). Summary for Policymakers. In: Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, M.L. Parry, O.F. Canziani, J.P. Palutikof, P.J. van der Linden, and C.E. Hanson, Eds., Cambridge, UK: Cambridge University Press, pp. 7-22.
- Mahoney, A.R., Barry, R.G., Smolyanitsky, V., Fetterer, F. (2008). Observed sea ice extent in the Russian Arctic, 1933–2006. Journal of Geophysical Research. 113, C11005.
- Meier, W.N., Stroeve, J., Fetterer, F. (2007). Whither Arctic sea ice? A clear signal of decline regionally, seasonally, and extending beyond the satellite record. Annals of Glaciology. 46(1), 428-434.
- National Snow and Ice Data Center:
- All About Sea Ice. Accessed March 6, 2009.
- Arctic Sea Ice Down to Second-Lowest Extent; Likely Record-Low Volume. Accessed March 6, 2009.
- Arctic Sea Ice News and Analysis. Accessed March 6, 2009.
- Frequently Asked Questions about Sea Ice. Accessed February 4, 2009.
- State of the Cryosphere. Accessed February 4, 2009.
- Overland, J.E., Spillane, M.C., Percival, D.B., Wang, M., Mofjeld, H.O. (2004). Seasonal and regional variation of Pan-Arctic surface air temperature over the instrumental record. American Meteorological Society. 17(17), 3263-3282.
- Parkinson, C.L. (1997). Earth from Above. University Science Books. Sausalito, California.
- Parkinson, C.L. (2000). Recent trend reversals in arctic sea ice extents: possible connection to the north Atlantic oscillation. Polar Geography. 31(1-2), 3-14.
- Parkinson, C.L., Cavalieri, D.J. (2008). Arctic sea ice variability and trends, 1979-2006. Journal of Geophysical Research. 113, C07003.
- Raphael, M.N. (2007). The influence of atmospheric zonal wave three on Antarctic sea ice variability. Journal of Geophysical Research. 112, D12112.
- Scambos, T.A., Bohlander, J.A., Shuman, C.A., Skvarca, P. (2004). Glacier acceleration and thinning after ice shelf collapse in the Larsen B embayment, Antarctica. Geophysical Research Letters. 31, L18402.
- Schiermeier, Q. (2006). A sea change. Nature. 439, 256-260.
- Serreze, M.C., Holland, M.K., Stroeve, J. (2007). Perspectives on the Arctic’s shrinking sea-ice cover. Science. 315(5818), 1533-1536.
- Steig, E.J., Schneider, D.P., Rutherford, S.D., Mann, M.E., Comiso, J.C., Shindell, D.T. (2009). Warming of the Antarctic ice-sheet surface since the 1957 International Geophysical Year. Nature. 457, 459-463.
- Yuan, X. (2004). ENSO-related impacts on Antarctic sea ice: a synthesis of phenomenon and mechanisms. Antarctic Science. 16(4), 415-425. | http://earthobservatory.nasa.gov/Features/SeaIce/printall.php | 13 |
78 | Science Fair Project Encyclopedia
The most famous ruler-and-compass problems have been proven impossible, in several cases by the results of Galois theory. In spite of these impossibility proofs, some mathematical novices persist in trying to solve these problems. Many of them fail to understand that many of these problems are trivially solvable provided that other geometric transformations are allowed: for example, squaring the circle is possible using geometric constructions, but not possible using ruler and compass alone.
Ruler and compass
The "ruler" and "compass" of ruler-and-compass constructions is an idealization of rulers and compasses in the real world:
- The ruler is infinitely long, but it has no markings on it and has only one edge (thus making it a straightedge instead of what we usually think of as a ruler). The only thing you can use it for is to draw a line segment between two points, or to extend an existing line.
- The compass can be opened arbitrarily wide, but (unlike most real compasses) it also has no markings on it. It can only be opened to widths you have already constructed.
Each construction must be exact. "Eyeballing" it (essentially looking at the construction and guessing at its accuracy, or using some form of measurement, such as the units of measure on a ruler) and getting close does not count as a solution.
Stated this way, ruler and compass constructions appear to be a parlor game, rather than a serious practical problem. Figuring out how to do any particular construction is an interesting puzzle, but the persistent interest is in the problems derived from what you can't do this way.
The three classical unsolved construction problems were:
- Squaring the circle: Drawing a square the same area as a given circle.
- Doubling the cube: Drawing a cube with twice the volume as a given cube.
- Trisecting the angle: Dividing a given angle into three smaller angles all of the same size.
For 2000 years people tried to find constructions within the limits set above, and failed. All three have now been proven under mathematical rules to be impossible.
The straight edge and compass give you the ability to produce ratios which are solutions to quadratic equations, but doubling the cube and trisecting the angle require ratios which are the solution to cubic equations, while squaring the circle requires a transcendental ratio. Curiously, origami (i.e. paper folding without any equipment) is more powerful and can be used to solve cubic equations, and thus solve two of the classical problems.
Constructible points and lengths
How do you prove something impossible? There are many different ways, but for these particular problems we carefully demarcate the limit of the possible, and show that solving them would require transgressing that limit.
Using a ruler and compass, you can impose coordinates on the plane. Draw two points, and draw the line through them. Call that the x-axis, and define the length between the two points to be one. One construction that you can do is draw perpendiculars, so draw a perpendicular to your x-axis, and call it your y-axis. We now have a Cartesian coordinate system on the plane.
You can identify a point (x, y) in the Euclidean plane with the complex number x + yi. In ruler-and-compass construction, one starts with a line segment of length one. If one can construct a given point on the complex plane, then one says that the point is constructible. By standard constructions of Euclidean geometry one can construct all complex numbers of the form x + yi with x and y rational numbers. More generally, using the same constructions, one can, given complex numbers a and b, construct a + b, a - b, a * b, and a / b. This shows that the constructible points form a field, which one treats as a subfield of the complex numbers. Moreover, one can show that, given a constructible number, one can construct its complex conjugate and square root.
The only way to construct points is as the intersection of two lines, of a line and a circle, or of two circles. Using the equations for lines and circles, one can show that the points at which they intersect lie in a quadratic extension of the smallest field F containing two points on the line, the center of the circle, and the radius of the circle. That is, they are of the form x + y √ k, where x, y, and k are in F.
Since the field of constructible points is closed under square roots, it contains all points that can be obtained by a finite sequence of quadratic extensions of the field of complex numbers with rational coefficients. By the above paragraph, one can show that any constructible point can be obtained by such a sequence of extensions. As a corollary of this, one finds that the minimal polynomial of a constructible point (and therefore of any constructible length) has degree a power of 2. In particular, any constructible point (or length) is an algebraic number.
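The "degree is a power of 2" corollary is easy to check for particular numbers with a computer algebra system; the following sketch uses SymPy (assumed to be available) and two standard examples.

```python
from sympy import sqrt, Rational, minimal_polynomial, Symbol, degree

x = Symbol('x')

# sqrt(2) + sqrt(3) is built from rationals by field operations and square roots,
# so it is constructible; its minimal polynomial has degree 4, a power of 2.
p = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(p, degree(p, x))                      # x**4 - 10*x**2 + 1, degree 4

# The cube root of 2 has minimal polynomial x**3 - 2, of degree 3.  Three is not a
# power of 2, which is the obstruction behind doubling the cube (see below).
q = minimal_polynomial(2 ** Rational(1, 3), x)
print(q, degree(q, x))                      # x**3 - 2, degree 3
```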
Squaring the circle
The most famous of these problems, "squaring the circle", involves constructing a square with the same area as a given circle using only ruler and compass.
Squaring the circle has been proved impossible, as it involves generating a transcendental ratio, namely 1:√π. Only algebraic ratios can be constructed with ruler and compass alone. The phrase "squaring the circle" is often used to mean "doing the impossible" for this reason.
Without the constraint of requiring solution by ruler and compass alone, the problem is easily soluble by a wide variety of geometric and algebraic means, and has been solved many times in antiquity.
Doubling the cube
Doubling the cube: using only ruler and compass, construct the side of a cube that has twice the volume of a cube with a given side. This is impossible because the cube root of 2, though algebraic, cannot be computed from integers by addition, subtraction, multiplication, division, and taking square roots. This follows because its minimal polynomial over the rationals has degree 3.
Trisecting the angle

Angle trisection: using only ruler and compass, construct an angle that is one-third of a given arbitrary angle. This requires taking the cube root of an arbitrary complex number with absolute value 1 and is likewise impossible.
Specifically, one can show that the angle of 60° cannot be trisected. If it could be trisected, then the minimal polynomial of cos(20°) would have to have degree a power of two.

Using the trigonometric identity cos(3α) = 4cos³(α) - 3cos(α) with α = 20°, and the fact that cos(60°) = 1/2, one sees that, letting y = cos(20°), 8y³ - 6y - 1 = 0, so, substituting x = 2y, x³ - 3x - 1 = 0. The minimal polynomial for x is a factor of this; but if x³ - 3x - 1 were not irreducible, it would have a rational root which, by the rational root theorem, must be 1 or -1, and neither of these is a root. Therefore the minimal polynomial for cos(20°) has degree three, so cos(20°) is not constructible and 60° cannot be trisected.
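Both steps of this argument can be checked mechanically. The sketch below verifies numerically that x = 2·cos(20°) satisfies x³ - 3x - 1 = 0 and, using SymPy (assumed to be available), that the polynomial does not factor over the rationals.

```python
from math import cos, radians
from sympy import Symbol, factor_list

# Numeric check: x = 2*cos(20 degrees) is a root of x^3 - 3x - 1.
x_num = 2 * cos(radians(20))
print(x_num**3 - 3 * x_num - 1)     # ~0, up to floating-point error

# Symbolic check: x^3 - 3x - 1 is irreducible over the rationals,
# so the minimal polynomial of 2*cos(20 degrees) has degree 3.
x = Symbol('x')
print(factor_list(x**3 - 3*x - 1))  # (1, [(x**3 - 3*x - 1, 1)])
```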
In the late 1980s, a television program was aired (on PBS) on the subject of optimum shapes for folded solar panels on satellites using the art of Origami as a basis for research. The lecturer, showing his folded conception for a deployable panel, also easily demonstrated trisecting an angle with a few folds of paper, thus making 'physical' construction of an 'unsolvable' problem possible.
Constructing regular polygons
Some regular polygons (e.g. a pentagon) are easy to construct with ruler and compass; others are not. This led to the question being posed: is it possible to construct all regular polygons with ruler and compass?
Carl Friedrich Gauss in 1796 showed that a regular n-sided polygon can be constructed with ruler and compass if the odd prime factors of n are distinct Fermat primes. Gauss conjectured that this condition was also necessary, but he offered no proof of this fact, which was proved by Pierre Wantzel in 1836. See constructible polygon.
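Gauss's criterion is easy to test by machine. The helper below is our own illustration, not part of the original article: it strips off each of the five known Fermat primes at most once, then any power of 2, and accepts n if nothing is left.

```python
FERMAT_PRIMES = (3, 5, 17, 257, 65537)   # the only Fermat primes currently known

def constructible(n: int) -> bool:
    """True if n is a power of 2 times a product of distinct known Fermat primes."""
    for p in FERMAT_PRIMES:   # each Fermat prime may divide n at most once
        if n % p == 0:
            n //= p
    while n % 2 == 0:         # any power of 2 is allowed
        n //= 2
    return n == 1

print([n for n in range(3, 51) if constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48]
```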
Constructing with only ruler or only compass
It is possible (according to the Mohr-Mascheroni theorem) to construct anything with just a compass that can be constructed with ruler and compass. It is impossible to take a square root with just a ruler, so some things cannot be constructed with a ruler that can be constructed with a compass; but (by the Poncelet-Steiner theorem) given a single circle and its center, they can be constructed.
- Simon Plouffe. The Computation of Certain Numbers Using a Ruler and Compass. Journal of Integer Sequences, Vol. 1 (1998), Article 98.1.3
- Constructible polygon
- Interactive geometry software allows you to create and manipulate Ruler-and-compass constructions.
- Mohr-Mascheroni theorem
- Poncelet-Steiner theorem
- Online ruler-and-compass construction tool
- Squaring the circle
- Impossibility of squaring the circle
- Doubling the cube: http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Doubling_the_cube.html
- Angle trisection
- Regular polygon constructions
- Simon Plouffe's use of ruler and compass as a computer
- Why Gauss could not have proved necessity of constructible regular polygons
- Construction with the Compass Only
- Angle Trisection by Archimedes (requires Java)
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Ruler-and-compass_construction | 13
67 | Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates. This derivation justifies the so-called "logical" interpretation of probability. As the laws of probability derived by Cox's theorem are applicable to any proposition, logical probability is a type of Bayesian probability. Other forms of Bayesianism, such as the subjective interpretation, are given other justifications.
Cox's assumptions
Cox wanted his system to satisfy the following conditions:
- Divisibility and comparability – The plausibility of a statement is a real number and is dependent on information we have related to the statement.
- Common sense – Plausibilities should vary sensibly with the assessment of plausibilities in the model.
- Consistency – If the plausibility of a statement can be derived in many ways, all the results must be equal.
The postulates as originally stated by Cox were not mathematically rigorous (although better than the informal description above), e.g., as noted by Halpern. However, it appears to be possible to augment them with various mathematical assumptions made either implicitly or explicitly by Cox to produce a valid proof.
Cox's axioms and functional equations are:
- The plausibility of a proposition determines the plausibility of the proposition's negation; either decreases as the other increases. Because "a double negative is an affirmative", this becomes the functional equation f(f(x)) = x, saying that the function f that maps the probability of a proposition to the probability of the proposition's negation is an involution, i.e., it is its own inverse.
- The plausibility of the conjunction [A & B] of two propositions A, B, depends only on the plausibility of B and that of A given that B is true. (From this Cox eventually infers that conjunction of plausibilities is associative, and then that it may as well be ordinary multiplication of real numbers.) Because of the associative nature of the "and" operation in propositional logic, this becomes a functional equation saying that the function g such that w(A, B | C) = g(w(A | C), w(B | A, C)) is an associative binary operation. All strictly increasing associative binary operations on the real numbers are isomorphic to multiplication of numbers in the interval [0, 1]. This function therefore may be taken to be multiplication.
- Suppose [A & B] is equivalent to [C & D]. If we acquire new information A and then acquire further new information B, and update all probabilities each time, the updated probabilities will be the same as if we had first acquired new information C and then acquired further new information D. In view of the fact that multiplication of probabilities can be taken to be ordinary multiplication of real numbers, this becomes the functional equation x f(f(y)/x) = y f(f(x)/y), where f is as above.
Cox's theorem implies that any plausibility model that meets the postulates is equivalent to the subjective probability model, i.e., can be converted to the probability model by rescaling.
Implications of Cox's postulates
The laws of probability derivable from these postulates are the following. Here w(A|B) is the "plausibility" of the proposition A given B, and m is some positive number. Further, A^c represents the absolute complement of A.
- Certainty is represented by w(A|B) = 1.
- w^m(A|B) + w^m(A^c|B) = 1
- w(A, B|C) = w(A|C) w(B|A, C) = w(B|C) w(A|B, C).
It is important to note that the postulates imply only these general properties. These are equivalent to the usual laws of probability assuming some conventions, namely that the scale of measurement is from zero to one, and the plausibility function, conventionally denoted P or Pr, is equal to w^m. (We could have equivalently chosen to measure probabilities from one to infinity, with infinity representing certain falsehood.) With these conventions, we obtain the laws of probability in a more familiar form:
- Certain truth is represented by Pr(A|B) = 1, and certain falsehood by Pr(A|B) = 0.
- Pr(A|B) + Pr(Aᶜ|B) = 1
- Pr(A, B|C) = Pr(A|C) Pr(B|A, C) = Pr(B|C) Pr(A|B, C).
Rule 2 is a rule for negation, and rule 3 is a rule for conjunction. Given that any proposition containing conjunction, disjunction, and negation can be equivalently rephrased using conjunction and negation alone (by De Morgan's laws), we can now handle any compound proposition.
The laws thus derived yield finite additivity of probability, but not countable additivity. The measure-theoretic formulation of Kolmogorov assumes that a probability measure is countably additive. This slightly stronger condition is necessary for the proof of certain theorems.
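As an illustrative aside (not part of Cox's derivation), the negation and conjunction rules in their familiar form can be checked against any toy probability model. The sketch below is a hypothetical example using two fair dice and probabilities obtained by counting outcomes.

```python
from itertools import product

# Toy outcome space: two fair six-sided dice (a hypothetical example).
outcomes = list(product(range(1, 7), repeat=2))

def pr(event, given=None):
    """Conditional probability Pr(event | given), computed by counting outcomes."""
    pool = outcomes if given is None else [o for o in outcomes if given(o)]
    return sum(1 for o in pool if event(o)) / len(pool)

A = lambda o: o[0] + o[1] == 7        # the dice sum to 7
B = lambda o: o[0] % 2 == 0           # the first die shows an even number
not_A = lambda o: not A(o)
A_and_B = lambda o: A(o) and B(o)

# Rule 2: Pr(A|C) + Pr(not-A|C) = 1  (here C is the trivial background information)
assert abs(pr(A) + pr(not_A) - 1) < 1e-12

# Rule 3: Pr(A, B|C) = Pr(A|C) Pr(B|A, C) = Pr(B|C) Pr(A|B, C)
assert abs(pr(A_and_B) - pr(A) * pr(B, given=A)) < 1e-12
assert abs(pr(A_and_B) - pr(B) * pr(A, given=B)) < 1e-12
print("negation and conjunction rules hold for this model")
```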
Interpretation and further discussion
Cox's theorem has come to be used as one of the justifications for the use of Bayesian probability theory. For example, in Jaynes it is discussed in detail in chapters 1 and 2 and is a cornerstone for the rest of the book. Probability is interpreted as a formal system of logic, the natural extension of Aristotelian logic (in which every statement is either true or false) into the realm of reasoning in the presence of uncertainty.
It has been debated to what degree the theorem excludes alternative models for reasoning about uncertainty. For example, if certain "unintuitive" mathematical assumptions were dropped then alternatives could be devised, e.g., an example provided by Halpern. However Arnborg and Sjödin suggest additional "common sense" postulates, which would allow the assumptions to be relaxed in some cases while still ruling out the Halpern example. Other approaches were devised by Hardy or Dupré and Tipler.
The original formulation of Cox's theorem is in Cox (1946), which is extended with additional results and more discussion in Cox (1961). Jaynes cites Abel (1826) for the first known use of the associativity functional equation. Aczél (1966) provides a long proof of the "associativity equation" (pages 256–267). Jaynes (2003, p. 27) reproduces the shorter proof by Cox in which differentiability is assumed.
References
- Stefan Arnborg and Gunnar Sjödin, On the foundations of Bayesianism, Preprint: Nada, KTH (1999) — ftp://ftp.nada.kth.se/pub/documents/Theory/Stefan-Arnborg/06arnborg.ps — ftp://ftp.nada.kth.se/pub/documents/Theory/Stefan-Arnborg/06arnborg.pdf
- Stefan Arnborg and Gunnar Sjödin, A note on the foundations of Bayesianism, Preprint: Nada, KTH (2000a) — ftp://ftp.nada.kth.se/pub/documents/Theory/Stefan-Arnborg/fobshle.ps — ftp://ftp.nada.kth.se/pub/documents/Theory/Stefan-Arnborg/fobshle.pdf
- Stefan Arnborg and Gunnar Sjödin, "Bayes rules in finite models," in European Conference on Artificial Intelligence, Berlin, (2000b) — ftp://ftp.nada.kth.se/pub/documents/Theory/Stefan-Arnborg/fobc1.ps — ftp://ftp.nada.kth.se/pub/documents/Theory/Stefan-Arnborg/fobc1.pdf
- Joseph Y. Halpern, "A counterexample to theorems of Cox and Fine," Journal of AI research, 10, 67–85 (1999) — http://www.cs.washington.edu/research/jair/abstracts/halpern99a.html
- Joseph Y. Halpern, "Technical Addendum, Cox's theorem Revisited," Journal of AI research, 11, 429–435 (1999) — http://www.cs.washington.edu/research/jair/abstracts/halpern99b.html
- Edwin Thompson Jaynes, Probability Theory: The Logic of Science, Cambridge University Press (2003). — preprint version (1996) at http://omega.albany.edu:8008/JaynesBook.html; Chapters 1 to 3 of published version at http://bayes.wustl.edu/etj/prob/book.pdf
- Michael Hardy, "Scaled Boolean algebras", Advances in Applied Mathematics, August 2002, pages 243–292 (or preprint); Hardy has said, "I assert there that I think Cox's assumptions are too strong, although I don't really say why. I do say what I would replace them with." (The quote is from a Wikipedia discussion page, not from the article.)
- Dupré, Maurice J., Tipler, Frank T. New Axioms For Bayesian Probability, Bayesian Analysis (2009), Number 3, pp. 599-606
- R. T. Cox, "Probability, Frequency, and Reasonable Expectation," Am. Jour. Phys., 14, 1–13, (1946).
- R. T. Cox, The Algebra of Probable Inference, Johns Hopkins University Press, Baltimore, MD, (1961).
- Niels Henrik Abel "Untersuchung der Functionen zweier unabhängig veränderlichen Gröszen x und y, wie f(x, y), welche die Eigenschaft haben, dasz f[z, f(x,y)] eine symmetrische Function von z, x und y ist.", Jour. Reine u. angew. Math. (Crelle's Jour.), 1, 11–15, (1826).
- János Aczél, Lectures on Functional Equations and their Applications, Academic Press, New York, (1966).
- Terrence L. Fine, Theories of Probability; An examination of foundations, Academic Press, New York, (1973).
- Kevin S. Van Horn, "Constructing a logic of plausible inference: a guide to Cox’s theorem", International Journal of Approximate Reasoning, Volume 34, Issue 1, September 2003, Pages 3–24. (Or through Citeseer page.)
In mathematics and the arts, two quantities are in the golden ratio if the ratio between the sum of those quantities and the larger one is the same as the ratio between the larger one and the smaller. The golden ratio is approximately 1.6180339887.
At least since the Renaissance, many artists and architects have proportioned their works to approximate the golden ratio—especially in the form of the golden rectangle, in which the ratio of the longer side to the shorter is the golden ratio—believing this proportion to be aesthetically pleasing. Mathematicians have studied the golden ratio because of its unique and interesting properties.
The golden ratio can be expressed as a mathematical constant, usually denoted by the Greek letter φ (phi). The figure of a golden section illustrates the geometric relationship that defines this constant. Expressed algebraically, for quantities a > b > 0: (a + b)/a = a/b = φ.
Other names frequently used for or closely related to the golden ratio are golden section (Latin: sectio aurea), golden mean, golden number, and the Greek letter phi (φ). Other terms encountered include extreme and mean ratio, medial section, divine proportion, divine section (Latin: sectio divina), golden proportion, golden cut, and mean of Phidias.
Two quantities (positive numbers) a and b, with a > b, are said to be in the golden ratio if
(a + b)/a = a/b = φ.
This equation unambiguously defines φ.
The right equation shows that a = bφ, which can be substituted in the left part, giving
(bφ + b)/(bφ) = bφ/b.
Cancelling b yields
(φ + 1)/φ = φ.
Multiplying both sides by φ and rearranging terms leads to:
φ² − φ − 1 = 0.
The only positive solution to this quadratic equation is
φ = (1 + √5)/2 ≈ 1.6180339887.
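As a quick numerical check (an illustration, not part of the source derivation), the positive root can be computed and tested against the defining equation:

```python
import math

phi = (1 + math.sqrt(5)) / 2             # positive root of x² − x − 1 = 0

print(phi)                               # 1.618033988749895
assert abs(phi**2 - (phi + 1)) < 1e-12   # φ² = φ + 1
assert abs(1 / phi - (phi - 1)) < 1e-12  # 1/φ = φ − 1
```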
Ancient Greek mathematicians first studied what we now call the golden ratio because of its frequent appearance in geometry. The ratio is important in the geometry of regular pentagrams and pentagons. The Greeks usually attributed discovery of the ratio to Pythagoras or his followers. The regular pentagram, which has a regular pentagon inscribed within it, was the Pythagoreans' symbol.
Euclid's Elements (Greek: Στοιχεῖα) provides the first known written definition of what is now called the golden ratio: "A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the less." Euclid explains a construction for cutting (sectioning) a line "in extreme and mean ratio", i.e. the golden ratio. Throughout the Elements, several propositions (theorems in modern terminology) and their proofs employ the golden ratio. Some of these propositions show that the golden ratio is an irrational number.
The name "extreme and mean ratio" was the principal term used from the 3rd century BC until about the 18th century.
The modern history of the golden ratio starts with Luca Pacioli's Divina Proportione of 1509, which captured the imagination of artists, architects, scientists, and mystics with the properties, mathematical and otherwise, of the golden ratio.
The first known approximation of the (inverse) golden ratio by a decimal fraction, stated as "about 0.6180340," was written in 1597 by Prof. Michael Maestlin of the University of Tübingen in a letter to his former student Johannes Kepler.
Since the twentieth century, the golden ratio has been represented by the Greek letter φ (phi, after Phidias, a sculptor who is said to have employed it) or less commonly by τ (tau, the first letter of the ancient Greek root τομή—meaning cut).
The first and most influential of these was De Divina Proportione by Luca Pacioli, a three-volume work published in 1509. Pacioli, a Franciscan friar, was known mostly as a mathematician, but he was also trained and keenly interested in art. De Divina Proportione explored the mathematics of the golden ratio. Though it is often said that Pacioli advocated the golden ratio's application to yield pleasing, harmonious proportions, Livio points out that that interpretation has been traced to an error in 1799, and that Pacioli actually advocated the Vitruvian system of rational proportions. Pacioli also saw Catholic religious significance in the ratio, which led to his work's title. Containing illustrations of regular solids by Leonardo Da Vinci, Pacioli's longtime friend and collaborator, De Divina Proportione was a major influence on generations of artists and architects alike.
Some studies of the Acropolis, including the Parthenon, conclude that many of its proportions approximate the golden ratio. The Parthenon's facade as well as elements of its facade and elsewhere can be circumscribed by golden rectangles. To the extent that classical buildings or their elements are proportioned according to the golden ratio, this might indicate that their architects were aware of the golden ratio and consciously employed it in their designs. Alternatively, it is possible that the architects used their own sense of good proportion, and that this led to some proportions that closely approximate the golden ratio. On the other hand, such retrospective analyses can always be questioned on the ground that the investigator chooses the points from which measurements are made or where to superimpose golden rectangles, and that these choices affect the proportions observed.
Some scholars deny that the Greeks had any aesthetic association with the golden ratio. For example, Midhat J. Gazalé says, "It was not until Euclid, however, that the golden ratio's mathematical properties were studied. In the Elements (308 B.C.) the Greek mathematician merely regarded that number as an interesting irrational number, in connection with the middle and extreme ratios. Its occurrence in regular pentagons and decagons was duly observed, as well as in the dodecahedron (a regular polyhedron whose twelve faces are regular pentagons). It is indeed exemplary that the great Euclid, contrary to generations of mystics who followed, would soberly treat that number for what it is, without attaching to it other than its factual properties." And Keith Devlin says, "Certainly, the oft repeated assertion that the Parthenon in Athens is based on the golden ratio is not supported by actual measurements. In fact, the entire story about the Greeks and golden ratio seems to be without foundation. The one thing we know for sure is that Euclid, in his famous textbook Elements, written around 300 B.C., showed how to calculate its value." Near-contemporary sources like Vitruvius exclusively discuss proportions that can be expressed in whole numbers, i.e. commensurate as opposed to irrational proportions.
A geometrical analysis of the Great Mosque of Kairouan reveals a consistent application of the golden ratio throughout the design, according to Boussora and Mazouz. It is found in the overall proportion of the plan and in the dimensioning of the prayer space, the court, and the minaret. Boussora and Mazouz also examined earlier archaeological theories about the mosque, and demonstrate the geometric constructions based on the golden ratio by applying these constructions to the plan of the mosque to test their hypothesis.
The Swiss architect Le Corbusier, famous for his contributions to the modern international style, centered his design philosophy on systems of harmony and proportion. Le Corbusier's faith in the mathematical order of the universe was closely bound to the golden ratio and the Fibonacci series, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages and the learned."
Le Corbusier explicitly used the golden ratio in his Modulor system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's "Vitruvian Man", the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture. In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. He took Leonardo's suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system. Le Corbusier's 1927 Villa Stein in Garches exemplified the Modulor system's application. The villa's rectangular ground plan, elevation, and inner structure closely approximate golden rectangles.
Another Swiss architect, Mario Botta, bases many of his designs on geometric figures. Several private houses he designed in Switzerland are composed of squares and circles, cubes and cylinders. In a house he designed in Origlio, the golden ratio is the proportion between the central section and the side sections of the house.
Leonardo da Vinci's illustrations in De Divina Proportione (On the Divine Proportion) and his views that some bodily proportions exhibit the golden ratio have led some scholars to speculate that he incorporated the golden ratio in his own paintings. Some suggest that his Mona Lisa, for example, employs the golden ratio in its geometric equivalents. Whether Leonardo proportioned his paintings according to the golden ratio has been the subject of intense debate. The secretive Leonardo seldom disclosed the bases of his art, and retrospective analysis of the proportions in his paintings can never be conclusive.
Salvador Dalí explicitly used the golden ratio in his masterpiece, The Sacrament of the Last Supper. The dimensions of the canvas are a golden rectangle. A huge dodecahedron, with edges in golden ratio to one another, is suspended above and behind Jesus and dominates the composition.
Mondrian used the golden section extensively in his geometrical paintings.
A statistical study of 565 works of art by various great painters, performed in 1999, found that these artists had not used the golden ratio in the size of their canvases. The study concluded that the average ratio of the two sides of the paintings studied is 1.34, with averages for individual artists ranging from 1.04 (Goya) to 1.46 (Bellini).
According to Jan Tschichold, "There was a time when deviations from the truly beautiful page proportions 2:3, 1:√3, and the Golden Section were rare. Many books produced between 1550 and 1770 show these proportions exactly, to within half a millimetre."
Studies by psychologists, starting with Fechner, have been devised to test the idea that the golden ratio plays a role in human perception of beauty. While Fechner found a preference for rectangle ratios centered on the golden ratio, later attempts to carefully test such a hypothesis have been, at best, inconclusive.
James Tenney reconceived his piece For Ann (rising), which consists of up to twelve computer-generated upwardly glissandoing tones (see Shepard tone), as having each tone start so it is the golden ratio (in between an equal tempered minor and major sixth) below the previous tone, so that the combination tones produced by all consecutive tones are a lower or higher pitch already, or soon to be, produced.
Ernő Lendvai analyzes Béla Bartók's works as being based on two opposing systems, that of the golden ratio and the acoustic scale, though other music scholars reject that analysis. In Bartok's Music for Strings, Percussion and Celesta the xylophone progression occurs at the intervals 1:2:3:5:8:5:3:2:1. French composer Erik Satie used the golden ratio in several of his pieces, including Sonneries de la Rose+Croix. His use of the ratio gave his music an otherworldly symmetry.
The golden ratio is also apparent in the organisation of the sections in the music of Debussy's Image, Reflections in Water, in which "the sequence of keys is marked out by the intervals 34, 21, 13 and 8, and the main climax sits at the phi position."
The musicologist Roy Howat has observed that the formal boundaries of La Mer correspond exactly to the golden section. Trezise finds the intrinsic evidence "remarkable," but cautions that no written or reported evidence suggests that Debussy consciously sought such proportions. Also, many works of Chopin, mainly Etudes (studies) and Nocturnes, are formally based on the golden ratio. This results in the biggest climax of both musical expression and technical difficulty after about 2/3 of the piece.
This Binary Universe, an experimental album by Brian Transeau (aka BT), includes a track entitled "1.618" in homage to the golden ratio. The track features musical versions of the ratio and the accompanying video displays various animated versions of the golden mean.
Pearl Drums positions the air vents on its Masters Premium models based on the golden ratio. The company claims that this arrangement improves bass response and has applied for a patent on this innovation.
In the opinion of author Leon Harkleroad, "Some of the most misguided attempts to link music and mathematics have involved Fibonacci numbers and the related golden ratio."
The negative root of the quadratic equation for φ (the "conjugate root") is (1 − √5)/2 ≈ −0.618. The absolute value of this quantity (≈ 0.618) corresponds to the length ratio taken in reverse order (shorter segment length over longer segment length, b/a), and is sometimes referred to as the golden ratio conjugate. It is denoted here by the capital Phi (Φ):
Φ = 1/φ ≈ 0.6180339887.
Alternatively, Φ can be expressed as
Φ = φ − 1 = 1.6180339887… − 1 = 0.6180339887…
This illustrates the unique property of the golden ratio among positive numbers, that
1/φ = φ − 1,
or its inverse:
1/Φ = Φ + 1.
Irrationality of the golden ratio is immediate from the properties of the Euclidean algorithm to compute the greatest common divisor of a pair of positive integers.
In its original form, the algorithm repeatedly replaces the larger of the two numbers by their difference, until the numbers become equal, which must happen after finitely many steps. The only property needed will be that this number of steps for an initial pair (a,b) only depends on the ratio a : b. This is because, if both numbers are multiplied by a common positive factor λ so as to obtain another pair (λa,λb) with the same ratio, then this factor does not affect the comparison: say if a > b then also λa > λb; moreover the difference λa − λb = λ(a − b) is also multiplied by the same factor. Therefore comparing the algorithm applied to (a,b) and to (λa,λb) after one step, the pairs will still have the same ratio, and this relation will persist until the algorithm terminates.
Now suppose the golden ratio were a rational number, that is, that there are positive integers a and b with φ = a/b. Applied to the integer pair (a, b), the algorithm must terminate after finitely many steps; but the number of steps depends only on the ratio of the pair, and a pair of quantities in the golden ratio yields another pair in the same ratio after each step (as explained below), so the algorithm applied to such a pair can never terminate. This is a contradiction, so φ cannot be rational.
Another way to express this argument is as follows: the formulation of the Euclidean algorithm does not require the operands to be integer numbers; they could be real numbers or indeed any pair of quantities that can be compared and subtracted. For instance one could operate on lengths in a geometric figure (without requiring a unit of measure to express everything in numbers); in fact this is a point of view already familiar to the ancient Greeks. In this setting however there is no longer a guarantee that the algorithm will terminate as it will for integer numbers (which cannot descend further than the number 1). One does retain the property mentioned above that pairs having the same ratio will behave similarly throughout the algorithm (even if it should go on forever), which is directly related to the algorithm not requiring any unit of measure. Now examples can be given of ratios for which this form of the algorithm will never terminate, and the golden ratio is the simplest possible such example: by construction a pair with the golden ratio will give another pair with the same ratio after just one step, so that it will go on similarly forever.
Note that apart from the Euclidean algorithm, this proof does not require any number theoretic facts, such as prime factorisation or even the fact that any fraction can be expressed in lowest terms.
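A tiny numerical sketch (an illustration, not a proof) shows the behaviour described above: one subtractive step applied to a pair in the golden ratio yields another pair in the same ratio, so the procedure makes no progress toward termination.

```python
import math

phi = (1 + math.sqrt(5)) / 2

a, b = phi, 1.0
for step in range(1, 6):
    a, b = max(a, b) - min(a, b), min(a, b)   # replace the larger number by the difference
    a, b = max(a, b), min(a, b)               # reorder so that a >= b
    print(step, round(a / b, 9))              # the ratio stays ≈ 1.618033989 at every step
```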
Returning to the supposed fraction, write φ = a/b with positive integers a and b in lowest terms; since φ satisfies φ² = φ + 1, this gives (a/b)² = a/b + 1. Multiplying both sides by b² leads to:
a² = ab + b².
Subtracting ab from both sides and factoring out a gives:
a(a − b) = b².
Finally, dividing both sides by b(a − b) yields:
a/b = b/(a − b).
This last equation indicates that a/b could be further reduced to b/(a − b), where a − b is still positive (because φ > 1 implies a > b), which is an equivalent fraction with smaller numerator and denominator. But since a/b was already given in lowest terms, this is a contradiction. Thus this number cannot be so written, and is therefore irrational.
The defining relation φ = 1 + 1/φ can be expanded recursively to give a continued fraction for the golden ratio:
φ = [1; 1, 1, 1, …] = 1 + 1/(1 + 1/(1 + 1/(1 + ⋯)))
and its reciprocal:
1/φ = [0; 1, 1, 1, …] = 0 + 1/(1 + 1/(1 + 1/(1 + ⋯))).
The equation φ² = 1 + φ likewise produces the continued square root form:
φ = √(1 + √(1 + √(1 + √(1 + ⋯)))).
These correspond to the fact that the length of the diagonal of a regular pentagon is φ times the length of its side, and similar relations in a pentagram.
If an approximation x agrees with φ to n decimal places, then the next iterate agrees with it to roughly 2n decimal places (the convergence is quadratic; see the Newton iterations below).
An equation derived in 1994 connects the golden ratio to the Number of the Beast (666):
φ = −2 sin(666°) = −2 cos(6 · 6 · 6°).
There is no known general algorithm to arrange a given number of nodes evenly on a sphere, for any of several definitions of even distribution (see, for example, Thomson problem). However, a useful approximation results from dividing the sphere into parallel bands of equal area and placing one node in each band at longitudes spaced by a golden section of the circle, i.e. 360°/φ ≅ 222.5°. This method was used to arrange the 1500 mirrors of the student-participatory satellite Starshine-3.
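A common way to implement this idea is the golden-angle (or "Fibonacci") spiral lattice sketched below. This is an illustrative sketch, not the actual Starshine-3 procedure; advancing the longitude by the golden angle of about 137.5° is equivalent to the 222.5° spacing quoted above, measured in the opposite sense.

```python
import math

def golden_spiral_points(n):
    """Place n points roughly evenly on the unit sphere using the golden angle."""
    phi = (1 + math.sqrt(5)) / 2
    golden_angle = 2 * math.pi * (1 - 1 / phi)   # ≈ 2.39996 rad ≈ 137.5°
    points = []
    for k in range(n):
        z = 1 - (2 * k + 1) / n                  # bands of equal area in z
        r = math.sqrt(1 - z * z)
        theta = k * golden_angle
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

print(golden_spiral_points(4))
```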
If angle BCX = α, then XCA = α because of the bisection, and CAB = α because of the similar triangles; ABC = 2α from the original isosceles symmetry, and BXC = 2α by similarity. The angles in a triangle add up to 180°, so 5α = 180, giving α = 36°. So the angles of the golden triangle are thus 36°-72°-72°. The angles of the remaining obtuse isosceles triangle AXC (sometimes called the golden gnomon) are 36°-36°-108°.
Suppose XB has length 1, and we call BC length φ. Because of the isosceles triangles XC = BC and XA = XC, these are also length φ. Length AC = AB, and therefore equals φ + 1. But triangle ABC is similar to triangle CXB, so AC/BC = BC/BX, and so AC also equals φ². Thus φ² = φ + 1, confirming that φ is indeed the golden ratio.
The pentagram includes ten isosceles triangles: five acute and five obtuse isosceles triangles. In all of them, the ratio of the longer side to the shorter side is φ. The acute triangles are golden triangles. The obtuse isosceles triangles are golden gnomon.
Consider a triangle with sides of lengths a, b, and c in decreasing order. Define the "scalenity" of the triangle to be the smaller of the two ratios a/b and b/c. The scalenity is always less than φ and can be made as close as desired to φ.
The mathematics of the golden ratio and of the Fibonacci sequence are intimately interconnected. The Fibonacci sequence is:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …
The closed-form expression for the Fibonacci sequence involves the golden ratio:
F(n) = (φⁿ − (1 − φ)ⁿ)/√5.
The golden ratio is the limit of the ratios of successive terms of the Fibonacci sequence (or any Fibonacci-like sequence):
F(n + 1)/F(n) → φ as n → ∞.
Therefore, if a Fibonacci number is divided by its immediate predecessor in the sequence, the quotient approximates φ; e.g., 987/610 ≈ 1.6180327868852. These approximations are alternately lower and higher than φ, and converge on φ as the Fibonacci numbers increase, and:
Furthermore, the successive powers of φ obey the Fibonacci recurrence:
φⁿ⁺¹ = φⁿ + φⁿ⁻¹.
This identity allows any polynomial in φ to be reduced to a linear expression. For example:
3φ³ − 5φ² + 4 = 3(φ² + φ) − 5φ² + 4 = 3[(φ + 1) + φ] − 5(φ + 1) + 4 = φ + 2 ≈ 3.618.
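This reduction is easy to mechanize. The sketch below (an illustration, not from the source) represents a number x + y·φ as an integer pair and uses φ² = φ + 1 to multiply, so any power of φ collapses to a linear expression whose coefficients are Fibonacci numbers.

```python
def mul(p, q):
    """Multiply (a + b·φ)(c + d·φ) using φ² = φ + 1; return (constant, coefficient of φ)."""
    a, b = p
    c, d = q
    return (a * c + b * d, a * d + b * c + b * d)

def phi_power(n):
    """Return φⁿ as a pair (constant, coefficient of φ)."""
    result, base = (1, 0), (0, 1)          # 1 and φ
    for _ in range(n):
        result = mul(result, base)
    return result

print(phi_power(10))   # (34, 55): φ¹⁰ = 34 + 55·φ, and 34, 55 are adjacent Fibonacci numbers
```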
The defining quadratic polynomial and the conjugate relationship lead to decimal values that have their fractional part in common with φ:
φ² = φ + 1 = 2.618…
1/φ = φ − 1 = 0.618…
The sequence of powers of φ contains these values 0.618…, 1.0, 1.618…, 2.618…; more generally, any power of φ is equal to the sum of the two immediately preceding powers:
φⁿ = φⁿ⁻¹ + φⁿ⁻².
As a result, one can easily decompose any power of φ into a multiple of φ and a constant. The multiple and the constant are always adjacent Fibonacci numbers. This leads to another property of the positive powers of φ:
If , then:
When the golden ratio is used as the base of a numeral system (see Golden ratio base, sometimes dubbed phinary or φ-nary), every integer has a terminating representation, despite φ being irrational, but every fraction has a non-terminating representation.
A quickly converging approximation can be obtained from the Babylonian method for computing √5: starting from an initial estimate x₁, iterate
xₙ₊₁ = (xₙ + 5/xₙ)/2
for n = 1, 2, 3, …, until the difference between xₙ and xₙ₋₁ becomes zero, to the desired number of digits; then φ = (1 + √5)/2.
The Babylonian algorithm for √5 is equivalent to Newton's method for solving the equation x² − 5 = 0. In its more general form, Newton's method can be applied directly to any algebraic equation, including the equation x² − x − 1 = 0 that defines the golden ratio. This gives an iteration that converges to the golden ratio itself,
xₙ₊₁ = (xₙ² + 1)/(2xₙ − 1),
for an appropriate initial estimate x₁ such as x₁ = 1. A slightly faster method is to rewrite the equation as x − 1 − 1/x = 0, in which case the Newton iteration becomes
xₙ₊₁ = (xₙ² + 2xₙ)/(xₙ² + 1).
These iterations all converge quadratically; that is, each step roughly doubles the number of correct digits. The golden ratio is therefore relatively easy to compute with arbitrary precision. The time needed to compute n digits of the golden ratio is proportional to the time needed to divide two n-digit numbers. This is considerably faster than known algorithms for the transcendental numbers π and e.
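A minimal sketch of the first Newton iteration above (an illustration, not a reference implementation) makes the quadratic convergence visible: the number of correct digits roughly doubles with each printed line.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60                 # work with 60 significant digits

x = Decimal(1)                         # initial estimate x₁ = 1
for _ in range(8):
    x = (x * x + 1) / (2 * x - 1)      # xₙ₊₁ = (xₙ² + 1)/(2xₙ − 1)
    print(x)                           # converges to 1.61803398874989484820458683436563811772...
```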
An easily programmed alternative using only integer arithmetic is to calculate two large consecutive Fibonacci numbers and divide them. The ratio of Fibonacci numbers F25001 and F25000, each over 5000 digits, yields over 10,000 significant digits of the golden ratio.
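The Fibonacci-based approach needs only integer arithmetic until the final division. The sketch below uses a much smaller, hypothetical index than the F25001/F25000 pair quoted above, computing the Fibonacci numbers by fast doubling.

```python
from decimal import Decimal, getcontext

def fib_pair(n):
    """Return (F(n), F(n+1)) by fast doubling, using exact integer arithmetic."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)
    c = a * (2 * b - a)          # F(2k)
    d = a * a + b * b            # F(2k+1)
    return (d, c + d) if n % 2 else (c, d)

getcontext().prec = 50
f_n, f_n1 = fib_pair(200)                 # F(200) and F(201)
print(Decimal(f_n1) / Decimal(f_n))       # φ, correct to the full 50-digit precision set above
```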
Millions of digits of φ are available. See the web page of Alexis Irlande for the first 17,000,000,000 digits.
Both Egyptian pyramids and those mathematical regular square pyramids that resemble them can be analyzed with respect to the golden ratio and other ratios.
A pyramid in which the apothem (slant height along the bisector of a face) is equal to φ times the semi-base (half the base width) is sometimes called a golden pyramid. The isosceles triangle that is the face of such a pyramid can be constructed from the two halves of a diagonally split golden rectangle (of size semi-base by apothem), joining the medium-length edges to make the apothem. The height of this pyramid is √φ times the semi-base (that is, the slope of the face is √φ); the square of the height is equal to the area of a face, φ times the square of the semi-base.
The medial right triangle of this "golden" pyramid (see diagram), with sides 1 : √φ : φ, is interesting in its own right, demonstrating via the Pythagorean theorem the relationship 1 + φ = φ², or equivalently φ = √(1 + φ). This "Kepler triangle" is the only right triangle proportion with edge lengths in geometric progression, just as the 3–4–5 triangle is the only right triangle proportion with edge lengths in arithmetic progression. The angle with tangent √φ corresponds to the angle that the side of the pyramid makes with respect to the ground, 51.827... degrees (51° 49' 38").
A nearly similar pyramid shape, but with rational proportions, is described in the Rhind Mathematical Papyrus (the source of a large part of modern knowledge of ancient Egyptian mathematics), based on the 3:4:5 triangle; the face slope corresponding to the angle with tangent 4/3 is 53.13 degrees (53 degrees and 8 minutes). The slant height or apothem is 5/3 or 1.666... times the semi-base. The Rhind papyrus has another pyramid problem as well, again with rational slope (expressed as run over rise). Egyptian mathematics did not include the notion of irrational numbers, and the rational inverse slope (run/rise, multiplied by a factor of 7 to convert to their conventional units of palms per cubit) was used in the building of pyramids.
Another mathematical pyramid with proportions almost identical to the "golden" one is the one with perimeter equal to 2π times the height, or h:b = 4:π. This triangle has a face angle of 51.854° (51°51'), very close to the 51.827° of the golden triangle. This pyramid relationship corresponds to the coincidental relationship √φ ≈ 4/π.
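These face inclinations are easy to check numerically; the sketch below compares the φ-based and π-based pyramids described above.

```python
import math

phi = (1 + math.sqrt(5)) / 2

golden_slope = math.degrees(math.atan(math.sqrt(phi)))   # face slope with tangent √φ
pi_slope = math.degrees(math.atan(4 / math.pi))          # face slope with h : b = 4 : π

print(round(golden_slope, 3))   # 51.827
print(round(pi_slope, 3))       # 51.854
```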
Egyptian pyramids very close in proportion to these mathematical pyramids are known.
One Egyptian pyramid is remarkably close to a "golden pyramid" – the Great Pyramid of Giza (also known as the Pyramid of Cheops or Khufu). Its slope of 51° 52' is extremely close to the "golden" pyramid inclination of 51° 50' and the π-based pyramid inclination of 51° 51'; other pyramids at Giza (Chephren, 52° 20', and Mycerinus, 50° 47') are also quite close. Whether the relationship to the golden ratio in these pyramids is by design or by accident remains controversial. Several other Egyptian pyramids are very close to the rational 3:4:5 shape.
Michael Rice asserts that principal authorities on the history of Egyptian architecture have argued that the Egyptians were well acquainted with the golden ratio and that it is part of mathematics of the Pyramids, citing Giedon (1957). Some recent historians of science have denied that the Egyptians had any such knowledge, contending rather that its appearance in an Egyptian building is the result of chance.
In 1859, the Pyramidologist John Taylor (1781-1864) claimed that in the Great Pyramid of Giza the golden ratio is represented by the ratio of the length of the face (the slope height), inclined at an angle θ to the ground, to half the length of the side of the square base, equivalent to the secant of the angle θ. The above two lengths were about 186.4 and 115.2 meters respectively. The ratio of these lengths is the golden ratio, accurate to more digits than either of the original measurements. Similarly, Howard Vyse, according to Matila Ghyka, reported the great pyramid height 148.2 m, and half-base 116.4 m, yielding 1.6189 for the ratio of slant height to half-base, again more accurate than the data variability.
Adding fuel to controversy over the architectural authorship of the Great Pyramid, Eric Temple Bell, mathematician and historian, claims that Egyptian mathematics would not have supported the ability to calculate the slant height of the pyramids, or the ratio to the height, except in the case of the 3:4:5 pyramid, since the 3:4:5 triangle was the only right triangle known to the Egyptians and they did not know the Pythagorean theorem nor any way to reason about irrationals such as π or φ.
Examples of disputed observations of the golden ratio include the following:
Traditional logic, also known as term logic, is a loose term for the logical tradition that originated with Aristotle and survived broadly unchanged until the advent of modern predicate logic in the late nineteenth century.
It can sometimes be difficult to understand philosophy before the period of Frege and Russell without an elementary grasp of the terminology and ideas that were assumed by all philosophers until then. This article provides a basic introduction to the traditional system, with suggestions for further reading.
The fundamental assumption behind the theory is that propositions are composed of two terms - whence the name "two-term theory" or "term logic" – and that the reasoning process is in turn built from propositions:
- The term is a part of speech representing something, but which is not true or false in its own right, as "man" or "mortal".
- The proposition consists of two terms, in which one term (the "predicate") is "affirmed" or "denied" of the other (the "subject"), and which is capable of truth or falsity.
- The syllogism is an inference in which one proposition (the "conclusion") follows of necessity from two others (the "premises").
A proposition may be universal or particular, and it may be affirmative or negative. Thus there are just four kinds of propositions:
- A-type: Universal and affirmative ("All men are mortal")
- I-type: Particular and affirmative ("Some men are philosophers")
- E-type: Universal and negative ("No philosophers are rich")
- O-type: Particular and negative ("Some men are not philosophers").
This was called the fourfold scheme of propositions. (The origin of the letters A, I, E, and O is explained below in the section on syllogistic maxims.) The syllogistic is a formal theory explaining which combinations of true premises yield true conclusions.
A term (Greek horos) is the basic component of the proposition. The original meaning of the horos and also the Latin terminus is "extreme" or "boundary". The two terms lie on the outside of the proposition, joined by the act of affirmation or denial.
For Aristotle, a term is simply a "thing", a part of a proposition. For early modern logicians like Arnauld (whose Port Royal Logic is the most well-known textbook of the period) it is a psychological entity like an "idea" or "concept". Mill thought it is a word. None of these interpretations are quite satisfactory. In asserting that something is a unicorn, we are not asserting anything of anything. Nor does "all Greeks are men" say that the ideas of Greeks are ideas of men, or that word "Greeks" is the word "men". A proposition cannot be built from real things or ideas, but it is not just meaningless words either. This is a problem about the meaning of language that is still not entirely resolved. (See the book by Prior below for an excellent discussion of the problem).
In term logic, a "proposition" is simply a form of language: a particular kind of sentence, in subject and predicate are combined, so as to assert something true or false. It is not a thought, or an abstract entity or anything. The word "propositio" is from the Latin, meaning the first premise of a syllogism. Aristotle uses the word premise (protasis) as a sentence affirming or denying one thing of another (AP 1. 1 24a 16), so a premise is also a form of words.
However, in modern philosophical logic, it now means what is asserted as the result of uttering a sentence, and is regarded as something peculiarly mental or intentional. Writers before Frege and Russell, such as Bradley, sometimes spoke of the "judgment" as something distinct from a sentence, but this is not quite the same. As a further confusion, the word "sentence" derives from the Latin, meaning an opinion or judgment, and so is equivalent to "proposition".
The quality of a proposition is whether it is affirmative (the predicate is affirmed of the subject) or negative (the predicate is denied of the subject). Thus "every man is a mortal" is affirmative, since "mortal" is affirmed of "man". "No men are immortals" is negative, since "immortal" is denied of "man".
The quantity of a proposition is whether it is universal (the predicate is affirmed or denied of "the whole" of the subject) or particular (the predicate is affirmed or denied of only "part of" the subject).
The distinction between singular and universal is fundamental to Aristotle's metaphysics, and not merely grammatical. A singular term for Aristotle is that which is of such a nature as to be predicated of only one thing, thus "Callias". (De Int 7). It is not predicable of more than one thing: "Socrates is not predicable of more than one subject, and therefore we do not say every Socrates as we say every man". (Metaphysics D 9, 1018 a4). It may feature as a grammatical predicate, as in the sentence "the person coming this way is Callias". But it is still a logical subject.
He contrasts it with "universal" (katholou - "of a whole"). Universal terms are the basic materials of Aristotle's logic, propositions containing singular terms do not form part of it at all. They are mentioned briefly in the De Interpretatione. Afterwards, in the chapters of the Prior Analytics where Aristotle methodically sets out his theory of the syllogism, they are entirely ignored.
The reason for this omission is clear. The essential feature of term logic is that, of the four terms in the two premises, one must occur twice. Thus
- All Greeks are men
- All men are mortal.
What is subject in one premise must be predicate in the other, and so it is necessary to eliminate from the logic any terms which cannot function both as subject and predicate. Singular terms do not function this way, so they are omitted from Aristotle's syllogistic.
In later versions of the syllogistic, singular terms were treated as universals. See for example (where it is clearly stated as received opinion) Part 2, chapter 3, of the Port Royal Logic. Thus
- All men are mortals
- All Socrates are men
- All Socrates are mortals
This is clearly awkward, and is a weakness exploited by Frege in his devastating attack on the system (from which, ultimately, it never recovered). See concept and object.
The famous syllogism "Socrates is a man ...", is frequently quoted as though from Aristotle. See for example Kapp, Greek Foundations of Traditional Logic, New York 1942, p.17, Copleston A history of Philosophy Vol. I. P. 277, Russell, A History of Western Philosophy London 1946 p. 218. In fact it is nowhere in the Organon. It is first mentioned by Sextus Empiricus (Hyp. Pyrrh. ii. 164).
- Main article: Syllogism
There can only be three terms in the syllogism, since the two terms in the conclusion are already in the premises, and one term is common to both premises. This leads to the following definitions:
- The predicate in the conclusion is called the major term, "P"
- The subject in the conclusion is called the minor term, "S"
- The common term is called the middle term "M"
- The premise containing the major term is called the major premise
- The premise containing the minor term is called the minor premise
The syllogism is always written major premise, minor premise, conclusion. Thus a syllogism of the form AII is written as
- A M-P All cats are carnivorous
- I S-M Some mammals are cats
- I S-P Some mammals are carnivorous
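As an illustrative aside (not part of the traditional doctrine), the validity of a mood such as AII in the first figure can be tested mechanically by interpreting terms as sets. The sketch below brute-forces all assignments over a small universe, so the absence of a counterexample there is only suggestive, not a general proof.

```python
from itertools import product

UNIVERSE = range(4)

def subsets(universe):
    """All subsets of a small universe, as frozensets."""
    items = list(universe)
    for bits in product([0, 1], repeat=len(items)):
        yield frozenset(x for x, b in zip(items, bits) if b)

def A(s, p):                  # "All S are P"
    return s <= p

def I(s, p):                  # "Some S are P"
    return bool(s & p)

def valid_AII_figure1():
    """Darii: All M are P, Some S are M, therefore Some S are P."""
    for M, P, S in product(subsets(UNIVERSE), repeat=3):
        if A(M, P) and I(S, M) and not I(S, P):
            return False      # a counterexample was found
    return True

print(valid_AII_figure1())    # True: no counterexample over this universe
```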
Mood and figure
The mood of a syllogism is distinguished by the quality and quantity of the two premises. There are eight valid moods: AA, AI, AE, AO, IA, EA, EI, OA.
The figure of a syllogism is determined by the position of the middle term. In figure 1, which Aristotle thought the most important, since it reflects our reasoning process most closely, the middle term is subject in the major, predicate in the minor. In figure 2, it is predicate in both premises. In figure 3, it is subject in both premises. In figure 4 (which Aristotle did not discuss, however), it is predicate in the major, subject in the minor. Thus
|               | Figure 1 | Figure 2 | Figure 3 | Figure 4 |
| Major premise | M-P      | P-M      | M-P      | P-M      |
| Minor premise | S-M      | S-M      | M-S      | M-S      |
Conversion and reduction
Conversion is the process of changing one proposition into another simply by re-arranging the terms. Simple conversion is a change which preserves the meaning of the proposition. Thus
- Some S is a P converts to Some P is an S
- No S are P converts to no P are S
Conversion per accidens involves changing the proposition into another which is implied by it, but is not the same. Thus
- All S are P converts to Some S are P
(Notice that for conversion per accidens to be valid, there is an existential assumption involved in "all S are P")
As explained, Aristotle thought that only in the first or perfect figure was the process of reasoning completely transparent. The validity of an imperfect syllogism is evident only when, by conversion of its premises, it can be turned into some mood of the first figure. This was called reduction by the scholastic philosophers.
It is easiest to explain the rules of reduction, using the so-called mnemonic lines first introduced by William of Shyreswood (1190 - 1249) in a manual written in the first half of the thirteenth century.
- Barbara, Celarent, Darii, Ferioque, prioris
- Cesare, Camestres, Festino, Baroco, secundae
- Tertia, Darapti, Disamis, Datisi, Felapton, Bocardo, Ferison, habet
- Quarta insuper addit Bramantip, Camenes, Dimaris, Fesapo, Fresison.
Each word represents the formula of a valid mood and is interpreted according to the following rules:
- The first three vowels indicate the quantity and quality of the three propositions, thus Barbara: AAA, Celarent, EAE and so on.
- The initial consonant of each formula after the first four indicates that the mood is to be reduced to that mood among the first four which has the same initial letter
- "s" immediately after a vowel signifies that the corresponding proposition is to be converted simply during reduction,
- "p" in the same position indicates that the proposition is to be converted partially or per accidens,
- "m" between the first two vowels of a formula signifies that the premises are to be transposed,
- "c" appearing after one of the first two vowels signifies that the premise is to be replaced by the negative of the conclusion for reduction per impossibile.
There are a number of maxims and verses associated with the syllogistic. Their origin is mostly unknown. For example
The letters A and I are taken from the vowels of the Latin affirmo (I affirm), and E and O from nego (I deny).
- Asserit A, negat E, sed universaliter ambae
- Asserit I, negat O, sed particulariter ambo
Shyreswood's version of the "Barbara" verses is as follows:
- Barbara celarent darii ferio baralipton
- Celantes dabitis fapesmo frisesomorum;
- Cesare campestres festino baroco; darapti
- Felapton disamis datisi bocardo ferison.
- Barbara, Celarent, Darii, Ferioque prioris
- Cesare, Camestres, Festino, Baroco secundae
- Tertia grande sonans recitat Darapti, Felapton
- Disamis, Datisi, Bocardo, Ferison.
- Quartae Sunt Bamalip, Calames, Dimatis, Fesapo, Fresison.
Decline of term logic
Term logic dominated logic throughout most of its history until the advent of modern or predicate logic a century ago, in the late nineteenth and early twentieth century, which led to its eclipse.
The decline was ultimately due to the superiority of the new logic in the mathematical reasoning for which it was designed. Term logic cannot, for example, explain the inference from "every car is a vehicle" to "every owner of a car is an owner of a vehicle", which is elementary in predicate logic. It is confined to syllogistic arguments, and cannot explain inferences involving multiple generality. Relations and identity must be treated as subject-predicate relations, which makes the identity statements of mathematics difficult to handle, and of course the singular term and singular proposition, which are essential to modern predicate logic, do not properly feature at all.
Note, however, that the decline was a protracted affair. It is simply not true that there was a brief "Frege Russell" period 1890-1910 in which the old logic vanished overnight. The process took more like 70 years. Even Quine's Methods of Logic devotes considerable space to the syllogistic, and Joyce's manual, whose final edition was in 1949, does not mention Frege or Russell at all.
The innovation of predicate logic led to an almost complete abandonment of the traditional system. It is customary to revile or disparage it in standard textbook introductions. However, it is not entirely in disuse. Term logic was still part of the curriculum in many Catholic schools until the late part of the twentieth century, and taught in places even today. More recently, some philosophers have begun work on a revisionist programme to reinstate some of the fundamental ideas of term logic. Their main complaint about modern logic is
- that Predicate Logic is in a sense unnatural, in that its syntax does not follow the syntax of the sentences that figure in our everyday reasoning. It is, as Quine acknowledges, "Procrustean", employing an artificial language of function and argument, quantifier and bound variable.
- that there are still embarrassing theoretical problems faced by Predicate Logic. Possibly the most serious are those of empty names and of identity statements.
Even orthodox and entirely mainstream philosophers such as Gareth Evans have voiced discontent:
- "I come to semantic investigations with a preference for homophonic theories; theories which try to take serious account of the syntactic and semantic devices which actually exist in the language ...I would prefer [such] a theory ... over a theory which is only able to deal with [sentences of the form "all A's are B's"] by "discovering" hidden logical constants ... The objection would not be that such [Fregean] truth conditions are not correct, but that, in a sense which we would all dearly love to have more exactly explained, the syntactic shape of the sentence is treated as so much misleading surface structure" Evans (1977)
Fred Sommers has designed a formal logic which he claims is consistent with our innate logical abilities, and which resolves the philosophical difficulties. See, for example, his seminal work The Logic of Natural Language. The problem, as Sommers says, is that "the older logic of terms is no longer taught and modern predicate logic is too difficult to be taught". School children a hundred years ago were taught a usable form of formal logic; today, in the information age, they are taught nothing.
- Joyce, G.H. Principles of Logic, 3rd edition, London 1949. This was a manual written for (Catholic) schools, probably in the early 1910s. It is splendidly out of date, there being no hint even of the existence of modern logic, yet it is completely authoritative within its own subject area. There are also many useful references to medieval and ancient sources.
- Lukasiewicz, J., Aristotle's Syllogistic, Oxford 1951. An excellent, meticulously researched book by one of the founding fathers of modern logic, though his propaganda for the modern system comes across, these days, as a little strident.
- Prior, A.N. The Doctrine of Propositions & Terms London 1976. An excellent book that covers the philosophy around the syllogistic.
- Mill, J.S. A System of Logic, (8th edition) London 1904. The eighth edition is the best, containing all the original plus later notes written by Mill. Much of it is propaganda for Mill's philosophy, but it contains many useful thoughts on the syllogistic, and it is a historical document, as it was so widely read in Europe and America. It may have been an influence on Frege.
- Aristotle, Analytica Posteriora Books I & II, transl. G.R.G. Mure, in The Works of Aristotle, ed. Ross, Oxford 1924. Ross's edition is still (in this writer's view) the best English translation of Aristotle. There are still many copies available on the second-hand market, handsomely bound and beautiful.
- Evans, G. "Pronouns, Quantifiers and Relative Clauses" Canadian Journal of Philosophy 1977
- Sommers, F. The Logic of Natural Language, Oxford 1982. An overview and analysis of the history of term logic, and a critique of the logic of Frege.
A circular sector or circle sector is the portion of a disk enclosed by two radii and an arc; the smaller region is known as the minor sector and the larger as the major sector. In the diagram, θ is the central angle in radians, r is the radius of the circle, and L is the arc length of the minor sector.
A sector with a central angle of 180° is called a semicircle. Sectors with other central angles are sometimes given special names; these include quadrants (90°), sextants (60°) and octants (45°).
The angle formed by connecting the endpoints of the arc to any point on the circumference that is not in the sector is equal to half the central angle.
The total area of a circle is πr². The area of the sector can be obtained by multiplying the circle's area by the ratio of the angle θ to 2π (because the area of the sector is proportional to the angle, and 2π is the angle for the whole circle):
A = (θ/2π) × πr² = r²θ/2.
Another approach is to consider this area as the result of the following integral:
A = ∫₀^θ ∫₀^r ρ dρ dθ′ = ∫₀^θ (r²/2) dθ′ = r²θ/2.
Converting the central angle into degrees gives
A = (θ°/360) × πr².
The length of the perimeter of a sector is the sum of the arc length and the two radii:
P = L + 2r = θr + 2r.
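The area and perimeter formulas above translate directly into code; a small sketch with θ in radians:

```python
import math

def sector_area(r, theta):
    """Area of a circular sector with radius r and central angle theta (radians)."""
    return 0.5 * r * r * theta

def sector_perimeter(r, theta):
    """Perimeter of the sector: arc length theta*r plus the two radii."""
    return theta * r + 2 * r

# A quadrant (90°) of a unit circle: area π/4, perimeter π/2 + 2.
print(sector_area(1, math.pi / 2), math.pi / 4)
print(sector_perimeter(1, math.pi / 2), math.pi / 2 + 2)
```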
Volume of Cubes and Cuboids
In volume of cubes and cuboids we will discuss how to calculate volume in different questions.
What is a Volume?
The volume of any 3-dimensional solid figure is the measure of space occupied by the solid. In the case of a hollow 3-dimensional figure, the volume of the body is the difference between the space occupied by the body and the amount of space inside the body.
We also come across different hollow objects in our daily life. These hollow objects can be filled with air or liquid that takes the shape of the container. Here, the volume of the air or the liquid that the interior of the hollow object can accommodate is called the capacity of the hollow object.
Thus, the measure of space an object occupies is called its volume. The capacity of an object is the volume of substance its interior can accommodate.
● The units for measuring volume are cubic units, i.e., cm³, m³, etc.
● The volume can be measured in litres or milliliters. In such cases, volume is known as capacity.
Standard Unit of Volume:
Volume is always measured in cubic units. The standard unit of volume is 1 cm³, but since there are various other units of measurement of length, like mm, m, dm, dam, etc., we have many other standard units of volume.
The units are related as follows: 1 cm³ = 1000 mm³, 1 dm³ = 1000 cm³ = 1 litre, and 1 m³ = 1000 dm³ = 1,000,000 cm³.
A cuboid is bounded by six rectangular regions called faces. In the figure, the six faces are ABCD (top face), EFGH (bottom face), ABGH (front face), DEFC (back face), and ADEH and BCFG (side faces).
Thus, a cuboid is made up of 3 pairs of congruent rectangular faces: (top, bottom), (front, back) and (the two side faces).
Face EFGH is called the base of the cuboid.
Front face ABGH, back face DEFC and side faces ADEH and BCFG are called the lateral faces of the cuboid.
Any two faces other than opposite faces meet in a line segment, which is called an edge of the cuboid. The cuboid has 12 edges: AB, BC, CD, DA, EF, FG, GH, HE, AH, BG, DE and CF. Three edges meet at a common point called a vertex. A cuboid has 8 vertices, namely A, B, C, D, E, F, G and H.
Now we will discuss about the volume of cubes and cuboids.
Volume of cuboid:
Let l, b, h represent length, breadth and height of the cuboid.
Area of the rectangular base EFGH of the cuboid = l × b.
Volume of the cuboid = (Area of base) × (height of the cuboid) = (l × b) × h = lbh
Let us consider a cuboid of length ‘l’, breadth ‘b’ and height ‘h’.
Then the volume of the cuboid is given by:
● Volume = length × breadth × height
● Length of cuboid = Volume/(breadth × height)
● Breadth of the cuboid = Volume/(length × height)
● Height of the cuboid = Volume/(length × breadth)
While finding the volume of cuboid, length, breadth and height must be expressed in the same units.
Volume of cube:
It is a special type of cuboid whose length, breadth and height are equal. So, the volume of the cube whose edge is l is expressed as:
Volume of the cube = l × l × l = l³
If the length of the cube or the edge is 1 unit, then it is referred to as 1 unit cube.
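The formulas above can be wrapped in small helper functions; this sketch assumes all edge lengths are given in the same unit, so the result is in the corresponding cubic unit.

```python
def cuboid_volume(length, breadth, height):
    """Volume of a cuboid: length × breadth × height."""
    return length * breadth * height

def cube_volume(edge):
    """Volume of a cube: a cuboid with all three edges equal."""
    return edge ** 3

print(cuboid_volume(4, 3, 2))   # 24 cubic units
print(cube_volume(5))           # 125 cubic units
```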
This article is about the arithmetic operation. For other uses, see Division (disambiguation).
In mathematics, specifically in elementary arithmetic, division is the arithmetic operation that is the inverse of multiplication: if
a × b = c,
where b is not zero, then
a = c ÷ b
(read as "c divided by b"). For instance, 6 ÷ 3 = 2 since 2 × 3 = 6.
In the above expression, a is called the quotient, b the divisor and c the dividend.
The expression c ÷ b is also written "c/b" (read "c over b"), especially in higher mathematics (including applications to science and engineering) and in computer programming languages. This form is also often used as the final form of a fraction, without any implication that it needs to be evaluated further.
Division by zero is usually not defined.
With a knowledge of multiplication tables, two integers can be divided on paper using the method of long division. If the dividend has a fractional part (expressed as a decimal fraction), one can continue the algorithm past the ones place as far as desired. If the divisor has a fractional part, one can restate the problem by moving the decimal to the right in both numbers until the divisor has no fraction.
Division can be calculated with an abacus by repeatedly placing the dividend on the abacus, and then subtracting the divisor at the offset of each digit in the result, counting the number of subtractions possible at each offset.
In modular arithmetic, some numbers have a multiplicative inverse with respect to the modulus. In such a case, division can be calculated by multiplication. This approach is useful in computers that do not have a fast division instruction.
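For example, division by b modulo m can be carried out by multiplying with b's multiplicative inverse whenever that inverse exists (gcd(b, m) = 1). The sketch below uses Python's three-argument pow, which computes modular inverses from version 3.8 onward.

```python
def mod_div(c, b, m):
    """Compute c ÷ b modulo m by multiplying with the modular inverse of b.

    Requires gcd(b, m) == 1 so that the inverse exists (e.g. m prime, b not 0 mod m).
    """
    b_inv = pow(b, -1, m)        # multiplicative inverse of b modulo m
    return (c * b_inv) % m

print(mod_div(6, 3, 7))   # 2, since 2 * 3 = 6 (mod 7)
print(mod_div(5, 3, 7))   # 4, since 4 * 3 = 12 = 5 (mod 7)
```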
Division of integers
The set of integers is not closed under division. Apart from division by zero being undefined, the quotient will not be an integer unless the dividend is an integer multiple of the divisor; for example, 26 cannot be divided by 10 to give an integer. In such a case there are four possible approaches.
- Say that 26 cannot be divided by 10.
- Give the answer as a decimal fraction or a mixed number, so 26 ÷ 10 = 2.6 or 2 3/5. This is the approach usually taken in mathematics.
- Give the answer as a quotient and a remainder, so 26 ÷ 10 = 2 remainder 6.
- Give the quotient as the answer, so 26 ÷ 10 = 2. This is sometimes called integer division.
One has to be careful when performing division of integers in a computer program. Some programming languages, such as C, will treat division of integers as in case 4 above, so the answer will be an integer. Other languages, such as MATLAB, will first convert the integers to real numbers, and then give a real number as the answer, as in case 2 above.
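The difference is easy to see in practice. The sketch below uses Python (chosen here only because its semantics are explicit): the // operator gives the integer quotient of case 4, / gives a real-number result as in case 2, and divmod returns the quotient-and-remainder pair of case 3.

```python
dividend, divisor = 26, 10

print(dividend / divisor)          # 2.6    -> real-number division (case 2)
print(dividend // divisor)         # 2      -> integer quotient (case 4)
print(divmod(dividend, divisor))   # (2, 6) -> quotient and remainder (case 3)

# Note: Python's // floors, while C's integer division truncates toward zero;
# the two differ for negative operands, e.g. -26 // 10 == -3 in Python,
# whereas -26 / 10 == -2 in C.
```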
Division of rational numbers
The result of dividing two rational numbers is another rational number when the divisor is not 0. We may define division of two rational numbers p/q and r/s by
- (p/q) ÷ (r/s) = (p × s)/(q × r).
All four quantities are integers, and only p may be 0. This definition ensures that division is the inverse operation of multiplication.
Division of real numbers
Division of two real numbers results in another real number when the divisor is not 0. It is defined such that a/b = c if and only if a = cb and b ≠ 0.
Division of complex numbers
Dividing two complex numbers results in another complex number when the divisor is not 0, defined thus:
- (p + iq) ÷ (r + is) = (pr + qs)/(r² + s²) + i(qr − ps)/(r² + s²).
All four quantities are real numbers. r and s may not both be 0.
Division for complex numbers expressed in polar form is simpler and easier to remember than the definition above:
- p e^(iq) ÷ r e^(is) = (p/r) e^(i(q − s)).
Again all four quantities are real numbers. r may not be 0.
Division of polynomials
One can define the division operation for polynomials. Then, as in the case of integers, one has a remainder. See polynomial long division.
Division in abstract algebra
In abstract algebras such as matrix algebras and quaternion algebras, fractions such as a/b are typically defined as a · b⁻¹ or b⁻¹ · a, where b is presumed to be an invertible element (i.e. there exists a multiplicative inverse b⁻¹ such that bb⁻¹ = b⁻¹b = 1, where 1 is the multiplicative identity). In an integral domain where such elements may not exist, division can still be performed on equations of the form ab = ac or ba = ca by left or right cancellation, respectively. More generally "division" in the sense of "cancellation" can be done in any ring with the aforementioned cancellation properties. By a theorem of Wedderburn, all finite division rings are fields, hence every nonzero element of such a ring is invertible, so division by any nonzero element is possible in such a ring. To learn about when algebras (in the technical sense) have a division operation, refer to the page on division algebras. In particular Bott periodicity can be used to show that any real normed division algebra must be isomorphic to either the real numbers R, the complex numbers C, the quaternions H, or the octonions O.
Division and calculus
There is no general method to integrate the quotient of two functions.
- Rational number
- Vulgar fraction
- Inverse element
- Division by two
- Division by zero
- Field (algebra)
- Division algebra
- Division ring
- Long division
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Division | 13
57 | Steven Bogart, a mathematics instructor at Georgia Perimeter College, provides the following explanation:
Succinctly, pi--which is written as the Greek letter for p, or π--is the ratio of the circumference of any circle to the diameter of that circle. Regardless of the circle's size, this ratio will always equal pi. In decimal form, the value of pi is approximately 3.14. But pi is an irrational number, meaning that its decimal form neither ends (like 1/4 = 0.25) nor becomes repetitive (like 1/6 = 0.166666...). (To only 18 decimal places, pi is 3.141592653589793238.) Hence, it is useful to have shorthand for this ratio of circumference to diameter. According to Petr Beckmann's A History of Pi, the Greek letter π was first used for this purpose by William Jones in 1706, probably as an abbreviation of periphery, and became standard mathematical notation roughly 30 years later.
Try a brief experiment: Using a compass, draw a circle. Take one piece of string and place it on top of the circle, exactly once around. Now straighten out the string; its length is called the circumference of the circle. Measure the circumference with a ruler. Next, measure the diameter of the circle, which is the length from any point on the circle straight through its center to another point on the opposite side. (The diameter is twice the radius, the length from any point on the circle to its center.) If you divide the circumference of the circle by the diameter, you will get approximately 3.14--no matter what size circle you drew! A larger circle will have a larger circumference and a larger radius, but the ratio will always be the same. If you could measure and divide perfectly, you would get 3.141592653589793238..., or pi.
Otherwise said, if you cut several pieces of string equal in length to the diameter, you will need a little more than three of them to cover the circumference of the circle.
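A numeric version of the string experiment can be run in a few lines of Python; this sketch is added for illustration and is not part of the article. It approximates the circumference by summing many short chords around the circle and then divides by the diameter.

```python
import math

def circumference(radius, n=100_000):
    """Approximate the circumference by summing n short chords."""
    total = 0.0
    for k in range(n):
        t0 = 2 * math.pi * k / n
        t1 = 2 * math.pi * (k + 1) / n
        dx = radius * (math.cos(t1) - math.cos(t0))
        dy = radius * (math.sin(t1) - math.sin(t0))
        total += math.hypot(dx, dy)
    return total

for r in (1.0, 2.5, 93e6):
    print(circumference(r) / (2 * r))   # ~3.14159265 for every radius
```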
Pi is most commonly used in certain computations regarding circles. Pi not only relates circumference and diameter. Amazingly, it also connects the diameter or radius of a circle with the area of that circle by the formula: the area is equal to pi times the radius squared. Additionally, pi shows up often unexpectedly in many mathematical situations. For example, the sum of the infinite series 1 + 1/4 + 1/9 + 1/16 + 1/25 + ... + 1/n² + ... is equal to pi²/6.
The importance of pi has been recognized for at least 4,000 years. A History of Pi notes that by 2000 B.C., "the Babylonians and the Egyptians (at least) were aware of the existence and significance of the constant ," recognizing that every circle has the same ratio of circumference to diameter. Both the Babylonians and Egyptians had rough numerical approximations to the value of pi, and later mathematicians in ancient Greece, particularly Archimedes, improved on those approximations. By the start of the 20th century, about 500 digits of pi were known. With computation advances, thanks to computers, we now know more than the first six billion digits of pi. | http://www.scientificamerican.com/article.cfm?id=what-is-pi-and-how-did-it | 13 |
173 | Addition is the mathematical process of putting things together. The plus sign "+" means that two numbers are added together. For example, in the picture on the right, there are 3 + 2 apples — meaning three apples and two other apples — which is the same as five apples, since 3 + 2 = 5. Besides counts of fruit, addition can also represent combining other physical and abstract quantities using different kinds of numbers: negative numbers, fractions, irrational numbers, vectors, and more.
As a mathematical operation, addition follows several important patterns. It is commutative, meaning that order does not matter, and it is associative, meaning that one can add more than two numbers (see Summation). Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. All of these rules can be proven, starting with the addition of natural numbers and generalizing up through the real numbers and beyond. General binary operations that continue these patterns are studied in abstract algebra.
Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some animals. In primary education, children learn to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day.
There are also situations where addition is "understood" even though no symbol appears: for example, a whole number written immediately before a fraction indicates the sum of the two, called a mixed number, as in 3½ = 3 + 1/2 = 3.5.
The numbers or the objects to be added are generally called the "terms", the "addends", or the "summands"; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the symmetry of addition, "augend" is rarely used, and both terms are generally called addends.
All of this terminology derives from Latin. "addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Indo-European root do- "to give"; thus to add is to give to. Using the gerundive suffix -nd results in "addend", "thing to be added". Likewise from augere "to increase", one gets "augend", "thing to be increased".
"Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verb summare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was once common to add upward, contrary to the modern practice of adding downward, so that a sum was literally higher than the addends.
Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer.
Possibly the most fundamental interpretation of addition lies in combining sets: when two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the numbers of objects in the original collections.
This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics; for the rigorous definition it inspires, see Natural numbers below. However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers.
One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than just combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods.
The sum a + b can be interpreted as a binary operation that combines a and b, in an algebraic sense, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum a + b play asymmetric roles, and the operation a + b is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the augend in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa.
Addition is commutative, meaning that one can reverse the terms in a sum left-to-right, and the result will be the same. Symbolically, if a and b are any two numbers, then
a + b = b + a.
A somewhat subtler property of addition is associativity, which comes up when one tries to define repeated addition. Should the expression "a + b + c" be defined to mean (a + b) + c or a + (b + c)? That addition is associative tells us that the choice of definition is irrelevant:
(a + b) + c = a + (b + c).
In the context of integers, addition of one also plays a special role: for any integer a, the integer (a + 1) is the least integer greater than a, also known as the successor of a. Because of this succession, the value of some a + b can also be seen as the successor of a, making addition iterated succession.
Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaques and cottontop tamarins performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training.
The prerequisite to addition in the decimal system is the internalization of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient:
In traditional mathematics, to add multidigit numbers, one typically aligns the addends vertically and adds the columns, starting from the ones column on the right. If a column exceeds ten, the extra digit is "carried" into the next column. For a more detailed description of this algorithm, see Elementary arithmetic: Addition. An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many different standards-based mathematics methods, but many mathematics curricula such as TERC omit any instruction in traditional methods familiar to parents or mathematics professionals in favor of exploration of new methods.
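The column-by-column procedure can be written out as a short program; the sketch below is an added illustration, not part of the original article. Digits are stored least-significant first so that "carrying into the next column" is simply appending to the list.

```python
def add_decimal(x_digits, y_digits):
    """Add two numbers given as lists of decimal digits, ones place first."""
    result, carry = [], 0
    for i in range(max(len(x_digits), len(y_digits))):
        a = x_digits[i] if i < len(x_digits) else 0
        b = y_digits[i] if i < len(y_digits) else 0
        total = a + b + carry
        result.append(total % 10)   # digit written in this column
        carry = total // 10         # extra digit carried into the next column
    if carry:
        result.append(carry)
    return result

# 478 + 256 = 734; digits ones-first: [8, 7, 4] and [6, 5, 2]
print(add_decimal([8, 7, 4], [6, 5, 2]))   # [4, 3, 7]
```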
Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons. The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier.
Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance.
Adding machines, mechanical calculators whose primary function was addition, were the earliest automatic, digital computers. Wilhelm Schickard's 1623 Calculating Clock could add and subtract, but it was severely limited by an awkward carry mechanism. As he wrote to Johannes Kepler describing the novel device, "You would burst out laughing if you were present to see how it carries by itself from one column of tens to the next..." Adding 999,999 and 1 on Schickard's machine would require enough force to propagate the carries that the gears might be damaged, so he limited his machines to six digits, even though Kepler's work required more. By 1642 Blaise Pascal independently developed an adding machine with an ingenious gravity-assisted carry mechanism. Pascal's calculator was limited by its carry mechanism in a different sense: its wheels turned only one way, so it could add but not subtract, except by the method of complements. By 1674 Gottfried Leibniz made the first mechanical multiplier; it was still powered, if not motivated, by addition.
Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm taught to children. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer.
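A ripple-carry adder is easy to model in software. The sketch below is an added illustration (not from the original article): it chains one-bit full adders so that each carry "ripples" into the next bit position.

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum bit, carry out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)   # carry ripples to the next position
        result.append(s)
    result.append(carry)
    return result

# 6 + 3 = 9: bits LSB-first, 6 = [0,1,1,0] and 3 = [1,1,0,0]
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 0, 0]))   # [1, 0, 0, 1, 0] -> 9
```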
Since they compute digits one at a time, the above methods are too slow for most modern purposes. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all the floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Almost all modern implementations are, in fact, hybrids of these last three designs.
Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish. In modern times, the ADD instruction of a microprocessor replaces the augend with the sum but preserves the addend. In a high-level programming language, evaluating a + b does not change either a or b; to change the value of a one uses the addition assignment operator a += b.
One popular definition takes N(S) to be the number of elements of a set S; for disjoint sets A and B with N(A) = a and N(B) = b, the sum a + b is defined as N(A ∪ B). Here, A ∪ B is the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism which allows any common elements to be separated out and therefore counted twice.
The other popular definition is recursive:
a + 0 = a and a + S(b) = S(a + b),
where S(b) denotes the successor of b.
Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the Recursion Theorem on the poset N². On the other hand, some sources prefer to use a restricted Recursion Theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a + ", and pastes these unary operations for all a together to form the full binary operation.
This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction; for examples of such inductive proofs, see Addition of natural numbers.
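The recursive definition can be mimicked directly in code. In this sketch (added for illustration) ordinary Python integers stand in for the formal natural numbers, with the successor operation written out explicitly.

```python
def successor(n):
    return n + 1

def add(a, b):
    """Addition defined by a + 0 = a and a + S(b') = S(a + b')."""
    if b == 0:
        return a
    return successor(add(a, b - 1))

print(add(3, 4))   # 7
print(add(0, 5))   # 5
```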
The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases:
Although this definition can be useful for concrete problems, it is far too complicated to produce elegant general proofs; there are too many cases to consider.
A much more convenient conception of the integers is the Grothendieck group construction. The essential observation is that every integer can be expressed (not uniquely) as the difference of two natural numbers, so we may as well define an integer as the difference of two natural numbers. Addition is then defined to be compatible with subtraction:
(a − b) + (c − d) = (a + c) − (b + d).
Addition of rational numbers can be computed using a common denominator: a/b + c/d = (ad + bc)/(bd). The commutativity and associativity of rational addition are an easy consequence of the laws of integer arithmetic. For a more rigorous and general discussion, see field of fractions.
A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element:
a + b = {q + r : q ∈ a, r ∈ b}.
This definition was first published, in a slightly modified form, by Richard Dedekind in 1872. The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses.
Unfortunately, dealing with multiplication of Dedekind cuts is a case-by-case nightmare similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim aₙ. Addition is defined term by term:
lim aₙ + lim bₙ = lim (aₙ + bₙ).
This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different. One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.
There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory.
In modular arithmetic, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. In geometry, the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori.
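Both of these inherited additions are one-liners in code; the following sketch is an added illustration rather than part of the article.

```python
def add_mod12(a, b):
    """Addition in the integers modulo 12, as used in musical set theory."""
    return (a + b) % 12

def add_mod2(a, b):
    """Addition in the integers modulo 2 -- the same table as exclusive or."""
    return (a + b) % 2

print(add_mod12(9, 5))         # 2, since 9 + 5 = 14 = 2 (mod 12)
print(add_mod2(1, 1), 1 ^ 1)   # 0 0 -- addition mod 2 agrees with XOR
```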
The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups.
In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as Direct sum and Wedge sum, are named to evoke their connection with addition.
Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction.
Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number.
In the real and complex numbers, addition and multiplication can be interchanged by the exponential function:
e^(a + b) = e^a · e^b.
There are even more generalizations of multiplication than addition. In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general.
Division is an arithmetic operation remotely related to addition. Since a/b = a(b−1), division is right distributive over addition: (a + b) / c = a / c + b / c. However, division is not left distributive over addition; 1/ (2 + 2) is not the same as 1/2 + 1/2.
The maximum operation "max (a, b)" is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance.
The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two. Accordingly, there is no subtraction operation for infinite cardinals.
Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition:
a + max(b, c) = max(a + b, a + c).
Tying these observations together, tropical addition is approximately related to regular addition through the logarithm:
log(a + b) ≈ max(log a, log b),
an approximation that becomes increasingly accurate as the base of the logarithm increases.
Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is zero. An infinite summation is a delicate procedure known as a series.
Counting a finite set is equivalent to summing 1 over the set.
Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics.
Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
56 | A circular curve is often specified by its radius. A small circle is easily laid out by using the radius. In a mathematical sense, the curvature is the reciprocal of the radius, so that a smaller curvature implies a large radius. A curve of large radius, as for a railway, cannot be laid out by using the radius directly. We will see how the problem of laying out a curve of large radius is solved. In American railway practice, the radius is not normally used for specifying a curve. Instead, a number called the degree of curvature is used. This is indeed a curvature, since a larger value means a smaller radius. The reason for this choice is to facilitate the computations necessary to lay out a curve with surveying instruments, a transit and a 100-ft engineer's tape. It is more convenient to choose round values of the degree of curvature, rather than round values for the radius, for then the transit settings can often be calculated mentally. A curve begins at the P.C., or point of curvature, and extends to the P.T., or point of tangency. The important quantities in a circular curve are illustrated above.
The degree of curvature is customarily defined in the United States as the central angle D subtended by a chord of 100 feet. The reason for the choice of the chord rather than the actual length of circumference is that the chord can be measured easily and directly simply by stretching the tape between its ends. A railway is laid out in lengths called stations of one tape length, or 100 feet. This continues through curves, so that the length is always the length of a series of straight lines that can be directly measured. The difference between this length, and the actual length following the curves, is inconsequential, while the use of the polygonal length simplifies the calculations and measurements greatly.
The relation between the central angle d and the length c of a chord is simply R sin(d/2) = c/2, or R = c/(2 sin d/2). When c = 100, this becomes R = 50/sin D/2, where D is the degree of curvature. Since sin D/2 is approximately D/2, when D is expressed in radians, we have approximately that R = 5729.65/D, or R = 5730/D. Accurate values of R should be calculated using the sine. For example, a 2° curve has R = 2864.93 (accurate), while 5730/D = 2865 ft.
If some other value and length unit are chosen, simply replace 100 by the new value. In the metric system, 20 meters is generally used as the station interval instead of 100 ft, though stations are numbered as multiples of 10 m, and these equations are modified accordingly. With a 20 m chord, R = 1146/D m, or about 3760/D ft. Of course, a given curve has different degrees of curvature in the two systems. There are several methods of defining degree of curvature for metric curves. D may be the central angle for a chord of 10 m instead of 20 m.
The deflection from the tangent for a chord of length c is half the central angle, or δ = d/2. This is a general rule, so additional 100 ft chords just increase the deflection angle by D/2. Therefore, it is very easy to find the deflection angles if a round value is chosen for D, and usually easy to set them off on the instrument. For example, if a curve begins at station 20+34.0 and ends at station 28+77.3, the first subchord is 100 - 34.0 = 66.0 ft to station 21, then 7 100 ft chords, and finally a subchord of 77.3 ft. The deflection angle from the P.C. to the P.T. for a 2° curve is 0.660 + 7 x 1.0 + 0.773 = 8.433 °, or 8° 26'. I have used the approximate relation δ = (c/100)(D/2) to find the deflection angles for the subchords.
The long chord C from P.C. to P.T. is a valuable check, easily determined with modern distance-measuring equipment. It is C = 2R sin (I/2), where I is the total central angle. For the example, C = 2(2864.93)sin(8.433) = 840.32 ft. The length of the curve, by stations, is 843.30 ft. This figure can be checked by actual measurements in the field. The actual arc length of the curve is (2864.93)(0.29437) = 843.34 ft. Note that this is the arc length on the centre line; for the rails, use R ± g/2, where g = 4.7083 ft = 56.5 in = 1435 mm for standard gauge.
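The worked example above can be reproduced with a few lines of Python; this sketch is an added illustration using the chord definition of the degree of curvature, and the function name is my own.

```python
import math

def curve_data(D_deg, sta_pc, sta_pt):
    """Radius (ft), total deflection (deg) and long chord (ft) from P.C. to P.T."""
    R = 50.0 / math.sin(math.radians(D_deg / 2.0))   # chord definition of D
    length = sta_pt - sta_pc                          # length by 100-ft stations
    deflection = (length / 100.0) * (D_deg / 2.0)     # delta = (c/100)(D/2) per chord
    long_chord = 2.0 * R * math.sin(math.radians(deflection))
    return R, deflection, long_chord

# Stations 20+34.0 to 28+77.3 on a 2-degree curve:
R, defl, C = curve_data(2.0, 2034.0, 2877.3)
print(round(R, 2), round(defl, 3), round(C, 1))   # ~2864.93  8.433  ~840.3
```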
Before electronic calculators, small-angle approximations and tables of logarithms were used to carry out the computations for curves. Now, things are much easier, and I write the equations in a form suitable for scientific pocket calculators, instead of using the traditional forms that use tabular values and approximations.
A 1° curve has a radius of 5729.65 feet. Curves of 1° or 2° are found on high-speed lines. A 6° curve, about the sharpest that would be generally found on a main line, has a radius of 955.37 feet. On early American railroads, some curves were as sharp as 400 ft radius, or 14.4°. Street railways have even sharper curves. The sharpest curve that can be negotiated by normal diesel locomotives is not less than 250 ft radius, or 23°. It is not difficult to apply spirals, in which the change of curvature is proportional to distance, to the ends of a circular curve. Circular curves are a good first approximation to an alignment.
The centrifugal acceleration in a curve of radius R negotiated at speed v is a = v²/R. If v is in mph, a = 2.1511v²/R = 3.754 × 10⁻⁴Dv² ft/s², where D is degrees of curvature. This is normal to the gravitational acceleration of 32.16 ft/s², and the total acceleration is the vector sum of these. For comfort, a maximum ratio of a to g may be taken as 0.1 (tan⁻¹ 0.1 = 5.71°). The overturning speed depends on the height of the centre of gravity, and occurs when a line drawn from the centre of gravity parallel to the resultant acceleration passes through one rail. The height of the centre of gravity of American railway equipment is 10 ft or less. Taking 10 ft as the height of the centre of gravity, a/g = 0.2354 (tan⁻¹ 0.2354 = 13.25°). Therefore, the overturning speed v_o can be estimated by Dv_o² = 20,000 and the comfort speed v_c by Dv_c² = 8500.
A curve may be superelevated by an amount s so that the resultant acceleration is more normal to the track. Exact compensation occurs only for one speed, of course. This angle of bank is given by tan θ = a/g = 1.167 × 10⁻⁵Dv², and sin θ = s/gauge. Consider a 2° curve. For v = 60 mph, tan θ = 0.08404, sin θ = 0.08375 and s = 4.73 in. If the speed is greater than this, there will be an unbalanced acceleration, which will have a ratio of a/g of 0.1 at a speed v′ given by 0.1 = 1.167 × 10⁻⁵D(v′² − v²), or v′ = 89 mph. The overturning speed on this curve is given by (0.2354 + 0.08404) = (1.167 × 10⁻⁵)Dv², or v = 117 mph. Note that a large superelevation will cause the flanges of a slow-moving train to grind the lower rail. Superelevation is generally limited to 6 to 8 in maximum.
Composed by James B. Calvert
Created 1 June 1999
Last Revised 20 June 2004 | http://mysite.du.edu/~jcalvert/railway/degcurv.htm | 13 |
81 | In mathematics, a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates. Thus a volume element is an expression of the form
dV = ρ(u₁, u₂, u₃) du₁ du₂ du₃
where the uᵢ are the coordinates, so that the volume of any set B can be computed by
Volume(B) = ∫_B ρ(u₁, u₂, u₃) du₁ du₂ du₃.
For example, in spherical coordinates (u₁, u₂, u₃) = (r, θ, φ) one has ρ = r² sin θ, and so dV = r² sin θ dr dθ dφ.
The notion of a volume element is not limited to three-dimensions: in two-dimensions it is often known as the area element, and in this setting it is useful for doing surface integrals. Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula). This fact allows volume elements to be defined as a kind of measure on a manifold. On an orientable differentiable manifold, a volume element typically arises from a volume form: a top degree differential form. On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density.
Special cases
Volume element of a linear subspace
Consider the linear subspace of n-dimensional Euclidean space Rⁿ that is spanned by a collection of linearly independent vectors X₁, …, X_k. To find the volume element of the subspace, it is useful to know the fact from linear algebra that the volume of the parallelepiped spanned by the Xᵢ is the square root of the determinant of the Gramian matrix of the Xᵢ:
√det(Xᵢ · Xⱼ), i, j = 1, …, k.
Any point p in the subspace can be given coordinates (u₁, u₂, …, u_k) such that
p = u₁X₁ + ⋯ + u_kX_k.
At a point p, if we form a small parallelepiped with sides duᵢ, then the volume of that parallelepiped is the square root of the determinant of the Grammian matrix of the vectors Xᵢ duᵢ:
√det(Xᵢ · Xⱼ) du₁ du₂ ⋯ du_k.
This therefore defines the volume form in the linear subspace.
Volume element of a surface
Consider an open set U ⊂ R² with coordinates (u₁, u₂) and a smooth mapping φ : U → Rⁿ, thus defining a surface embedded in Rⁿ. In two dimensions, volume is just area, and a volume element gives a way to determine the area of parts of the surface. Thus a volume element is an expression of the form
f(u₁, u₂) du₁ du₂
that allows one to compute the area of a set B lying on the surface by computing the integral
Area(B) = ∫_B f(u₁, u₂) du₁ du₂.
Here we will find the volume element on the surface that defines area in the usual sense. The Jacobian matrix of the mapping is
λᵢⱼ = ∂φᵢ/∂uⱼ,
with index i running from 1 to n, and j running from 1 to 2. The Euclidean metric in the n-dimensional space induces a metric g = λᵀλ on the set U, with matrix elements
gᵢⱼ = Σₖ λₖᵢ λₖⱼ = Σₖ (∂φₖ/∂uᵢ)(∂φₖ/∂uⱼ).
The determinant of the metric is given by
det g = g₁₁ g₂₂ − g₁₂² = det(λᵀλ).
For a regular surface, this determinant is non-vanishing; equivalently, the Jacobian matrix has rank 2.
Now consider a change of coordinates on U, given by a diffeomorphism
f : V → U,
so that the coordinates (u₁, u₂) are given in terms of (v₁, v₂) by (u₁, u₂) = f(v₁, v₂). The Jacobian matrix of this transformation is given by
Fᵢⱼ = ∂fᵢ/∂vⱼ.
In the new coordinates, we have
∂φᵢ/∂vⱼ = Σₖ (∂φᵢ/∂uₖ)(∂fₖ/∂vⱼ),
and so the metric transforms as
g̃ = Fᵀ g F,
where g̃ is the pullback metric in the v coordinate system. The determinant is
det g̃ = (det F)² det g.
Given the above construction, it should now be straightforward to understand how the volume element is invariant under an orientation-preserving change of coordinates.
In two dimensions, the volume is just the area. The area of a subset B ⊂ U is given by the integral
Area(B) = ∫∫_B √(det g) du₁ du₂ = ∫∫_{f⁻¹(B)} √(det g) |det F| dv₁ dv₂ = ∫∫_{f⁻¹(B)} √(det g̃) dv₁ dv₂.
Thus, in either coordinate system, the volume element takes the same expression: the expression of the volume element is invariant under a change of coordinates.
Note that there was nothing particular to two dimensions in the above presentation; the above trivially generalizes to arbitrary dimensions.
Example: Sphere
For example, consider the sphere with radius r centered at the origin in R³. This can be parametrized using spherical coordinates with the map
φ(u₁, u₂) = (r cos u₁ sin u₂, r sin u₁ sin u₂, r cos u₂).
Then the induced metric is g = diag(r² sin² u₂, r²), and the volume element is
√(det g) du₁ du₂ = r² sin u₂ du₁ du₂.
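A quick numerical check of this area element (an added sketch, not part of the article) is to integrate r² sin u₂ over u₁ in [0, 2π] and u₂ in [0, π] with a simple midpoint rule; the result should approach the familiar surface area 4πr².

```python
import math

def sphere_area(r, n=400):
    """Midpoint-rule integration of the area element r^2 sin(u2) du1 du2."""
    du1 = 2 * math.pi / n
    du2 = math.pi / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            u2 = (j + 0.5) * du2
            total += r * r * math.sin(u2) * du1 * du2
    return total

r = 2.0
print(sphere_area(r))        # ~50.2655
print(4 * math.pi * r * r)   # 50.2654...
```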
See also
- Cylindrical coordinate system#Line and volume elements
- Spherical coordinate system#Integration and differentiation in spherical coordinates | http://en.wikipedia.org/wiki/Differential_volume_element | 13 |
74 | Part 1: An Opening
Discuss the following scenario in small groups.
Imagine that one of your students comes to class very excited. She tells you that she has figured out a theory that you never told the class. She explains that she has discovered that as the perimeter of a closed figure increases, the area also increases. She shows you this picture to prove what she is doing:
- How would you respond to this student?
- Is this student’s “theory” valid? After initial discussions in small groups, write a short journal entry about your reaction to this student and a brief mathematical discussion either validating or disconfirming this “theory.”
Part 2: Exploring perimeter with a fixed area
- With 12 square tiles, arrange the tiles (flat on a surface) so as to obtain the maximum perimeter. The tiles should touch completely side-to-side, not by corners.
What is the maximum perimeter? What general observations did you make about how to make your figure have a larger perimeter? Is there more than one figure that can have the same maximum perimeter? Why or why not?
Restrict the figure to a solid rectangle. Find all possible rectangles with an area of 12 square tiles. Record the respective dimensions and perimeter for the rectangles. Which rectangle has the maximum perimeter? Why?
Given any fixed area for a rectangle, make a general statement about which dimensions maximize the perimeter.
Part 3: Exploring area with a fixed perimeter
Farmer Ted wants to build a rectangular pigpen for his pigs. He has 24 meters of fencing left from his last project. What size rectangle should he build so that the pigs have the maximum amount of play area?
Use square tiles to model different-sized solid rectangular pigpens. Build all possible rectangular pigpens that can be enclosed with 24 meters of fencing. Record the corresponding dimensions and areas for those pigpens. Which pigpen gives the pig the most play area?
As the length of the rectangular pigpen changes, how does that affect the width and area of the pigpen? What patterns do you notice? Explore this relationship numerically and express this algebraically. Use this relationship to explain why the maximum area occurs with a square pigpen.
Given any amount of fencing, make a general statement about the dimensions of the rectangular pigpen that give the maximum play area.
Part 4: Exploring the problem with a spreadsheet
Open the Excel file called maxarea.xls. (You will need to ENABLE MACROS when prompted.) This spreadsheet allows the user to change the perimeter (P) and then vary the length of side X using the scroll bars. As these values change, the yellow box will display the current values of P, X, Y, and the Area (A) of the rectangle. The rectangle and the data point (X, A) on the graph of the Length of side X vs. Area will also change accordingly.
Fix the perimeter (P) to 24 as in the pigpen problem. Then incrementally increase the length of side X from 1 to 12. Describe the changes that occur in the numerical, graphical, and geometric representations.
Click on the Show All Points tab to display the next worksheet. What is the shape of the graph of all the data points (X, A)? Why does this make sense? Which Length of X gives the maximum value for A? Why?
How does this experience differ from the manipulative experience of physically building each rectangle? Discuss the benefits and drawbacks of the physical and spreadsheet approach to the pigpen problem. (A short computational sketch of the same exploration follows below.)
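For reference, here is a plain-Python sketch of the same tabulation the spreadsheet performs; the variable names follow the worksheet (P, X, Y, A), but the code itself is an added illustration, not part of the original activity.

```python
P = 24   # fixed perimeter, as in the pigpen problem
rows = [(x, P / 2 - x, x * (P / 2 - x)) for x in range(1, P // 2)]

for x, y, area in rows:
    print(f"X = {x:2}   Y = {y:4.1f}   A = {area:5.1f}")

best = max(rows, key=lambda row: row[2])
print("Maximum area:", best)   # (6, 6.0, 36.0) -- the square pen wins
```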
Part 5: Exploring the relationship with algebra and calculus
If Farmer Ted had 158 meters of fencing, predict what the dimensions would be of the rectangle with the maximum area. Explain how you made this prediction. Create a rule as a function of P to obtain the length of side X which gives the maximum area. Confirm this rule with several other examples.
Create a general equation for determining the Area of a rectangle given both a fixed P and length of side X. How does this equation relate to the shape of the graph in the spreadsheet? With this Area equation, use the first derivative to verify the value of X which gives the maximum value of A. How does this compare to the rule you created above (a function of P to obtain the length of side X which gives the maximum area)? (A symbolic sketch of this derivative check appears after this list.)
Discuss the non-calculus and calculus based approaches to making this generalization, including students’ accessibility, advantages, and disadvantages.
Why is the maximum value for Area achieved when the first derivative is equal to zero? For a graphical exploration of this, use the following java applet:
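Here is one possible symbolic check of the derivative argument, written with SymPy; this sketch assumes SymPy is installed and is an added illustration, not part of the original handout.

```python
import sympy as sp

x, P = sp.symbols("x P", positive=True)
A = x * (P / 2 - x)                    # area of a rectangle with perimeter P and side x

critical = sp.solve(sp.diff(A, x), x)  # set the first derivative to zero
print(critical)                        # [P/4] -- a quarter of the perimeter, i.e. a square
print(sp.simplify(A.subs(x, P / 4)))   # P**2/16, the maximum area
print(sp.diff(A, x, 2))                # -2, negative, so this is indeed a maximum
```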
Part 6: Revisiting the Opening Scenario
Revisit the scenario and subsequent group discussions from Part 1. Now that you have explored the relationship between perimeter and area at several different levels and with various tools, how would you use the student’s “theory” to provide relevant classroom experiences if you were teaching:
A sixth grade class
A Calculus course
Either discuss this in small groups or write an individual journal entry that addresses how you would approach this topic at each of the three levels, including justifications as to why you believe your approach is pedagogically and mathematically sound.
Consider the generalization you made in Part 2 when a rectangle has a fixed area and you want to maximize perimeter. Create an interactive spreadsheet template that shows the numerical and graphical representations of this problem. Use scroll bars to fix the value of A and change the values of X, and create a graph of X vs. P.
View the following java applet:
Discuss the benefits and drawbacks of using the applet or the spreadsheet template to explore the relationship between perimeter, area, and the length of side X. | http://www.teacherlink.org/content/math/activities/ex-area/guide.html | 13
86 | This is the final lesson of the unit, aside from review and the test. The objective is for students to apply their understanding of graphical representations to be able to solve inequalities and equations.
After reviewing the homework, students will work on a Do Now worksheet with a graph of a parabola and some dotted horizontal and vertical lines. They are asked to use the graph and the lines to solve equations and inequalities.
Following this, there will be some direct instruction on solving graphically. The idea I want to present is:
To solve an equation or inequality graphically:
a) Write each side of the equation/inequality as a function
b) Graph the functions
c) Find the intersection point(s) (and draw dotted vertical “helper” lines)
d) Determine the part of the domain that solves the initial problem
We will review the absolute value problems they solved in the first unit (i.e. |x + 3| > 4), solving them graphically. I will also show them how to find points of intersection on the TI-83+. We will compare solving these graphically to solving them algebraically. Hopefully, students will see that a "less than" inequality makes a "sandwich" graph on the number line because the tip of the V is dipping below the horizontal line (does that make sense?). A "greater than" inequality makes a "gap" graph on the number line because the tops of the V go above the horizontal line to the left and right of the points of intersection.
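As a quick numerical sketch of the same idea (this assumes NumPy and is only an illustration, not the TI-83+ procedure the students will use): write each side as a function, evaluate both on a grid, and keep the x-values where the V sits above the horizontal line.

```python
import numpy as np

x = np.linspace(-12, 6, 1801)
f = np.abs(x + 3)           # left side of |x + 3| > 4
g = np.full_like(x, 4.0)    # right side

solution = x[f > g]                    # x-values where the V is above the line
left_piece = solution[solution < -3]   # branch left of the vertex at x = -3
right_piece = solution[solution > -3]  # branch right of the vertex
print(left_piece.max(), right_piece.min())   # approximately -7 and 1, so x < -7 or x > 1
```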
After direct instruction, students will work independently to practice these concepts. The homework includes solving quadratic equations/inequalities in the same way. Students will also review for the third quiz (on this, plus translation and transformation of functions).
Monday, October 30, 2006
Thursday, October 26, 2006
Question: "What shape will the graph of y = |mx + b| always make?"
Answer: "It'll make a fucken V shape."
I suggested to the student that, on future quizzes, he pick a different adjective to describe his graphs.
I don't know where this came from - in all my years, I've never gotten a response like that. At least he was correct, though he needs to work on his spelling.
Wednesday, October 25, 2006
It's been a long time since I posted about the Numeracy class, mainly because I am not teaching it this year, so it's no longer at the front of my mind. But, this year's Numeracy teachers are struggling with the same concepts that I did. One of these killer concepts is rounding. This does not seem like a very difficult idea, but for our students, learning how to round a number is a huge challenge. We've tried all sorts of scaffolding, conceptual development, practice with the algorithm, and some kids just can't get it.
Ultimately, we think it comes down to a continued lack of understanding of the base-10 system. These students missed out on some very important mathematics in their first few years of school, and this is making everything else inordinately difficult for them. This week, one of the classes is piloting an activity where the students have the goal of collecting one million pennies. Each day, students will have to count how many pennies there are so far (pennies will initially be collected in a big jar). The idea is that students will eventually lose patience with this, and propose the idea of some sort of stacking or grouping. The teacher will then magically produce a container that has slots to divide the pennies into groups! When groups of 10 are no longer enough, then bigger groups of 100 will be used, and so forth.
Aside from learning more about the base-10 system, and the relative sizes of the different places, we hope that this activity will help students understand more about large numbers. We do an activity in the class where the whiteboard is divided into categories (thousands, millions, billions, trillions), and teams are given strips of paper with a quantity and a number, and they must stick it to the board in the right column. For example, a strip might read "Number of people in San Jose (1)", or "The average income of a family of four (50)". It should come as no surprise that students usually have no idea where to put the strips of paper. In any case, this year's students seem to really doubt that there are about a million people in San Jose - they think the number is bigger by orders of magnitude. With the pennies activity, students will hopefully see that reaching a million is a bit harder than they think. But if they do reach a million, they will definitely earn their pizza party!
Sunday, October 22, 2006
My renewed enthusiasm for old childhood joys has come at an opportune time. In Tuesday's lesson, we will be working on transforming functions. I have a few transformers that have (incredibly) survived until now, which are currently gathering dust on my mantle. Hmm... do I bring in the opportunistic, irritating Starscream, the powerful, pea-brained Grimlock, or, everyone's favorite transforming boombox, Soundwave (though his head is broken off and I don't have any of his tape-minions).
In any case, my idea for this lesson is to have the students work on a scaffolded exploration for most of the class. They will work on learning transformations, as well as absolute value (of linear) functions. They can work individually, in pairs, or in groups, but each person must turn in their own paper by the end, and it will be graded like a quiz. The exploration/quiz is broken into 4 parts:
1) Creating absolute value functions.
In this part, students will graph a linear function. They will then graph the absolute value of the same line by looking at the y-coordinates of several points, and plotting their absolute values. They will have to then answer questions that help them see that y = |mx + b| will always make a V shape.
2) Translating absolute value functions.
In this part, students will review horizontal and vertical shifts from the previous lesson, but applied to absolute value functions.
3) Transforming absolute value functions.
In this part, students will plot out y = |x| and various transformations in the form y = a|x| to see what happens. By the end of this part, they should have a good sense for how the coefficient a affects the shape of the graph.
In the final part, students will put together what they know about translations and transformations to create the graph of f(x) = 2|x + 4| - 3. Then, they will generate a table and see if their ordered pairs fall on the graph that was created via the translation/transformation process.
After this, we will end the class with 15 - 20 minutes of direct instruction where I help formalize their understanding of transformations. I'm worried about running out of time for this. If I do, I can push it to the next class, and that will be ok, though it would be better in the same period. I hope that making the exploration into a quiz will help students focus and be more efficient with their time.
The lesson will be posted on ILoveMath.
Whoops.. I forgot that the entire sophomore class is out on a field trip to the Monterey Bay Aquarium for biology. My classes today have had about 4 people each. I got some good one-on-one time with my handful of juniors, and I'll just have to push things back to Thursday.
Thursday, October 19, 2006
Wednesday, October 18, 2006
Tomorrow, we will be leaving piecewise functions behind for a while. The students this year are doing better overall than last year, but some students still are having significant struggles with understanding piecewise functions. When asked to graph a single line in a restricted domain, that seems to be fine. But as soon as the pieces are put together, students get very confused between the f(x) value from the equation, and the x-values from the domain condition (I haven't graded the quiz yet, so I'll have some better information about this soon).
We'll come back to it when we review for the unit test and the final exam, but for now, we need to move on. The goal for the next lesson is for students to understand vertical and horizontal shifts - why they happen, how to determine the shift, and how to generate translated graphs.
For the Do Now, students will be asked to plot out y = x^2 and y = x^2 + 3, and compare the graphs. They will do the same for y = x^2 and y = (x - 2)^2. At this point, they are working by hand so that they can, point by point, see what is happening. We will discuss their results and get some initial conjectures out. Then, they will do a graphing calculator exploration of the same ideas, which will allow them to graph more functions more quickly.
After this, we'll put the ideas together as I do some direct instruction, and define the concepts of vertical and horizontal shift. We will use what they've learned to understand why the vertex of y = a(x - h)^2 + k is (h, k) - which is, after all, a state standard! Yes! I hope that providing more scaffolding on translations will help them understand the vertex form better. I also think it will help them be more prepared for all those crazy phase-shifting, period changing trig functions they will encounter in pre-calc (along with the upcoming lesson, which will look at stretching transformations, in general and with specific attention to absolute value functions).
I graded quiz 2 and the scores were lower than I'd hoped:
9| 0 3 3
8| 0 3 3 3 5 8 8
7| 1 3 3 6 6 8
6| 3 6 8
5| 1 4 6
There were too many students who crashed on this one, although the bulk of the class was still in a good range. Many students had trouble with inequality notation - after working so heavily on interval notation, they seem to have forgotten how to use inequality notation, which is something they were already familiar with. For example, to express the numbers less than 3, we write (-infinity, 3) . Several students then wrote something like -infinity < x < 3 instead of just x < 3. It is always very interesting to me how new knowledge seems to crowd out old knowledge for a while, and then there is a process of assimilation where the mind brings them together and eventually sorts it out. The piecewise question was decent overall, although there are quite a few students who have not mastered it yet. I think that it needs some time to marinate in their brains, and we'll review these problems when we get toward the unit test. I think I'll develop a good error-checking handout for them to work on.
Monday, October 16, 2006
In the next class, we will start by reviewing the homework in a different manner than normal. We'll do the more traditional "I'll put the answers on the overhead, and we will go through it together to see if there are any questions" instead of my new homework review process . This is because the homework is long and has lots of short questions, and it is the review sheet for the day's quiz. I hope that reviewing the work this way today will be effective.
After the review, we will have a team challenge that is worth points on the quiz - not extra credit, but the normal, quotidian kind - 9 out of the 50 points. Teams will be given three 3-point problems, one at a time. They can work in their teams, use notes, and ask me a total of 3 questions for the whole activity. When they agree to a solution, they all sign off on that sheet and turn it in to me, getting the next problem in return. If anyone talks to a different team, they will lose 1 point from their score. I think that these rules will help keep them focused on their work and helping their teammates.
Problem 1: Given a piecewise function, make a graph
Problem 2: Given a graph of a piecewise function, determine the equations. (My hope is that they can work this out on their own, though they haven't gone in the backwards direction yet. They can use one of their 3 questions here, of course)
Problem 3: Given a graph of two lines that are not crossing, but clearly will cross, determine the exact point of intersection. (To do this, they have to determine the function of each line, and then solve the system algebraically. I think there will be a lot of confusion here, but if they ask me a good question, they should be able to proceed).
I haven't tried something like this before, so we'll see how it goes.
We'll then take a few minutes where I show them the answers and deal with any last minute questions. The class will then have the last 30 minutes to take the quiz.
The homework will focus on determining the functions of given piecewise graphs, along with review of solving absolute value inequalities and equations in preparation for the upcoming lessons in which we will solve these same problems graphically.
Thursday, October 12, 2006
This lesson will start out with students warming up on a piecewise function with only linear parts, as in the homework. The graph is followed by a series of evaluation problems. The first can be done by looking at the graph - the rest require use of the equations. This will be explained later in the class; for now, the idea is to see if students can figure out how to do it, either alone or by working together.
After reviewing this, we will add to the notes from the previous lesson. I'll show the students how to evaluate a piecewise function algebraically by determining which condition is made true by the x-value, and then substituting the x-value into the corresponding equation. We can use this idea to go back to the Do Now problems if they didn't figure them out. I will set up an x-y table to record the values we test during this portion of the lesson - next, we'll use tables to graph more complicated functions.
At this point, I'll hand out a reading to the students that shows how to use a table to help graph a piecewise function that has non-linear pieces (like absolute values, quadratics, etc.). My idea is that they will write the x-values that are the endpoints of the conditions twice in the table, and evaluate them with each of the equation parts. They will indicate on the table if the y-value is a closed or open circle. Students will then have time to practice this independently.
Homework is a set of problems that will help them review for the second quiz in the next class.
Wednesday, October 11, 2006
Graphing a piecewise function is one of those topics which initially doesn't seem that challenging, but I've found that students get very confused. They tend to flip back and forth between x and f(x) in their minds, and their graphs often come out totally funky. Last year, I had to backtrack and re-scaffold my lessons when I saw how tough it was. This year, I think I have a better plan.
In this lesson, after reviewing homework and warming up with more graphical analysis (see previous post), students will do more practice graphing a simple slope-intercept linear function in a restricted domain. They will do these one at a time, so they can focus on the relationship between the graph (i.e. the y-values) and the interval they are graphing on (i.e. the x-values).
Once they've got that down, I'll do some direct instruction where I show them what a piecewise function looks like, and how it can be thought of as simply putting together the restricted domain lines they were just doing into one graph. Today, we will only look at piecewise functions whose parts are constant or linear. We will go through one example together as a class, and then they will try to do one on their own.
The homework will ask them to graph two more piecewise functions, and then to do another graphical analysis problem. On that problem, they are given two linear functions f(x) and g(x), and asked to find things like f(2) * g(2) and g(4) / f(4). This is early preparation for the quadratic and rational functions units, where I ask students to graph a pair of linear functions and then point-by-point multiply (parabola) or divide (hyperbola) the y-values to create the new function. As a bonus on this homework, I am asking them to make a graph of |f(x)| and |g(x)| to see if they can connect what they know about absolute value and graphical representations to generate the v-shaped graph on their own.
Monday, October 09, 2006
Tomorrow, we'll start class by reviewing the homework (as always), and then I get to give students back their quiz and congratulate them on doing so well. That's always more fun for everyone than when they do poorly.
Next, students will work individually on more graphical analysis work, as in the last lesson. I'm convinced that fluidity with graphical representation is a key skill that must be continually reinforced. When I first started at DCP, I taught primarily Algebra 1. I never understood why students could graph a system of equations and find the intersection point, solve a system algebraically, but not make the connection between the point of intersection and the algebraic solution. It took me a while to see that they did not really understand the relationship between "y = 2x + 3" and the graph that they knew how to produce. Without that understanding, it becomes clear why a point of intersection seems to have nothing to do with algebra.
Anyway, following this, there will be some direct instruction where I show students a series of graphs, and help them understand how to determine the domain and range, and write them in interval and inequality notation.
Finally, there will be time for pair work practice where students will determine the domain and range of given graphs. They will also have to graph a linear function over a restricted domain. Understanding how to do this will make the upcoming piecewise functions lesson much more successful.
The homework will be practice of these concepts, with some review problems thrown in. I've been thinking about including function decomposition in the curriculum - I decided to put in some problems on this homework (shown above) where they must match a composite function with a pair of component functions as a sort of first step in that direction. We'll see how they do with it.
I've compiled six graphs with related questions that push on students' understanding of how a graph represents a function. I've uploaded them to ILoveMath in the Algebra: Misc Topics section. If you use any of my stuff, I'd love to get any feedback.
Sunday, October 08, 2006
Ok, this was actually last Friday's lesson..
We had the first quiz of the second unit. Students did well - the average score was a 79%. After this, we began exploring graphical analysis. The example shown here is part of the homework. As predicted, answering these types of questions was hard for many of the students, but we did seem to make some good progress. I didn't give them any direct instruction - just asked them to try to figure out the questions in their groups based on what they already know. Then, we reviewed the answers together. We will keep coming back to these types of problems in the upcoming lessons.
After this, I did direct instruction on adding, subtracting, and multiplying functions. Students liked this, because it only required use of their algebra 1 skills. Though the notation was different, they didn't have to learn anything "new"; the feeling of relief was almost palpable in the room.
In a comment in the previous post, Darren says:
I find the Algebra 2 standards to be fairly "aggressive", like drinking water from a fire hose. How you find time to add in topics that are not in the standards, I don't know but would be most interested in learning.
(I think it is worth starting a new post to answer this... I'm interested, as always, to hear people's thoughts.)
Well, you're right. We are not able to adequately cover all the standards as they are written, especially since our students start so far behind the curve. What we've done is to strand out the standards over the 4 years. We leave some things out of Algebra 1, and push them into Algebra 2. Some of the Algebra 2 standards we then move into Pre-Calc. And some, we decide not to do at all.
When deciding what to cover (and in what depth) in Algebra 2 Honors, I look not just at the list of state standards, but at the blueprint for the standardized test (i.e. what are the key standards that comprise the bulk of the test), at what will be covered in Geometry and Pre-Calc, and at what will be most useful for students moving in a trajectory toward Calculus.
I know we can't do it all, so my goal is to balance the required coverage with a strong scaffolding for success in Calculus - all with an eye to who our target student is.
For example, though I think conic sections is a great topic, a thorough unit would take my students many lessons to master - yet there is only one single question on the test. So we leave it for Pre-Calc.
The same is true for combinations, permutations, probability and stats, mathematical induction, series - all together these topics comprise just about 20% of the test. I choose to focus on the rest, and go deeper by including relevant scaffolding, math analysis components, word problems, and so forth.
With our students, I believe this actually yields higher test scores than covering everything more shallowly would.
There is discussion of this going on at Darren's site if you are interested.
Tuesday, October 03, 2006
The composition lesson went fairly well, though I could have used about 5 more minutes to finish the lecture. Students seemed comfortable with finding things like f(g(2)), but I definitely lost many of them when I tried to finish with finding f(g(x)) as an expression. We'll definitely need to review this a couple of times.
In the next lesson, after reviewing the homework, the Do Now will focus on practicing these concepts (as well as reviewing absolute value inequalities).
Then, I'm going to squeeze in a mini-lesson on using the TI-83+ to graph inequalities and absolute value inequalities. In this case, I mean graphing things like y = (x < 5), where it returns 1 if true and 0 if false. This is a pretty cool way to generate a graph that looks like the number lines we shade by hand. It can solve absolute value problems the same way: y = (abs(2x+1) > 3). After learning this technique, students will check their answers from the Do Now by graphing. I'm hoping that this doesn't take too much time...
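For anyone reading along without the calculator, here is a minimal sketch of the same 0/1 indicator-graph idea in Python with matplotlib; this is just an illustration of the concept, not the TI-83 syntax:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 1000)

# y is 1 where the inequality holds and 0 where it fails,
# mimicking y = (abs(2x+1) > 3) on the calculator
y = (np.abs(2 * x + 1) > 3).astype(int)

plt.plot(x, y)
plt.ylim(-0.5, 1.5)
plt.title("Solution set of |2x + 1| > 3 shown as a 0/1 graph")
plt.show()
```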
Finally, I will give students a handout on Representational Fluency. I've learned from my experiences teaching AP Calculus in previous years how important it is for students to be able to move comfortably between equations, graphs, tables, verbal descriptions, arrow mappings, etc - especially when it comes to the concept of functions. This sheet focuses on graph and table representations of functions - students have to figure out things like f(2) and f(g(-2)) from these representations. I'll post this on ILoveMath.
I'm trying to decide if I should do some work on function decomposition. This is clearly a skill that students will need for Calculus (i.e. working with the chain rule). I also think that decomposing functions might help them understand better what composing functions really means. But I'm also worried about overloading them, and I wonder if they need more time to digest function notation and composition first. It's not in the standards, as far as I can tell, so I wonder if students are expected to understand this idea before getting to Calculus. Any ideas?
Sunday, October 01, 2006
Up next, students will learn to evaluate composite functions. First, they will do a worksheet with function notation practice problems. I set the problems up in groups of 4 as follows: f(2), f(-4), f(a), f(2x - 5). I think that sneaking up this way on the idea of plugging in an expression for x will help students better understand how to evaluate f(g(x)) as an expression. I remember having a lot of difficulty when I first learned this concept, and this method helps make it clearer for me anyway...
Then, we'll use this dual lens model. I hope it will help them visualize what "the output of f is the input of g" means. After the model, we'll go through the concept of composing functions, and do some example problems together.
In an upcoming class, I will give students a chance to do function composition when given graphs or tables instead of equations.
An arc is a segment of the perimeter of a given circle. The measure of an arc is given as an angle, which can be expressed in radians or degrees (more on radians later). The exact measure of the arc is determined by the measure of the angle formed when a line is drawn from the center of the circle to each endpoint. As an example, the circle below has an arc cut out of it with a measure of 30 degrees.
As mentioned before, an arc can be measured in degrees or radians. A radian is merely a different method for measuring an angle. If we take a unit circle (which has a radius of 1 unit) and take an arc with length equal to 1 unit, then the angle formed by drawing a line from each endpoint to the center of the circle is equal to 1 radian. This concept is displayed below: in this circle an arc has been cut off by an angle of 1 radian, and therefore the length of the arc is equal to 1, because the radius is 1.
From this definition we can say that a full rotation of the circle is equal to 2π radians, because the perimeter of a unit circle is equal to 2π. Another useful property of this definition, which will be extremely useful to anyone who studies arcs, is that the length of an arc is equal to its measure in radians multiplied by the radius of the circle (s = rθ).
Converting to and from radians is a fairly simple process. Two facts are required to do so: first, a full circle is equal to 360 degrees, and it is also equal to 2π radians. Using these two facts we can form the following formula:
360° = 2π radians, thus 1 degree is equal to π/180 radians.
From here we can simply multiply by the number of degrees to convert to radians. For example, if we have 20 degrees and want to convert to radians, then we proceed as follows:
20° × (π/180) = 20π/180 = π/9 radians.
The same sort of argument can be used to show the formula for getting 1 radian:
2π radians = 360°, thus 1 radian is equal to 180/π degrees.
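For readers who want to check these conversions numerically, here is a minimal Python sketch (the function names are illustrative, not from this book):

```python
import math

def deg_to_rad(degrees):
    # 1 degree = pi/180 radians
    return degrees * math.pi / 180

def rad_to_deg(radians):
    # 1 radian = 180/pi degrees
    return radians * 180 / math.pi

def arc_length(radius, angle_rad):
    # arc length = radius * angle measured in radians
    return radius * angle_rad

print(deg_to_rad(20))      # ~0.349, i.e. pi/9 radians
print(rad_to_deg(1))       # ~57.3 degrees
print(arc_length(1, 1))    # 1 unit on the unit circle
```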
- Chapter 2. Geometry/Angles
- Chapter 3. Geometry/Properties
- Chapter 4. Geometry/Inductive and Deductive Reasoning
- Chapter 5. Geometry/Proof
- Chapter 6. Geometry/Five Postulates of Euclidean Geometry
- Chapter 7. Geometry/Vertical Angles
- Chapter 8. Geometry/Parallel and Perpendicular Lines and Planes
- Chapter 9. Geometry/Congruency and Similarity
- Chapter 10. Geometry/Congruent Triangles
- Chapter 11. Geometry/Similar Triangles
- Chapter 12. Geometry/Quadrilaterals
- Chapter 13. Geometry/Parallelograms
- Chapter 14. Geometry/Trapezoids
- Chapter 15. Geometry/Circles/Radii, Chords and Diameters
- Chapter 16. Geometry/Circles/Arcs
- Chapter 17. Geometry/Circles/Tangents and Secants
- Chapter 18. Geometry/Circles/Sectors
- Appendix A. Geometry/Postulates & Definitions
- Appendix B. Geometry/The SMSG Postulates for Euclidean Geometry
- Part II- Coordinate Geometry:
- Two and Three-Dimensional Geometry and Other Geometric Figures
- Geometry/Perimeter and Arclength
- Geometry/Right Triangles and Pythagorean Theorem
- Geometry/2-Dimensional Functions
- Geometry/3-Dimensional Functions
- Geometry/Area Shapes Extended into 3rd Dimension
- Geometry/Area Shapes Extended into 3rd Dimension Linearly to a Line or Point
- Geometry/Ellipsoids and Spheres
- Geometry/Coordinate Systems (currently incorrectly linked to Astronomy)
- Traditional Geometry:
- Modern geometry
The Big Bang
Instructor/speaker: Prof. Walter Lewin
Today I want to talk with you about Doppler effect, and I will start with the Doppler effect of sound which many of you perhaps remember from your high school physics.
If a source of sound moves towards you or if you move towards a source of sound, you hear an increase in the pitch.
And if you move away from each other you hear a decrease of a pitch.
Let this be the transmitter of sounds and this is the receiver of sound, it could be you, your ears.
And suppose this is the velocity of the transmitter and this is the velocity of the receiver.
And V should be larger than 0 if the velocity is in this direction.
And in the equations that follow, it is smaller than zero if it is in this direction.
The frequency that the receiver will experience, will hear if you like that word, that frequency I call F prime.
And F is the frequency as it is transmitted by the transmitter.
And that F prime is F times the speed of sound minus V receiver divided by the speed of sound minus V of the transmitter.
So this is known as the Doppler shift equation.
If you have volume one of Giancoli you can look it up there as well.
Suppose you are not moving at all.
You are sitting still.
So V receiver is 0.
But I move towards you with 1 meter per second.
If I move towards you then F prime will be larger than F.
If I move away from you with 1 meter per second then F prime will be smaller than F.
The speed of sound is 340 meters per second.
So if F, which is the frequency that I will produce, is 4000 hertz, then if I move to you with 1 meter per second, which I'm going to try to do, then the frequency that you will experience is about 4012 hertz.
It's up by 0.3 percent.
Which is that ratio one divided by 340.
And if I move away from you with 1 meter per second, then the frequency that you will hear is about 12 hertz lower.
So you hear a lower pitch.
About 0.3 percent lower.
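As a quick numerical check of those numbers, here is a minimal Python sketch of the sound Doppler formula from above (the function and variable names are mine, not from the lecture):

```python
def doppler_sound(f, v_receiver, v_transmitter, v_sound=340.0):
    # f' = f * (v_sound - v_receiver) / (v_sound - v_transmitter),
    # with the sign convention used in the lecture
    return f * (v_sound - v_receiver) / (v_sound - v_transmitter)

f = 4000.0  # tuning fork frequency in Hz
print(doppler_sound(f, 0.0, +1.0))   # transmitter approaching: ~4011.8 Hz
print(doppler_sound(f, 0.0, -1.0))   # transmitter receding:    ~3988.3 Hz
```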
I have here a tuning fork.
Tuning fork is 4000 hertz.
I will bang it and I will try to move my hand towards you one meter per second roughly.
That's what I calculated it roughly is.
Move it away from you, towards you, away from you, as long as the sound lasts.
You will hear the pitch change from 4012 to 3988.
Have you heard it?
Who has heard clearly the Doppler shift, raise your hands, please?
Chee chee chee chee it's very clear.
Increased frequency, and then when I move my hand away, a lower pitch.
Now you may think that it makes no difference whether I move towards you or whether you move towards me.
And that is indeed true if the speeds are very small compared to the speed of sound.
But it is not true anymore when we approach the speed of sound.
As an example, if you move away from me with the speed of sound, you will never hear me.
Because the sound will never catch up with you, and so F prime is 0.
And you can indeed confirm that with this equation.
But if I moved away from you with the speed of sound, for sure the sound will reach with you.
And the frequency that you will hear is only half of the one that I produce.
So there's a huge asymmetry.
Big difference whether I move or whether you move.
So I now want to turn towards electromagnetic radiation.
There is also a Doppler shift in electromagnetic radiation.
If you see a traffic light red and you approach it with high enough speed you will experience a higher frequency and then you will see the wavelengths shorter than red and you may even think it's green.
You may even go through that traffic light.
To calculate the proper relation between F prime and F requires special relativity.
And so I will give you the final result.
F prime is the one that you receive.
F is the one that is emitted by the transmitter.
And we get here then 1 - beta divided by 1 + beta to the power one-half.
And beta is V over C, C being the speed of light, and V being the speed, the relative speed between the transmitter and you.
If beta is larger than 0, you are receding from each other in this equation.
If beta is smaller than 0, you are approaching each other.
You may wonder why we don't make a distinction now between the transmitter on the one hand, the velocity, and the receiver on the other hand.
There's only one beta.
Well, that is typical for special relativity.
What counts is only relative motion.
There is no such thing as absolute motion.
The question of whether you are moving relative to me or I relative to you is an illegal question in special relativity.
What counts is only relative motion.
If we are in vacuum, then lambda = C / F and so lambda prime = C / F prime.
Lambda prime is now the wavelength that you receive and lambda is the wavelength that was emitted by the -- by the source.
So I can substitute in here, in this F, C / lambda which is more commonly done.
So this Doppler shift equation for electromagnetic radiation is more common given in terms of lambda.
But of course the two are identical.
And then you get now 1+ beta upstairs divided by 1- beta to the power one-half.
The velocity, there if I'm completely honest with you, is the radial velocity.
If you are here and here is the source of emission and if the relative velocity between the two of you were this, then it is this component, this angle is theta, this component which is V cosine theta, which we call the radial velocity, that is really the velocity which is in that equation.
Police cars measure your speed with radar.
They reflect the radar off your car and they measure the change in frequency as the radar is reflected.
That gives a Doppler shift because of your speed and that's the way they determine the speed of your car to a very high degree of accuracy.
You can imagine that in astronomy Doppler shift plays a key role.
Because we can measure the radial velocities of stars relative to us.
Most stellar spectra show discrete frequencies, discrete wavelength, which result from atoms and molecules in the atmosphere of the stars.
Last lecture I showed you with your own gratings a neon light source and I convinced you that there were discrete frequencies and discrete wavelengths emitted by the neon.
If a particular discrete wavelength, for instance in our own laboratory, would be 5000 Angstrom, I look at the star, and I see that that wavelength is longer, lambda prime is larger than lambda, then I conclude -- lambda prime is larger than lambda, that means the wavelength the way I observe it is shifted towards longer wavelength, is shifted in the direction of the red, and we call that redshift.
It means that we are receding from each other.
If however I measure lambda prime to be smaller than lambda, so lambda prime smaller than lambda, we call that blueshift in astronomy, and it means that we are approaching each other.
And so we make reference to the direction in the spectrum where the lines are moving.
I can give you a simple example.
I looked up for the star Delta Leporis what the redshift is.
There is a line that most stars show in their spectrum which is due to calcium, it even has a particular name, I think it's called the calcium K line, but that's not so important, the name.
In our own laboratory, lambda is known to a high degree of accuracy, is 3933.664 Angstroms.
We look at the star and we recognize without a doubt that that's due to calcium in the atmosphere of the star and we find that lambda prime is 1.298 Angstroms higher than lambda.
So lambda prime is larger than lambda.
So there is redshift and so we are receding from each other.
I go to that equation.
I substitute lambda prime and lambda in there and I find that beta equals +3.3 times 10 to the -4.
The + for beta indeed confirms that we are receding, that our relative velocity is away from each other, and I find therefore that the radial velocity -- I stress it is the radial component of our velocity is then beta times C and that turns out to be approximately 99 kilometers per second.
So I have measured now the relative velocity, radial velocity, between the star and me, and the question whether the star is moving away from me or I move away from the star is an irrelevant question, it is always the relative velocity that matters.
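Here is a minimal Python sketch of that calculation, inverting the wavelength form of the Doppler formula to get beta (the function name is mine, not from the lecture):

```python
C_KM_S = 299792.458  # speed of light in km/s

def beta_from_wavelengths(lam_emitted, lam_observed):
    # lambda'/lambda = sqrt((1 + beta) / (1 - beta)), so
    # beta = (r**2 - 1) / (r**2 + 1), with r = lambda'/lambda
    r = lam_observed / lam_emitted
    return (r**2 - 1) / (r**2 + 1)

lam = 3933.664            # calcium K line in the laboratory, in Angstroms
lam_prime = lam + 1.298   # as observed for Delta Leporis

beta = beta_from_wavelengths(lam, lam_prime)
print(beta)               # ~3.3e-4, positive, so we are receding from each other
print(beta * C_KM_S)      # ~99 km/s radial velocity
```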
How can I measure the wavelength shifts so accurately that we can see the difference of 1.3 angstroms out of 4000?
The way that it's done is that you observe the starlight and you make a spectrum and at the same time you make a spectrum of light sources in the laboratory with well-known and well-calibrated wavelength.
Suppose there were some neon in the atmosphere of a star.
Then you could compare the neon light the way we looked at it last lecture.
You could compare it with the wavelength that you see from the star and you can see very, very small shifts.
You make a relative measurement.
So you need spectrometers with very high spectral resolution.
So there was a big industry in the early twentieth century to measure these relative velocities of stars.
And their speeds were typically 100, 200 kilometers per second.
Not unlike the star that I just calculated for you.
Some of those stars relative to us are approaching.
Other stars are receding in our galaxy.
But it was Slipher in the 1920s who observed the redshift of some nebulae which were believed at the time to be in our own galaxy and he found that they were -- had a very high velocity of up to 1500 kilometers per second, and they were always moving away from us.
And it was found shortly after that that these nebulae were not in our own galaxy but that they were galaxies in their own right.
So they were collections of about 10 billion stars just like our own galaxy.
And so when you take a spectrum of those galaxies, then of course you get the average of millions and millions of stars, but that still would allow you then to calculate the redshift, the average red shift, of the galaxy, and therefore its velocity.
And Hubble, the famous astronomer after which the Hubble space telescope is named, and Humason made a very courageous attempt to measure also the distance to these galaxies.
They knew the velocities.
That was easy because they knew the redshifts.
The distance determinations in astronomy is a can of worms.
And I will spare you the details about the distance determinations.
But Hubble made a spectacular discovery.
He found a linear relation between the velocity and the distances.
And we know this as Hubble's law.
And Hubble's law is that the velocity is a constant which is now named after Hubble, capital H, times D.
And the modern value for H, the modern value for H is 72 kilometers per second per megaparsec.
What is a megaparsec?
A megaparsec is a distance.
In astronomy we don't deal with inches, we don't deal with kilometers, that is just not big enough, we deal with parsecs and megaparsecs.
And one megaparsec is 3.26 times 10 to the 6 light-years.
And if you want that in kilometers, it's not unreasonable question, it's about 3.1 times 10 to the 19 kilometers.
So I could calculate for a specific galaxy that I have in mind, I can calculate the distance if I know the red shift.
I have a particular galaxy in mind for which lambda prime is 1.0033 times lambda.
So notice again that the wavelength that I receive is indeed longer than lambda, so there is a redshift.
I go to my Doppler shift equation which is this one.
I calculate beta.
One equation with one unknown, can solve for beta.
And I find now that V is 5000 kilometers per second.
Very straightforward, nothing special, very easy calculation.
But now with Hubble's law I can calculate what D is.
Because D now is the velocity which is 5000 kilometers per second divided by that 72 and that then is approximately 69 megaparsec.
Again we have the distance if we do it in these units in megaparsecs.
That's about 225 million light-years.
And so the object is about 225 million light-years away from us.
So it took the light 225 million years to reach us.
So when you see light from this object you're looking back in time.
And if you have a galaxy which is twice as far away as this one, then the velocity would be twice as high.
And they're always receding relative to us.
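Here is a minimal Python sketch of the Hubble's law step, starting directly from the 5000 kilometers per second quoted above:

```python
H0 = 72.0                  # Hubble constant, km/s per megaparsec
MPC_IN_LY = 3.26e6         # one megaparsec in light-years

v = 5000.0                 # recession velocity in km/s
D_mpc = v / H0             # Hubble's law: v = H0 * D
print(D_mpc)               # ~69 megaparsecs
print(D_mpc * MPC_IN_LY)   # ~2.3e8, i.e. about 225 million light-years
```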
I'd like to show you now some spectra of three galaxies.
Can I have the first slide, John?
All right, you see here a galaxy and here you see the spectrum of that galaxy.
That may not be very impressive to you.
The lines that are being recognized to be due to calcium K and calcium H are these two dark lines.
Some of you may not even be able to see them.
And this is the comparison spectra taken in the laboratory.
These lines are seen as dark lines, not as bright lines.
We call them absorption lines.
They are formed in the atmosphere of the star.
Why they show up as dark lines and not as bright lines is not important now.
I don't want to go into that.
That's too much astronomy.
But they are lines and that's what counts.
And these lines are shifted towards the red part of the spectrum by a teeny weeny little bit.
You see here this little arrow.
And the conclusion then is that in this case the velocity of that galaxy is 720 miles per second, which translates into 1150 kilometers per second, and so that brings this object, if you believe the modern value for the Hubble constant, to about 16 megaparsec.
This galaxy is substantially farther away.
No surprise that it therefore also looks smaller in size, and notice that here the lines have shifted.
These lines have shifted substantially further.
And if I did my homework, using the velocity that they claim, which they can do with a high degree of accuracy because you can calculate lambda prime divided by lambda, those measurements can be made with enormous accuracy, I find that this object is about 305 megaparsecs away from us, so that's about 20 times further away than this object.
So the speed is also about 20 times higher of course because there's a linear relationship.
And if you look at this one which is even further away, then notice that these lines have shifted even more.
The next slide shows you what I would call a Hubble diagram.
It was kindly sent to me by Wendy Freedman and her coworkers.
Wendy is the leader of a large team of scientists who are making observations with the Hubble space telescope.
You see here distance and you see here velocity in the units that we used in class, kilometers per second.
Forget this part.
That's not so important.
But you see the incredible linear relationship.
And Wendy concluded that Hubble's constant is around 72.
It could be a little lower, it could be a little higher.
She goes out all the way to 400 megaparsecs with associated velocities of about 26000 kilometers per second.
That's about 9% of the speed of light.
So beta is about one-tenth.
So for this object lambda prime / lambda would be about 1.1.
With a 10% shift in the wavelength.
Hubble, who published his data in the twenties, his whole data set, when he concluded that there was a linear relation, had only objects with velocities less than 1100 kilometers per second.
And 1100 kilometers per second is this point here.
So Hubble had only points -- there are not even any in Wendy's diagram, which are here.
And he concluded courageously that there was this linear relationship.
And you see it has stood the acid test.
We still believe it is linear.
The only difference was that Hubble's distances were very different from what we believe today.
They were about 7 times smaller.
So Hubble constant was different for him but the linear relationship was there.
OK, that's enough for this slide.
So now comes a 64 dollar question, why do all galaxies which are far away, why w- do they move away from us?
Well, I can suggest a very simple picture to you.
We are at the center of the universe and there was a huge explosion a long time ago.
We refer to that explosion as the Big Bang.
And since we are at the center where the explosion occurred, the galaxies which obtained the largest speed in the explosion are now the farthest away from us.
Now assume that this explosion is the correct idea.
Assume that there was a Big Bang.
Then I can ask the question now when did it occur?
I can now turn the clock back and I can do the following.
I can take two objects which are a distance D apart today but they were together when the universe was born at the Big Bang.
And let's assume that they have been going away from each other always with the same velocity.
Let's assume that now for simplicity.
So if they always went away with the same velocity from each other then the distance that they are now today is their velocity times the time T which is then the age of the universe.
But we also know with Hubble's law that the velocity V is H times D.
And we assume that these velocities are the same now for simplicity.
You multiply these two equations with each other and you find immediately that the age of the universe is one over H.
And that indeed has the unit of time.
If you take H, the one that we believe in nowadays, and you calculate 1/H, and you work in MKS units, you'll find that TH is about 14 billion years, I'll first give it to you in seconds, it's about 4.3 times 10 to the 17 seconds.
And that is about 14 billion years.
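A minimal Python sketch of that unit conversion, under the lecture's assumption of a constant expansion speed so that the age is 1/H:

```python
H0_KM_S_PER_MPC = 72.0
KM_PER_MPC = 3.1e19                          # one megaparsec in kilometers
SECONDS_PER_YEAR = 3.15e7

H0_per_second = H0_KM_S_PER_MPC / KM_PER_MPC   # ~2.3e-18 1/s
age_seconds = 1.0 / H0_per_second              # ~4.3e17 s
print(age_seconds)
print(age_seconds / SECONDS_PER_YEAR / 1e9)    # ~14 billion years
```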
So with this picture in mind, the universe would be about 14 billion years old, but because of the gravitational attraction of these galaxies (they attract each other), you may expect that the speed of the galaxies was larger in the past, and therefore our assumption that the speed doesn't change is not quite accurate, and so maybe the universe is a little younger, maybe 12 billion years or so.
We know from theoretical calculations that the oldest stars in our own galaxy are about 10 billion years old.
Therefore the universe cannot be younger than 10 billion years.
And there is general consensus in the community that our universe is probably 12 to 14 billion years old.
Now the whole issue of this deceleration that I mentioned as the galaxies moved away from each other is at the heart of research in cosmology.
And in fact it is now believed that very early on in the universe there was first acceleration followed by deceleration, and maybe again acceleration.
That is quite mysterious.
Frontier research is going on in this area.
At MIT we have three world experts: Professor Alan Guth, who made major contributions to this concept in cosmology, we have Ed Bertschinger and we have Scott Burles.
If we take Hubble's law at face value, I can calculate how far the edge of our visible universe is.
Which is the horizon.
We call that the horizon.
I can calculate what the maximum distance is that we can look.
D maximum can be found by making the velocity C.
So that the galaxies are moving -- we are moving away from the galaxies, the galaxies are moving away from us -- with the speed of light.
And so you would find then that D max is C / H.
That is a distance.
And you will find then, no surprise, if you use the modern number, that that distance is 14 billion light-years.
We can never see beyond that.
Because if V = C then beta becomes 1, and if beta becomes 1, lambda prime becomes infinitely large, you have an infinite amount of redshift, and F prime becomes 0.
So the electromagnetic radiation has no frequency anymore and so there's no energy anymore in the photons.
So that is then the edge of our universe, of our visible universe.
You can never see beyond that.
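And the corresponding sketch for the horizon distance, D max = C / H:

```python
C_KM_S = 3.0e5                         # speed of light in km/s
H0 = 72.0                              # km/s per megaparsec
MPC_IN_LY = 3.26e6                     # one megaparsec in light-years

D_max_mpc = C_KM_S / H0                # ~4200 megaparsecs
print(D_max_mpc * MPC_IN_LY / 1e9)     # ~14 billion light-years
```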
So now comes a reasonable question.
How far have we been able to see into the universe?
And to my knowledge the record holder is a galaxy for which lambda prime / lambda is 7.56.
It was published only two months ago.
Now at such very large values of redshift, general relativity becomes very important.
And the equation that we derived here was derived for special relativity.
And so with very high values of red shift like lambda prime / lambda 7.56 you cannot reliably calculate the velocities using that equation.
And so you cannot use that velocity then and shove it into Hubble's law and find the distance.
But there is no question that that object is probably at a distance of something like 13 billion light-years.
Very very far away from us, near the edge of our universe.
I will show you an object that is also believed to be near the edge of the universe.
It comes up in the next slide.
The distance is roughly 12 billion light-years.
So for one, when you look at that object, there it is -- it doesn't look very impressive but what do you expect from an object that is 12 billion light-years away from us?
It's a quasar, which is a very peculiar galaxy.
It emits emission lines, the spectra do not show these dark lines that I showed you earlier, but they actually have emission lines, and the light that you see here was emitted some 12 billion years ago.
And now comes the spectrum from this object in the next slide.
This was published last year by Scott Anderson and his coworkers, University of Washington in Seattle.
I have collaborated with Scott on many projects.
So here you see the spectrum of that quasar that you just saw.
And here you see a line, an emission line, at roughly 7800 Angstroms.
And there are good reasons to believe that this, in the frame of reference of that quasar, was the Lyman alpha line which is emitted by hydrogen, which is 1216 Angstroms.
Now we have here 5000, 4000, 3000, 2000, 1000, so here is roughly where the wavelength lambda is, and here is lambda prime.
Lambda prime is 6.41 times larger than lambda.
He mentions 5.41, but that is Z, which is what astronomers in general quote; Z is lambda prime / lambda - 1, so the ratio lambda prime over lambda is 6.41.
Absolutely amazing that you can make such accurate measurements, such incredible beautiful data, and this line is all the way in the infrared, you cannot see this with your naked eye anymore, our eyes I think can only see up to 6500.
So the 1216 line was in the UV, shifts all the way into the infrared, and this allows astronomers then to measure the value lambda prime / lambda, and there is little doubt that this object is also near the edge of our visible universe.
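A minimal Python check of those numbers, using the Z and the Lyman alpha rest wavelength quoted here:

```python
Z = 5.41                        # redshift quoted for this quasar
lam_lyman_alpha = 1216.0        # rest wavelength of hydrogen Lyman alpha, in Angstroms

ratio = Z + 1                   # lambda'/lambda = Z + 1
print(ratio)                    # 6.41
print(ratio * lam_lyman_alpha)  # ~7795 Angstroms, the line near 7800 A in the spectrum
```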
That's enough, John, thank you.
I'd like to return to the Big Bang, to the explosion some 12 or 15 billion years ago.
And I'd like to raise the question, are we at the center of that explosion?
Are we really at the center of our universe?
That cannot be of course.
It's an incredible arrogance.
It would be too egocentric.
I know that we all think very highly of ourselves, but this cannot be.
We are nothing in the framework of the total universe.
We cannot possibly be at the center.
So how do we reconcile this now with what we observe?
Imagine that you were a raisin in a raisin bread.
Quite a promotion, from a human being to a raisin in a raisin bread.
And I put you in an oven.
And the raisin bread dough is going to expand.
All raisins will see the other raisins move away from them, and the larger the distance to the other raisins, the larger the speed will be.
And each raisin will think that they are very special.
Suppose here this is you, one raisin, and here's another raisin, and here's another raisin.
After a certain amount of time all distances have doubled.
So this one is here.
And this one is here.
So you can immediately see that when you look at this one, that its velocity is substantially lower than that one.
This is twice as far away, you will see twice as high a speed.
But this raisin will look at this one.
And it will also conclude that this raisin relative to this one has a higher velocity than this raisin has relative to this one.
So all of them will think that they are special and you as a raisin would come up with Hubble's law.
You would conclude that the velocity of your other raisins are linearly proportional to the distance.
There is an analogy which is even nicer than raisin bread, and that analogy is with Flatlanders.
A Flatlander is someone who lives on a two-dimensional world.
He happens to live on the surface of a balloon.
And light travels only along the surface of the balloon.
So the two-dimensional world is curved in the third dimension, but the Flatlanders cannot see in the third dimension.
They can only see the second dimension.
So here you have such a world.
So here are the galaxies.
And the universe is curved in the third dimension which these Flatlanders cannot see.
And when you blow this balloon up, the galaxies move away from each other, and the farther the galaxies are away from each other, the higher the velocity.
This model works actually quite well and I want to pursue that in my next calculations.
Let me first try to bring this universe to a halt.
Because I don't want the universe to collapse again.
So you can pursue this idea very nicely and you can see that the Flatlanders would draw quite amazing conclusions.
Here is that balloon.
The balloon has a radius R.
Here is one galaxy.
And here is another galaxy.
And they are a distance S apart.
I will call that later D.
But now I want to call it S.
You will see why.
A little later in time, the universe has expanded, this galaxy is here and this galaxy is here.
And this distance now is R + dR and so this distance now between the two galaxies is S + dS.
And it follows immediately from the geometry that (S + dS) / S is (R + dR) / R.
Simple high school geometry.
I can work this out.
I get S R + R dS is SR + S dR.
I lose this SR.
I divide by dT.
dS/dT is the velocity with which these two galaxies move away from each other.
That's they- what they would measure in their universe.
So there is a V here.
It's clear that S is the distance between them.
I will call that D again now.
So that is D.
And then I have 1/R times dR/dT.
1/R I will write this a little higher.
dR / R.
No, no, no, we had dR/dT.
So now I have 1/R dR/dT.
And look at this.
I have V = D times something.
And that something at a given moment in time has a unique value.
R of the balloon has a unique value.
And dR/dT which is the expansion velocity also has a unique value.
And so it's immediately obvious that in this universe this is Hubble's constant.
And this Hubble's constant is a function of time.
It is changing with time.
And it's obvious that it should change in time.
No reason why it shouldn't do the same in our own universe.
Because R in the past was much smaller.
So even if you take an expansion velocity which is constant, if R is smaller in the past, then H was larger in the past.
And that is the reason why if you ever see a quote of H to be 72 kilometers per second per megaparsec, there's always a little 0 here.
And the 0 means now.
The 0 means not a billion years from now and not a billion years ago.
We really don't know what it was a billion years ago.
Now don't get -- don't carry this analogy between the 2-D balloon and the -- our own universe too far.
But it gives some interesting insights.
It is suggestive of the idea that our own three-dimensional space may be curved in the fourth dimension that we cannot see.
This is very fascinating and I would advise you if you are interested in this area that you take a course in cosmology.
You should also take one in general relativity.
It will open a whole new world for you.
And both Alan Guth and Ed Bertschinger and also Scott Burles are the experts in this area, and they happen to be among our best teachers.
So you can't lose there.
Now comes a key question and that is, will our universe expand forever?
If the universe expands forever, we call that an open universe, that's just a name.
It's also possible that our universe will come to a halt.
That means that H, Hubble's constant, will become 0, that everything will stand still, no relative motion anymore, which then will be followed by collapse.
And so all the redshifts will then come to 0 and will turn to blueshifts.
It's the same idea, the same question, when you throw up an apple, will the apple come back or will the apple not come back.
It depends on the speed of the apple and on the gravitational field of the earth, and we all know that if you throw it fast enough, about 11 kilometers per second in the absence of atmosphere the apple would never come back.
Now if only gravity played the key role in our universe, then we can do a very simple calculation.
And the answer to whether or not our universe is open or closed would then depend on the average density of the universe.
And when I say average density then you have to think in terms of a big scale.
You don't think in terms of Cambridge.
That's not representative for the average density of the universe.
Nor is our solar system.
Nor is our galaxy.
But you have to think probably on the scale of a few hundred million parsecs.
Maybe 500 megaparsecs.
And so I bring you out now into the universe.
Here is the universe.
And these are galaxies.
And here is a sphere which has a radius R and that's on a scale of about 500 megaparsecs.
So rho, the average density, is representative for the universe.
And here, let's suppose you were here, or I can take any part in the universe, there's nothing special about it, and you see here a galaxy and that galaxy moves away from you with a velocity V.
That galaxy has a mass little m.
The mass inside here, capital M, inside this sphere, is 4/3 pi R cubed times rho.
It's the average density, right?
Now we know from Newton that the force that this galaxy will experience is only determined by the mass inside this sphere and not by the mass outside the sphere.
And so if I want to calculate whether these two objects will forever move away from each other or whether they will fall back to each other then all I have to make sure that I make the total energy 0, the sum of the kinetic energy and the potential energy must be 0.
So one-half mV squared of this object, it must be m M G / R.
That is when the total energy is 0.
We will expand forever and ever and ever and it will never come back.
Little m cancels out.
Capital M, I can write 4/3 pi R cubed rho.
Here comes my G and here comes R.
Notice that the R cubed upstairs becomes R squared.
And so if I have an R squared here and I have a V squared here, remember that V / R, that is Hubble's constant.
Because R is D, it's the distance between us and the galaxy.
And so V squared / R squared is the Hubble constant as we measure it today, squared.
And so you'll find then from this simple result that rho, as it should be today, that's why I put a little 0 there, is 3 over 8 pi, I get a G there, and I get H0 squared; that is, rho 0 equals 3 H0 squared divided by 8 pi G.
And so this tells me that if the density, the average density of our universe, is larger than this value, then our universe will come to a halt and will collapse.
And we can calculate that value.
Because we know H0, we think we know, we know G, and so you will find then -- I'll write it down here, that rho 0 is about 10 to the -26 kilograms per cubic meter.
And so if rho is smaller than this amount then we will continue to expand forever, the universe would be open.
If the mean density right now is larger than that amount, then the expansion will come to a halt, redshifts will become blueshifts, and we will collapse again.
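A minimal Python check of that critical density, using H0 = 72 kilometers per second per megaparsec:

```python
import math

G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.1e22            # one megaparsec in meters
H0 = 72e3 / MPC_IN_M         # 72 km/s per Mpc, converted to 1/s (~2.3e-18)

rho_critical = 3 * H0**2 / (8 * math.pi * G)
print(rho_critical)          # ~1e-26 kg per cubic meter
```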
The matter here, this matter density, doesn't have to be tomatoes or potatoes.
It could be electromagnetic radiation.
Because according to Einstein E = MC squared.
So any form of energy represents mass.
So don't think of it necessarily as this being the stars and galaxies and tomatoes.
It is generally believed today that the expansion of our universe will not come to a halt and collapse.
But our views could change.
Enormous development has been going on in the last 10 years and you can read about that in the New York Times.
Almost every month you will read something about the enormous progress that's being made in cosmology.
And of course the idea of whether or not the universe will expand forever, whether it's open or whether it is closed, is something that's emotionally an important issue for us.
If the universe is open and it will expand forever, then stars will all burn out and the universe will become a cold, dead and boring place.
If however the universe is closed, the expansion will come to a halt, it will collapse, and it will end up with what we call the Big Crunch as opposed to the Big Bang.
And it will be hot, there will be fireworks, it will be like the early days of the Big Bang.
Temperatures of billions of degrees.
I'd like to read a poem from Robert Frost which he wrote in 1920.
It's called Fire and Ice.
"Some say the world will end in fire, some say in ice.
From what I've tasted of desire, I hold with those who favor fire.
But if it had to perish twice, I think I know enough of hate to know that for destruction ice is also great and would suffice." There are many people who want our universe to be closed, probably for emotional reasons, maybe for religious reasons, maybe it's more static, maybe it's more reassuring, maybe it's more romantic.
I don't know.
But if it's open the end is not very spectacular.
T.S. Eliot wrote, "This is the way the world ends, not with a bang but a whimper." Now it is conceivable that the expansion of the universe will come to a halt and that the universe will ultimately collapse.
We will have a big crunch.
And it is even conceivable that a new universe will then be born afterwards.
That there will be a new Big Bang.
And if the evolution of that universe were a carbon copy, exact carbon copy of the present universe, a few thousand billion years from now we may have a great 8.02 reunion.
Same place, same time, same people, perhaps see you then.
A black hole is a theoretical region of space in which the gravitational field is so powerful that nothing, not even electromagnetic radiation (e.g. visible light), can escape its pull after having fallen past its event horizon. The term derives from the fact that the absorption of visible light renders the hole's interior invisible, and indistinguishable from the black space around it.
Despite its interior being invisible, a black hole may reveal its presence through an interaction with matter that lies in orbit outside its event horizon. For example, a black hole may be perceived by tracking the movement of a group of stars that orbit its center. Alternatively, one may observe gas (from a nearby star, for instance) that has been drawn into the black hole. The gas spirals inward, heating up to very high temperatures and emitting large amounts of radiation that can be detected from earthbound and earth-orbiting telescopes. Such observations have resulted in the general scientific consensus that—barring a breakdown in our understanding of nature—black holes do exist in our universe.
The idea of an object with gravity strong enough to prevent light from escaping was proposed in 1783 by John Michell, an amateur British astronomer. In 1795, Pierre-Simon Laplace, a French physicist, independently came to the same conclusion. Black holes, as currently understood, are described by the general theory of relativity. This theory predicts that when a large enough amount of mass is present in a sufficiently small region of space, all paths through space are warped inwards towards the center of the volume, preventing all matter and radiation within it from escaping.
While general relativity describes a black hole as a region of empty space with a point-like singularity at the center and an event horizon at the outer edge, the description changes when the effects of quantum mechanics are taken into account. Research on this subject indicates that, rather than holding captured matter forever, black holes may slowly leak a form of thermal energy called Hawking radiation and may well have a finite life. However, the final, correct description of black holes, requiring a theory of quantum gravity, is unknown.
Simulated view of a black hole in front of the Milky Way. The hole has 10 solar masses and is viewed from a distance of 600 km.
According to Einstein’s general theory of relativity, as mass is added to a degenerate star a sudden collapse will take place and the intense gravitational field of the star will close in on itself. Such a star then forms a "black hole" in the universe.
The phrase had already entered the language years earlier with the Black Hole of Calcutta incident of 1756, in which 146 Europeans were locked up overnight in a punishment cell of the barracks at Fort William by Siraj ud-Daulah, and all but 23 perished.
Far away from the black hole a particle can move in any direction. It is only restricted by the speed of light.
Closer to the black hole spacetime starts to deform. There are more paths going towards the black hole than paths moving away.
Inside of the event horizon all paths bring the particle closer to the center of the black hole. It is no longer possible for the particle to escape.
Two concepts introduced by Albert Einstein are needed to explain the phenomenon. The first is that time and space are not two independent concepts, but are interrelated forming a single continuum, spacetime. This continuum has some special properties. An object is not free to move around spacetime at will; it must always move forward in time and cannot change its position in space faster than the speed of light. This is the main result of the theory of special relativity.
The second concept is the base of general relativity; mass deforms the structure of this spacetime. The effect of a mass on spacetime can informally be described as tilting the direction of time towards the mass. As a result, objects tend to move towards masses. This is experienced as gravity. This tilting effect becomes more pronounced as the distance to the mass becomes smaller. At some point close to the mass, the tilting becomes so strong that all the possible paths an object can take lead towards the mass. This implies that any object that crosses this point can no longer get further away from the mass, not even using powered flight. This point is called the event horizon.
The "No Hair" theorem does make some assumptions about the nature of our universe and the matter it contains. Other assumptions would lead to different conclusions. For example, if nature allows magnetic monopoles to exist—which appears to be theoretically possible, but has never been observed—then it should also be possible for a black hole to have a magnetic charge. If the universe has more than four dimensions (as string theories, a controversial but apparently possible class of theories, would require), or has a global anti-de Sitter structure, the theorem could fail completely, allowing many sorts of "hair". However, in our apparently four-dimensional, very nearly flat universe, the theorem should hold.
More general black hole solutions were discovered later in the 20th century. The Reissner-Nordström solution describes a black hole with electric charge, while the Kerr solution yields a rotating black hole. The most general known stationary black hole solution is the Kerr-Newman metric having both charge and angular momentum. All these general solutions share the property that they converge to the Schwarzschild solution at distances that are large compared to the ratio of charge and angular momentum to mass (in natural units).
While the mass of a black hole can take any (positive) value, the other two properties, charge and angular momentum, are constrained by the mass. In natural units, the total charge Q and the total angular momentum J are expected to satisfy Q² + (J/M)² ≤ M² for a black hole of mass M. Black holes saturating this inequality are called extremal. Solutions of Einstein's equation violating the inequality do exist, but do not have a horizon. These solutions have naked singularities and are thus deemed unphysical. The cosmic censorship hypothesis states that it is impossible for such singularities to form due to the gravitational collapse of generic realistic matter. This is supported by numerical simulations.
Black holes forming from the collapse of stars are expected—due to the relatively large strength of electromagnetic force—to retain the nearly neutral charge of the star. Rotation, however, is expected to be a common feature of compact objects, and the black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value.
|Class||Approximate mass||Approximate size|
|Supermassive black hole||~10^5 - 10^9 M_Sun||~0.001 - 10 AU|
|Intermediate-mass black hole||~10^3 M_Sun||~10^3 km ≈ R_Earth|
|Stellar-mass black hole||~10 M_Sun||~30 km|
|Primordial black hole||up to ~M_Moon||up to ~0.1 mm|
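The sizes in this table are set by the Schwarzschild radius, r_s = 2GM/c². A minimal Python sketch for the stellar-mass row (constants rounded):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    # r_s = 2 G M / c^2
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(10 * M_sun) / 1000)  # ~30 km for a 10 solar mass black hole
```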
Outside of the event horizon, the gravitational field is identical to the field produced by any other spherically symmetric object of the same mass. The popular conception of black holes as "sucking" things in is false: objects can maintain an orbit around black holes indefinitely, provided they stay outside the photon sphere (described below), and also ignoring the effects of gravitational radiation, which causes orbiting objects to lose energy, similar to the effect of electromagnetic radiation.
The singularity in a non-rotating black hole is a point, in other words it has zero length, width, and height. The singularity of a rotating black hole is smeared out to form a ring shape lying in the plane of rotation. The ring still has no thickness and hence no volume.
The appearance of singularities in general relativity is commonly perceived as signaling the breakdown of the theory. This breakdown is not unexpected, as it occurs in a situation where quantum mechanical effects should become important, since densities are high and particle interactions should thus play a role. Unfortunately, to date it has not been possible to combine quantum and gravitation effects in a single theory. It is however quite generally expected that a theory of quantum gravity will feature black holes without singularities.
Note, however, that the formation of the singularity takes a finite (and very short) time only from the point of view of an observer who resides in the collapsing object. From the point of view of a distant observer, it takes infinite time due to gravitational time dilation.
While light can still escape from inside the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light reaching an outside observer from inside the photon sphere must have been emitted by objects inside the photon sphere but still outside of the event horizon.
Other compact objects, such as neutron stars, can also have photon spheres. This follows from the fact that the gravitational field of an object does not depend on its actual size; hence any object that is smaller than 1.5 times the Schwarzschild radius corresponding to its mass will in fact have a photon sphere.
Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole this effect becomes so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.
The ergosphere of a black hole is bounded on the inside by the event horizon and on the outside by an oblate spheroid, which touches the event horizon at the poles of rotation and is noticeably wider around the equator.
Within the ergosphere, space-time is dragged around faster than light—general relativity forbids material objects to travel faster than light (so does special relativity), but allows regions of space-time to move faster than light relative to other regions of space-time.
Objects and radiation (including light) can stay in orbit within the ergosphere without falling to the center. But they cannot hover (remain stationary, as seen by an external observer), because that would require them to move backwards faster than light relative to their own regions of space-time, which are moving faster than light relative to an external observer.
Objects and radiation can also escape from the ergosphere. In fact the Penrose process predicts that objects will sometimes fly out of the ergosphere, obtaining the energy for this by "stealing" some of the black hole's rotational energy. If a large total mass of objects escapes in this way, the black hole will spin more slowly and may even stop spinning eventually.
The temperature of the emitted black body spectrum is proportional to the surface gravity of the black hole. For a Schwarzschild black hole this is inversely proportional to the mass. Consequently, large black holes are very cold and emit very little radiation. A stellar black hole of 10 solar masses, for example, would have a Hawking temperature of several nanokelvin, much less than the 2.7 K produced by the Cosmic Microwave Background. Micro black holes, on the other hand, could be quite bright, producing high-energy gamma rays.
Due to the low Hawking temperature of stellar black holes, Hawking radiation has never been observed from any of the black hole candidates.
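A minimal Python sketch of the Hawking temperature of a Schwarzschild black hole, T = ħc³ / (8πGMk_B), with rounded constants:

```python
import math

hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

def hawking_temperature(mass_kg):
    # T = hbar * c**3 / (8 * pi * G * M * k_B); smaller black holes are hotter
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(10 * M_sun))  # ~6e-9 K, i.e. a few nanokelvin
```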
The strength of the tidal force of a black hole depends on how gravitational attraction changes with distance, rather than on the absolute force being felt. This means that small black holes cause spaghettification while infalling objects are still outside their event horizons, whereas objects falling into large, supermassive black holes may not be deformed or otherwise feel excessively large forces before passing the event horizon.
From the viewpoint of a distant observer, an object falling into a black hole appears to slow down, approaching but never quite reaching the event horizon, and it appears to become redder and dimmer, because of the extreme gravitational red shift caused by the gravity of the black hole. Eventually, the falling object becomes so dim that it can no longer be seen, at a point just before it reaches the event horizon. All of this is a consequence of time dilation: the object's movement is one of the processes that appear to run slower and slower, and the time dilation effect is more significant than the acceleration due to gravity; the frequency of light from the object appears to decrease, making it look redder, because the light appears to complete fewer cycles per "tick" of the observer's clock; lower-frequency light has less energy and therefore appears dimmer, as well as redder.
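The slowing and reddening can be quantified by the static gravitational redshift factor sqrt(1 - r_s/r); the short sketch below (not from the original article) evaluates it at a few radii to show how it tends to zero at the horizon.

```python
import math

def redshift_factor(r_over_rs):
    """Ratio of received to emitted frequency for a static emitter at r = r_over_rs * r_s."""
    if r_over_rs <= 1.0:
        raise ValueError("a static emitter must sit outside the event horizon (r > r_s)")
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for r in (10.0, 2.0, 1.1, 1.01, 1.0001):
    f = redshift_factor(r)
    print(f"r = {r:8.4f} r_s : frequency ratio {f:.4f}  (redshift 1+z = {1.0/f:.1f})")
# The ratio approaches zero as r approaches r_s, which is why the falling object fades and reddens.
```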
From the viewpoint of the falling object, distant objects generally appear blue-shifted due to the gravitational field of the black hole. This effect may be partly (or even entirely) negated by the red shift caused by the velocity of the infalling object with respect to the object in the distance.
The amount of proper time a faller experiences below the event horizon depends upon where they started from rest, with the maximum being for someone who starts from rest at the event horizon. A paper in 2007 examined the effect of firing a rocket pack within the black hole, showing that this can only reduce the proper time of a person who starts from rest at the event horizon. For anyone else, a judicious burst of the rocket can extend the lifetime of the faller, but overdoing it will again reduce the proper time experienced. In any case, this cannot prevent the inevitable collision with the central singularity.
Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature, or because a star which would have been stable receives a lot of extra matter in a way which does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight (the ideal gas law explains the connection between pressure, temperature, and volume).
The collapse may be stopped by the degeneracy pressure of the star's constituents, condensing the matter into an exotic denser state. The result is one of the various types of compact star. Which type of compact star is formed depends on the mass of the remnant - the matter left over after changes triggered by the collapse (such as a supernova or pulsations leading to a planetary nebula) have blown away the outer layers. Note that this can be substantially less massive than the original star - remnants exceeding 5 solar masses are produced by stars which were over 20 solar masses before the collapse.
If the mass of the remnant exceeds ~3-4 solar masses (the Tolman-Oppenheimer-Volkoff limit) - either because the original star was very heavy or because the remnant collected additional mass through accretion of matter - even the degeneracy pressure of neutrons is insufficient to stop the collapse. After this no known mechanism (except maybe the quark degeneracy pressure, see quark star) is powerful enough to stop the collapse, and the object will inevitably collapse into a black hole.
This gravitational collapse of heavy stars is assumed to be responsible for the formation of most (if not all) stellar mass black holes.
Theoretically this bound is expected to lie around the Planck mass (~10¹⁹ GeV/c²), where quantum effects are expected to make the theory of general relativity break down completely. This would put the creation of black holes firmly out of reach of any high energy process occurring on or near the Earth. Certain developments in quantum gravity, however, suggest that this bound could be much lower. Some braneworld scenarios, for example, put the Planck mass much lower, maybe even as low as 1 TeV. This would make it possible for micro black holes to be created in the high energy collisions occurring when cosmic rays hit the earth's atmosphere, or maybe even in the new Large Hadron Collider at CERN. These theories are, however, very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists.
Much larger contributions to a black hole's mass can be obtained when it merges with other stars or compact objects. The supermassive black holes suspected in the centers of most galaxies are expected to have formed from the coagulation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes.
A stellar black hole of 5 solar masses has a Hawking temperature of about 12 nanokelvin. This is far less than the 2.7 K produced by the cosmic microwave background. Stellar mass (and larger) black holes thus receive more mass from the CMB than they emit through Hawking radiation and will thus grow instead of shrink. In order to have a Hawking temperature larger than 2.7 K (and thus be able to evaporate), a black hole needs to be lighter than the Moon (and would thus have a diameter of less than a tenth of a millimeter).
On the other hand, if a black hole is very small, the radiation effects are expected to become very strong. Even a black hole that is heavy compared to a human would evaporate in an instant. A black hole the weight of a car (~10⁻²⁴ m across) would only take a nanosecond to evaporate, during which time it would briefly have a luminosity more than 200 times that of the sun. Lighter black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10⁻⁸⁸ seconds to evaporate completely. Of course, for such a small black hole quantum gravitation effects are expected to play an important role and could even, although current developments in quantum gravity do not indicate so, hypothetically make such a small black hole stable.
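The figures above can be checked to order of magnitude with the standard evaporation-time estimate t ≈ 5120πG²M³/(ħc⁴); the sketch below (not from the original article) does so. The formula ignores the number of emitted particle species, which would shorten these lifetimes, so it only reproduces the quoted numbers to within an order of magnitude or two, and the car mass used is just a placeholder value.

```python
import math

HBAR, C, G, K_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23   # SI units, rounded
M_MOON = 7.35e22      # lunar mass, kg
T_CMB = 2.7           # cosmic microwave background temperature, K
EV = 1.602e-19        # one electronvolt, J

def evaporation_time(mass_kg):
    """Approximate lifetime of an isolated Schwarzschild black hole, in seconds."""
    return 5120.0 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# Mass at which the Hawking temperature equals the CMB temperature (the "lighter than
# the Moon" threshold mentioned above):
m_equilibrium = HBAR * C**3 / (8.0 * math.pi * G * K_B * T_CMB)
print(f"Equilibrium mass ~ {m_equilibrium:.2e} kg  (~{m_equilibrium / M_MOON:.2f} lunar masses)")

m_car = 1500.0                    # kg, a rough car mass
m_tev = 1e12 * EV / C**2          # 1 TeV/c^2 expressed in kg
print(f"Car-mass hole  evaporates in ~ {evaporation_time(m_car):.1e} s")
print(f"1 TeV/c^2 hole evaporates in ~ {evaporation_time(m_tev):.1e} s")
```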
Most accretion disks and gas jets are not clear proof that a stellar-mass black hole is present, because other massive, ultra-dense objects such as neutron stars and white dwarfs cause accretion disks and gas jets to form and to behave in the same ways as those around black holes. But they can often help by telling astronomers where it might be worth looking for a black hole.
On the other hand, extremely large accretion disks and gas jets may be good evidence for the presence of supermassive black holes, because as far as we know any mass large enough to power these phenomena must be a black hole.
Steady X-ray and gamma ray emissions also do not prove that a black hole is present, but can tell astronomers where it might be worth looking for one - and they have the advantage that they pass fairly easily through nebulae and gas clouds.
But strong, irregular emissions of X-rays, gamma rays and other electromagnetic radiation can help to prove that a massive, ultra-dense object is not a black hole, so that "black hole hunters" can move on to some other object. Neutron stars and other very dense stars have surfaces, and matter colliding with the surface at a high percentage of the speed of light will produce intense flares of radiation at irregular intervals. Black holes have no material surface, so the absence of irregular flares around a massive, ultra-dense object suggests that there is a good chance of finding a black hole there.
Intense but one-time gamma ray bursts (GRBs) may signal the birth of "new" black holes, because astrophysicists think that GRBs are caused either by the gravitational collapse of giant stars or by collisions between neutron stars, and both types of event involve sufficient mass and pressure to produce black holes. But it appears that a collision between a neutron star and a black hole can also cause a GRB, so a GRB is not proof that a "new" black hole has been formed. All known GRBs come from outside our own galaxy, and most come from billions of light years away so the black holes associated with them are actually billions of years old.
Quasars are thought to be the accretion disks of supermassive black holes, since no other known object is powerful enough to produce such strong emissions. Quasars produce strong emission across the electromagnetic spectrum, including UV, X-rays and gamma-rays and are visible at tremendous distances due to their high luminosity. Between 5 and 25% of quasars are "radio loud," so called because of their powerful radio emission.
A gravitational lens is formed when the light from a very distant, bright source (such as a quasar) is "bent" around a massive object (such as a black hole) between the source object and the observer. The process is known as gravitational lensing, and is one of the predictions of the general theory of relativity. According to this theory, mass "warps" space-time to create gravitational fields and therefore bend light as a result.
A source image behind the lens may appear as multiple images to the observer. In cases where the source, massive lensing object, and the observer lie in a straight line, the source will appear as a ring behind the massive object.
Gravitational lensing can be caused by objects other than black holes, because any very strong gravitational field will bend light rays. Some of these multiple-image effects are probably produced by distant galaxies.
Unfortunately, since the time of Johannes Kepler, astronomers have had to deal with the complications of real astronomy.
According to the American Astronomical Society, every large galaxy has a supermassive black hole at its center. The black hole’s mass is proportional to the mass of the host galaxy, suggesting that the two are linked very closely. The Hubble Space Telescope and ground-based telescopes in Hawaii were used in a large survey of galaxies.
For decades, astronomers have used the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. However, theoretical and observational studies have shown that the active galactic nuclei (AGN) in these galaxies may contain supermassive black holes. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of gas and dust called an accretion disk; and two jets that are perpendicular to the accretion disk.
Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, and the Sombrero Galaxy.
In November 2004 a team of astronomers reported the discovery of the first well-confirmed intermediate-mass black hole in our Galaxy, orbiting three light-years from Sagittarius A*. This black hole of 1,300 solar masses is within a cluster of seven stars, possibly the remnant of a massive star cluster that has been stripped down by the Galactic Centre. This observation may add support to the idea that supermassive black holes grow by absorbing nearby smaller black holes and stars.
In January 2007, researchers at the University of Southampton in the United Kingdom reported finding a black hole, possibly of about 10 solar masses, in a globular cluster associated with a galaxy named NGC 4472, some 55 million light-years away.
Our Milky Way galaxy contains several probable stellar-mass black holes which are closer to us than the supermassive black hole in the Sagittarius A* region. These candidates are all members of X-ray binary systems in which the denser object draws matter from its partner via an accretion disk. The probable black holes in these pairs range from three to more than a dozen solar masses. The most distant stellar-mass black hole ever observed is a member of a binary system located in the Messier 33 galaxy.
In theory there is no smallest size for a black hole. Once created, it has the properties of a black hole. Stephen Hawking theorized that primordial black holes could evaporate and become even tinier, i.e. micro black holes. Searches for evaporating primordial black holes are proposed for the GLAST satellite to be launched in 2008. However, if micro black holes can be created by other means, such as by cosmic ray impacts or in colliders, that does not imply that they must evaporate.
The formation of black hole analogs on Earth in particle accelerators has been reported. These black hole analogs are not the same as gravitational black holes, but they are vital testing grounds for quantum theories of gravity.
They act like black holes because of the correspondence between the theory of the strong nuclear force, which has nothing to do with gravity, and the quantum theory of gravity. They are similar because both are described by string theory. So the formation and disintegration of a fireball in quark gluon plasma can be interpreted in black hole language. The fireball at the Relativistic Heavy Ion Collider [RHIC] is a phenomenon which is closely analogous to a black hole, and many of its physical properties can be correctly predicted using this analogy. The fireball, however, is not a gravitational object. It is presently unknown whether the much more energetic Large Hadron Collider [LHC] would be capable of producing the speculative large extra dimension micro black hole, as many theorists have suggested.
In 1783, the English natural philosopher John Michell suggested that "if the semi-diameter of a sphere of the same density as the Sun were to exceed that of the Sun in the proportion of 500 to 1, a body falling from an infinite height towards it would have acquired at its surface greater velocity than that of light, and consequently supposing light to be attracted by the same force in proportion to its vis inertiae, with other bodies, all light emitted from such a body would be made to return towards it by its own proper gravity."
This assumes that light is influenced by gravity in the same way as massive objects.
In 1796, the mathematician Pierre-Simon Laplace promoted the same idea in the first and second editions of his book Exposition du système du Monde (it was removed from later editions).
The idea of black holes was largely ignored in the nineteenth century, since light was then thought to be a massless wave and therefore not influenced by gravity. Unlike a modern black hole, the object behind the horizon of these early "dark stars" was assumed to be stable against collapse.
In 1930, the astrophysicist Subrahmanyan Chandrasekhar argued that, according to special relativity, a non-rotating body of electron-degenerate matter above 1.44 solar masses (the Chandrasekhar limit) would collapse, since nothing known at that time could stop it from doing so. His arguments were opposed by Arthur Eddington, who believed that something would inevitably stop the collapse. Eddington was partly right: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star. But in 1939, Robert Oppenheimer published papers (with various co-authors) which predicted that stars above about three solar masses (the Tolman-Oppenheimer-Volkoff limit) would collapse into black holes for the reasons presented by Chandrasekhar.
Oppenheimer and his co-authors used Schwarzschild's system of coordinates (the only coordinates available in 1939), which produced mathematical singularities at the Schwarzschild radius, in other words the equations broke down at the Schwarzschild radius because some of the terms were infinite. This was interpreted as indicating that the Schwarzschild radius was the boundary of a "bubble" in which time "stopped". For a few years the collapsed stars were known as "frozen stars" because the calculations indicated that an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it inside the Schwarzschild radius. But many physicists could not accept the idea of time standing still inside the Schwarzschild radius, and there was little interest in the subject for over 20 years.
In 1958 David Finkelstein broke the deadlock over "stopped time" and introduced the concept of the event horizon by presenting the Eddington-Finkelstein coordinates, which enabled him to show that "The Schwarzschild surface r = 2 m is not a singularity but acts as a perfect unidirectional membrane: causal influences can cross it but only in one direction". Note that at this stage all theories, including Finkelstein's, covered only non-rotating, uncharged black holes.
In 1963 Roy Kerr extended Finkelstein's analysis by presenting the Kerr metric (coordinates) and showing how this made it possible to predict the properties of rotating black holes. In addition to its theoretical interest, Kerr's work made black holes more believable for astronomers, since black holes are formed from stars and all known stars rotate.
In 1967 astronomers discovered pulsars, and within a few years could show that the known pulsars were rapidly rotating neutron stars. Until that time, neutron stars were also regarded as just theoretical curiosities. So the discovery of pulsars awakened interest in all types of ultra-dense objects that might be formed by gravitational collapse.
In 1970, Stephen Hawking and Roger Penrose proved that black holes are a generic feature of Einstein's theory of gravity, not an artifact of Schwarzschild's particular solution, and therefore cannot be avoided in some collapsing objects.
In 1971, Louise Webster and Paul Murdin, at the Royal Greenwich Observatory, and Charles Thomas Bolton, working independently at the University of Toronto's David Dunlap Observatory, observed HDE 226868 wobble, as if orbiting around an invisible but massive companion. Further analysis led to the declaration that the companion, Cygnus X-1, was in fact a black hole.
In March 2005, physicist George Chapline at the Lawrence Livermore National Laboratory in California proposed that black holes do not exist, and that objects currently thought to be black holes are actually dark-energy stars. He draws this conclusion from some quantum mechanical analyses. Although his proposal currently has little support in the physics community, it was widely reported by the media. A similar theory about the non-existence of black holes was later developed by a group of physicists at Case Western Reserve University in June 2007.
Among the alternate models are magnetospheric eternally collapsing objects, clusters of elementary particles (e.g., boson stars), fermion balls, self-gravitating, degenerate heavy neutrinos and even clusters of very low mass (~0.04 solar mass) black holes.
The Hawking radiation reflects a characteristic temperature of the black hole, which can be calculated from its entropy. The more massive a black hole becomes, the lower its temperature falls: the more energy a black hole absorbs, the colder it gets. A black hole with roughly the mass of the Moon would have a temperature in equilibrium with the cosmic microwave background radiation (about 2.73 K). More massive than this, a black hole will be colder than the background radiation, and it will gain energy from the background faster than it gives energy up through Hawking radiation, becoming even colder still. However, for a less massive black hole the effect implies that the mass of the black hole will slowly evaporate with time, with the black hole becoming hotter and hotter as it does so. Although these effects are negligible for black holes massive enough to have been formed astronomically, they would rapidly become significant for hypothetical smaller black holes, where quantum-mechanical effects dominate. Indeed, small black holes are predicted to undergo runaway evaporation and eventually vanish in a burst of radiation.
Although general relativity can be used to perform a semi-classical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system which have the same macroscopic qualities (such as mass, charge, pressure, etc.). But without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some promise has been shown by string theory, however. There one posits that the microscopic degrees of freedom of the black hole are D-branes. By counting the states of D-branes with given charges and energy, the entropy for certain supersymmetric black holes has been reproduced. Extending the region of validity of these calculations is an ongoing area of research.
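For reference, the semi-classical entropy mentioned above is the Bekenstein-Hawking value S = k_B·c³·A/(4Għ), with A the horizon area; the small sketch below (not from the original article, constants rounded) evaluates it in units of k_B.

```python
import math

HBAR, C, G = 1.055e-34, 2.998e8, 6.674e-11   # SI units, rounded
M_SUN = 1.989e30                              # solar mass, kg

def bh_entropy_in_kB(mass_kg):
    """Bekenstein-Hawking entropy of a Schwarzschild black hole, in units of k_B."""
    r_s = 2.0 * G * mass_kg / C**2            # Schwarzschild radius
    area = 4.0 * math.pi * r_s**2             # horizon area
    return C**3 * area / (4.0 * G * HBAR)     # S / k_B

print(f"S(1 solar mass)    ~ {bh_entropy_in_kB(M_SUN):.1e} k_B")
print(f"S(10 solar masses) ~ {bh_entropy_in_kB(10 * M_SUN):.1e} k_B")   # entropy scales as M^2
```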
Black holes, however, might violate the rule that information can never be destroyed. The position under classical general relativity is subtle but straightforward: because of the classical no hair theorem, we can never determine what went into the black hole. However, as seen from the outside, information is never actually destroyed, as matter falling into the black hole takes an infinite time to reach the event horizon.
Ideas about quantum gravity, on the other hand, suggest that there can only be a limited finite entropy (i.e. a maximum finite amount of information) associated with the space near the horizon; but the change in the entropy of the horizon plus the entropy of the Hawking radiation is always sufficient to take up all of the entropy of matter and energy falling into the black hole.
Many physicists are concerned however that this is still not sufficiently well understood. In particular, at a quantum level, is the quantum state of the Hawking radiation uniquely determined by the history of what has fallen into the black hole; and is the history of what has fallen into the black hole uniquely determined by the quantum state of the black hole and the radiation? This is what determinism, and unitarity, would require.
For a long time Stephen Hawking had opposed such ideas, holding to his original 1975 position that the Hawking radiation is entirely thermal and therefore entirely random, containing none of the information held in material the hole has swallowed in the past; this information, he reasoned, had been lost. However, on 21 July 2004 he presented a new argument, reversing his previous position. On this new calculation, the entropy (and hence information) associated with the black hole escapes in the Hawking radiation itself. However, making sense of it, even in principle, is difficult until the black hole completes its evaporation. Until then it is impossible to relate in a 1:1 way the information in the Hawking radiation (embodied in its detailed internal correlations) to the initial state of the system. Once the black hole evaporates completely, such identification can be made, and unitarity is preserved.
By the time Hawking completed his calculation, it was already very clear from the AdS/CFT correspondence that black holes decay in a unitary way. This is because the fireballs in gauge theories, which are analogous to Hawking radiation, are unquestionably unitary. Hawking's new calculation has not been evaluated by the specialist scientific community, because the methods he uses are unfamiliar and of dubious consistency; but Hawking himself found it sufficiently convincing to pay out on a bet he had made in 1997 with Caltech physicist John Preskill, to considerable media interest.
QUESTIONS AND ANSWERS ON FDA'S DRAFT GUIDANCE ON THE JUDICIOUS USE OF MEDICALLY IMPORTANT ANTIMICROBIAL DRUGS IN FOOD-PRODUCING ANIMALS.
Jun 28, 2010; ROCKVILLE, MD -- The following information was released by the Center for Veterinary Medicine: What is the purpose of the draft...
Knowledge of the principles of judicious antibiotic use for upper respiratory infections: a survey of senior medical students.(Original Article)
Sep 01, 2005; Objective: Senior medical students (n = 2,433) from 21 accredited medical schools in New England and the mid-Atlantic states were... | http://www.reference.com/browse/judicious | 13 |
56 | EARTH'S BLEAK FUTURE
© Copyright: 6 Oct. 02 & 6 Jan. 06 & 22 Jul. 08 & 5 Feb. 09 & 17 Nov. 09 & 23 Dec. 09 & 5 May 10 & 27 Dec. 12
Our Sun's half-Jovian-sized dark star companion generates five comet swarms that somewhat periodically pass through the inner solar system. Earth appears threatened by multiple (5 - 7) major impacts at relatively specific periods between 2009 AD and 2160 AD. Earth's surface may be reformed by either a single impact or a series of impacts circa 3823 AD.
- IS THERE A SIGNIFICANT IMPACT THREAT?
- HAS EARTH BEEN HIT OFTEN?
- DOES VULCAN EXIST AND HAS ITS ORBIT BEEN VALIDATED?
- DOES ANYONE IMPORTANT THINK THIS IS A PROBLEM?
- WHEN COULD WE BE HIT NEXT?
- ARE THERE ANY CREDIBLE PRECISE PREDICTIONS?
- COULD THEY BE CORRECT?
1. IS THERE A SIGNIFICANT IMPACT THREAT?
A theory has been proposed that accurately predicts the orbits of our solar planets as well as the orbits of planets (and tiny stars) in nearby star systems. This theory postulates that our solar system was formed with the aid of a solar companion star named Vulcan by the ancients. Vulcan is estimated to orbit the Sun with a period slightly less than 5000 years. Vulcan draws comets from the Kuiper belt and they occasionally pass through the Earth's orbit with disastrous consequences (e.g. causing Noah's flood). A dark body (that could be Vulcan) has been detected by the IRAS satellite. The IRAS measurements and data from other sources have been utilized to determine the orbit and mass of Vulcan.
2. HAS EARTH BEEN HIT OFTEN?
Comet impacts cause times of great cold, even Ice Ages, because they stir up so much dust. Evidence can be found in "tree ring", "ice core" and other data. Geochronology data shown in the following table indicate that Earth has been hit often recently. Five comet swarms have formed at two distant (444 AU) locations separated by 13° in Vulcan's elliptical orbit. All are formed in 3:2 resonance orbits during two sequential Vulcan orbital revolutions. These are similar to the red Plutinos found in a 3:2 resonance orbit with Neptune. These Vulcan-related swarms are labeled A' & A and B' & B respectively. A rogue swarm C (same period anticipated) is also present. The separation of the A'-to-A and B'-to-B swarms is found to be about 666 - 790 years. Multiple comet clusters (Cl) within each swarm exist. Images of these clusters have been found carved in solid rock.
Swarm A, A'|
B, B' or C
|2119/2154 ADb||B: Cl - 1c||-||-||Many Clusters - 2006-2130 ADd|
289 YA, 320 YA|
|237 - 278||-||Small Strikes 1680 - 1700 AD?|
Mahuika Crater, New Zealand tsunami 1422 or 43
1120 +/-5 YA
|238||-||Water, China Strike|
Tree Ring Data 880 +/- 5 AD
|1464 YA||A:Cl-2j||300||-||Dark Ages Start Two-stage event after Comet of 531 AD Exploded Near The Sun.|
Volcano or Comet dust
|2044 YA||Comet dust||-||-||Caesarís Comet|
|223||-||Volcano or Comet dust|
Volcano or Comet dust
|2785 to 2838 YA|
|374 - 321|| 843 - 790|
|Global Climatic Boundary|
Exodus?/Bronze Age Collapse
|3370 YA||Volcanic||-||-||Volcanic dust|
|3582||?c||-||-||Joshua Impact Event 1582 BC?|
|271||-||Comet - Deucalion Flood |
Comet - ?
|4020 YA||Volcanic||-||-||Volcanic dust|
4344 - 54 YA
|149||-||Sodom/Gomorrah sulfur found - 2195 BC?
Global Comet Event
|4772 YA||Volcanicj||-||-||Volcanic dust|
5195 5201 YA
|189 -195||~ 544?||S. Africa Strike |
|5550 +/-50 YA||A':Cl-2||-||-||Oldest Societies 3500 - 3600 BC|
|6370 YA||Bg||-||-||Tree Ring Data 4370 BC|
|7000 YA||B':Cl-1||-||-||Water Strike/Black Sea Flood|
|7600 YA||C:Cl-1||-||-||Cold Dry Period|
|170||-||Volcanism/Mini Ice Age|
|8550 YA||A':Cl-2||-||-||Global Climate Change|
|9350 YA||A': Cl-1||-||-||Global Climate Change|
|9797 - 9907 YA|
9946 +/- 20
|-||-||Strikes/Volcanism - Five dates from Copenhagen; 7797 BC, 7812 BC, 7878 BC, 7907 BC, 7946 BC|
|10350 +/- 50 YA||B':Cl-1||-||-||End Of Last Ice Age|
|10850 +/- ? YA||C:Cl-2||-||-||Global Climate Change|
|11320 +/- 30 YA|
11703 +/-20 YA
exact ice core data. 9,703 BC Atlantis impact event core
|12220 +/- ? YA||A':Cl-?||-||-||Younger Dryas|
|501||-||exact ice core 12,679 BP|
13.18 kyr BP (Bolling/Allerod)
|14235 +/- ? YA|
|375?||-|| huge pulse of freshwater drained from continental ice sheets ice
Goughs Cave (Cheddar) radiocarbon
The Theoretical Strike Dates are listed first and are based on a 4,969-year Vulcan orbital period and a corresponding 3313-year comet swarm period. See the above table. The events denoted as "base" dates (YA = years ago from 2000 AD) are considered the most accurate impact times. These are used to project when the specified comet swarm has, or could again, pass near Earth. Two clusters may pass near Earth within less than a hundred years. Data supports all but one of the 19 theoretically predicted events. A stable (deflected?) rogue comet swarm C (3227 +/- 24 year period) adds five more.
3. DOES VULCAN EXIST AND HAS ITS ORBIT BEEN VALIDATED?
Vulcan's orbital parameters are both theoretically computed and measured from the past comet impact data listed in the above table. Vulcan is estimated to orbit the Sun with a period slightly less than 5000 years. Its orbit appears to be very eccentric and highly inclined to the ecliptic plane. Vulcan passed aphelion around 1971, about 448 AU from the Sun. Vulcan is believed to be black or red and dense like Earth or Mercury. It is somewhat bigger than Saturn, being about 0.05% of a solar mass (141 +/- 35 Earth masses). An unusual theoretical analysis leads to Vulcan's orbital parameters as listed in the following table.
Science Digest reports that this brown dwarf was sought by the Pioneer space probes and there are some NEWSPAPER reports that it may have been found by the Infrared Astronomical Survey Satellite (IRAS).
|Parameter||Value||Max. Error||Min. 2 sigma Error||Forbes'(1880)|
|Period (years)||4969.0||+30.4/- 24.3||+/- 11.5||5000|
|Orbital Eccentricity||0.537||+0.088/-0.035||+/- 0.0085||not cal.|
|Orbital Inclination||48.44°||+3.12°/-9.05°||+/- 0.23°||45°|
|Longitude of the Ascending Node||189.0°||+/- 1.3°||+/- 1.3°||185°|
|Argument Of Perihelion||257.8°||+6.11°/-13.47°||+/- 0.90°||not cal.|
|Time of Aphelion (years)||1970 AD||+/- 1.0||+/- 1.0||not cal.|
Vulcan's orbital parameters are bounded by two error residual sets, depending on whether or not the IRAS (Infrared Astronomical Survey) object IRAS 1732+239 is Vulcan. The giant comet CR105's period (3319.3 +/- 20 years) is in the predicted 3:2 resonance with it. Other Kuiper Belt objects, 2001 FP185 and 2002 GB332, appear to be in a similar 3:2 resonance, and 1999 DP8 in a 4:1 resonance with Vulcan.
The comet impact data testifies to Vulcan's existence and offers another method to estimate Vulcan's orbital parameters. Since the comet swarms form in 3:2 ratios with Vulcan's period, both can be independently measured from the impact data.
VULCAN'S PERIOD FROM COMET STRIKE DATES
|Comet Swarm Period (one sigma, years)||Vulcan's Period (years)||Determination Method|
|3312.7 +/- 7.7||4969.0 +/- 5.75||Theoretical - IRAS point|
|3314.7 +/- 18.3||4972.1 +/- 27.4||Theoretical - not IRAS point|
|3319.3 +/- 20||4979 +/- 30||Based on CR105's period|
|3312.4 +/- 113||4969 +/- 170||From 49 possible measurements|
|3317 +/- 104||4975 +/- 156||A, A', B & B' data - no IRAS point|
|3313.3 (mean)||4969.5 +/- 10||Exact interval 11,703 YA* to 236 AD|
The theoretical 489-year "short leg" (the A'-to-A or B'-to-B impact data interval, deduced from the orbital eccentricity) is measured to be 515 +/- 67 years.
*11,703 from 2000 AD
The comet swarms form into clusters, for example for the A swarm, A:Cl-2 and A:Cl-1, and the intervals between their perihelions are not quite periodic because their orbits are influenced by Vulcan's gravitational forces. Modeled examples of these quasi-periodic intervals for the A:Cl-2 cluster (going backward in time from the most recent perihelion Earth impacts) are 3111, 3417 and 3411 years; thus there is one Short, then two Long intervals. Similarly for the A:Cl-1 cluster, these quasi-periodic intervals are 3237, 3233 and 3468 years; thus there are two Short, then one Long interval. While these modeled intervals are subject to some errors, the resonance over two Vulcan orbital periods must be maintained. Thus the sum of these three intervals is anticipated to be about 9938 years (two Vulcan orbital periods).
The precise difference in these times reflects two Vulcan orbital periods. That difference, from 236 AD to 11,703 years before 2000 AD, is exactly 9939 years. Thus Vulcan's orbital period must be 4969.5 +/- 10.0 years. Thus it is highly likely that the IRAS (Infrared Astronomical Survey) object IRAS 1732+239 is Vulcan and that the orbital parameters derived from it are correct.
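The arithmetic behind that figure is simple enough to reproduce directly; the sketch below (in Python, not part of the original page) just re-derives the quoted 4969.5-year period and the corresponding 3:2-resonance swarm period from the two dates given above.

```python
event_old_years_ago = 11703        # years before 2000 AD (the ice-core date quoted above)
event_recent_ad = 236              # AD

interval = event_old_years_ago - (2000 - event_recent_ad)   # 11703 - 1764 = 9939 years
vulcan_period = interval / 2.0                               # two claimed orbital periods
swarm_period = vulcan_period * 2.0 / 3.0                     # the claimed 3:2 resonance

print(f"interval between events = {interval} years")
print(f"implied Vulcan period   = {vulcan_period} years")    # 4969.5, as stated in the text
print(f"implied swarm period    = {swarm_period:.1f} years") # ~3313
```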
The geo-climatological data tends to support the theoretical Vulcan orbit. The mean comet swarm orbit period from 38 measurements of clusters found in the A & A', B and B' swarms is 3332.6 +/- 118.85 years. Comet swarm modeling indicates that 90 of the 118.85 years are due to gravitational perturbation of the comets' orbits by Vulcan. The Student T test's P-value is greater than 0.2 for the theoretical 3313-year period, and 79% of these measurements are within the modeled boundaries based on a 0.05% solar-mass Vulcan. The P-value is marginal (0.052) at the swarm period's theoretical lower limit, 3295 years. Vulcan's period is 1.5 times greater due to the 3:2 resonance. Other restrictions limit Vulcan's period to 4999.4 years, more likely less than 4992 years. Thus, the geo-climatological data seems to suggest that Vulcan's period is between 4969 and 4992 years. Interestingly, Professor George Forbes arrived at a value of 5000 years for the period of an undetected planet in 1880.
Other data (possibly of extraterrestrial alien origin) also reveal the probable existence of Vulcan within our own solar system. These include:
- The Akkadian Seal shown below (circa 4500 years ago) and its analysis.
- The Famous Hill Star Map (circa Sept. 1961)
- Another Alien Star Map (circa Oct. 1974) presented in Figure 5A.
- The offset of the Sun from the centroid of Crop Circle T367 is oriented in the direction of the IRAS object suspected to be Vulcan.
A mathematical fit of the Sun and planets (Uranus, Neptune, Jupiter and Saturn) found on the seal shows that it is illustrating planets of our solar system and includes a Vulcan of 141 +/- 35 Earth masses.
Even the name given to our Sun's dark star companion (Vulcan) is specifically mentioned in the Bible Code, whose source is believed to be of alien origin. See the following list. Note that threatening impactors (incoming asteroids e.g. dead comets) are also mentioned.
- Dark Star dark star - star/planet - star - Sun - companion - object - invisible/concealed - concealed - emerging - amalgamation - granular/nuclear - turmoil
- Tsunami Vulcan - planet/star - India - Indian - Athens - Greece - tsunami - stony - flaming - gigantic - flame - obliterate - flabbergasted - leviathan - oblivion - overwhelming - impact
- Incoming Asteroid #1 incoming asteroid - Vulcan - blaze - brightness - whole earth - USA - impact - billion - missile - nemesis/vengeance - disaster/holocaust
Likewise, similar references are found in Vedic prophecies. See the following list.
- Our Sun has its "dark companion". Although smaller than the Sun, this "dark companion" is more powerful and completely black. The dark companion revolves around the Sun in an elliptic orbit.
- When the dark companion is between Sun and the center of galaxy (Vishnunabhi), we have Kali yuga on Earth. When, it moves away from the axis Sun-center of galaxy, the "Black Star" does not have a huge inhibiting influence, so the influence of knowledge is much greater.
- Science progress and thirst for knowledge can be astrologically explained by diminished "influence" of dark companion on the Sun. Distancing itself from the Vishnunabhi, it frees the path of light at the galactic center, and growth of knowledge occurs.
- Precession cycle gives birth to ages. Cycle of dark companion gives birth to shorter yugas. Combination of two gives birth to longer yugas. The main role in all of it is the one of center of galaxy situated at the beginning of sidereal constellation of Sagittarius.
A phenomenon similar to human remote viewing is believed to be how the Bible Code and Vedic data were generated. Remote viewing has produced other interesting results:
- "Planet X, the tenth planet in our solar system, will be verified in the year 2015. It will have an elliptical orbit with a mean distance from the Sun of 51.50 AU, and a density approximately equal to that of Mercury.
- Construction of the Great Pyramid: An aquatic method consistent with the way the Nommos, an amphibious species, may have suggested it be constructed.
- The origin of human life: Involvement of the Sun (pg. 96).
- Future Supersonic Aircraft Only Slightly Disturbs The Air: This technique is revealed by the Aerodynamic Augmentation Device - patents #5,791,599 and #5,797,563.
4. DOES ANYONE IMPORTANT THINK THIS IS A PROBLEM?
The existence of Vulcan implies that comet collisions with planets in the inner solar system are more frequent than commonly believed. Perhaps the Pleistocene (Ice Age) was initiated a few million years ago by just such an impact. A large comet from the Kuiper belt was drawn into an elliptical orbit by gravitational interaction with Vulcan. It entered our solar system and "crumbled" rounding the Sun. Fragments from this event may have formed the A comet swarm. The surface of Venus may have been reformed not hundreds of millions of years ago, but rather just a few million years ago by an impact from one of the larger (100 mile diameter class) fragments. An impact from a small comet of this A-swarm likely caused Noah's Great Flood. Earth is likewise at risk of a massive impact event. There may still be many more very dangerous impactors passing regularly through our inner solar system. This well known NASA illustration depicts such an event with a 320 mile diameter impactor striking Earth.
The Bible is full of warnings of comet or asteroid impacts, especially from Christ as found in the New Testament. Clear text Bible predictions speculating on impacts have been interpreted as follows:
"Starting at the end and work forwards, the last Cataclysmic event is when the Earth is renovated by fire. It is discussed in 2 Peter 3:5-13 and Revelation 20:7-15. The Earth is turned into a sea of fire and lava. There are no longer any oceans. This event could be cause by a large impactor 100- 200 miles in diameter.
Before that happens there will be a millennium, a thousand years, of peace. Revelation 20:1-6 - God made a promise to the Jewish people, during this time he will begin to fulfill it. Before that there is a Great Tribulation. This event is discussed in Matthew 24:21, Revelation 8:5-13, Revelation 6:12-17, Revelation 16:17-21 and Revelation 18:8-10,21 to name a few. The damage falls in line with the damage that would be created from an impactor in the 2-3 mile diameter range.
Across many cultures, dragons have been symbolic of comets. As a result, if the damage during the Great Tribulation, was done by an impactor, it is somewhat natural to believe that the impactor would be a comet."
The Biblical predictions are surprisingly similar to ancient Vedic prophecies. Two "ends of the Earth" are envisioned, the first survivable and the second impact vaporizing the Earth's waters and turning Earth into a Venus like planet. The First Event is described thusly:
- The destroyer god will breathe enormous clouds, which will make a terrible noise.
- A mass of clouds charged with energy, destroyer of all, will appear in the sky like a herd of elephants.
- When the moon is in the constellation of Pushya (Aquarius), invisible clouds called Pushkara (clouds of death) and Avarta (clouds without water) will cover the Earth.
- Immense clouds will darken the sky. Some of these clouds will be black, others white like jasmine, others bronzed, others gray like donkeys, others red, others blue like lapis or sapphire, others speckled, orangish, indigo. They will resemble towns or mountains. They will cover all the Earth. These immense clouds, making a terrible noise, will darken the sky and will shower the Earth in a rain of dust which will extinguish the terrible fire.
- Then, by means of an interminable downpour, they will flood the whole Earth with water. This torrential rain will swamp the Earth for twelve years, and humanity will be destroyed. The whole world will be in darkness. The flood will last seven years and the Earth will seem like an immense ocean.
- Seven humanities must again succeed each other on Earth, and when the Golden Age reappears, seven sages will emerge to again teach the divine law to the few survivors of the four castes.
- Those few humans who survive the holocaust will be the progenitors of the future humanity.
A Golden Age (similar to Revelation's thousand years) is cited to be available for the few survivors.
The Second Event apparently reforms the surface of the Earth. The impact appears to occur after the comet crumbles from gravitational interactions after rounding the Sun. Nostradamus's predictions end in 3797 AD, and this may signify the end of mankind's tenure on Earth. Should Earth's surface be totally destroyed, mankind's Oversoul would have to find a new planet and new biological life form into which to reincarnate. Then there would literally be a "New Heaven and New Earth" as 2 Peter 3:13 and Revelation 21 record. The Vedic prophecies predict that:
- This destruction will start with an underwater explosion that will take place in the southern ocean.
- It will be preceded by a hundred-year drought during which the people who are not robust will perish. The seas, the rivers, the mountain streams, and the underground streams will be drained.
- Twelve suns will cause the seas to evaporate. Fed by this water, seven suns will form which will reduce the three worlds to ashes; the Earth will become hard like a turtle's shell.
Britain's Astronomer Royal Sir Martin Rees thinks "the odds are no better than 50-50 that our present civilization on Earth will survive until the end of this century." Among other things, he cites:
- Supervolcanoes (like the awakening Yellowstone caldera)
- Near-Earth objects (like passing asteroids and comets)
The recently deceased British astronomer Sir Fred Hoyle has likewise warned us of such disasters. Specifically, he has stated that:
"Together with disease, the next ice age ranks as the biggest danger to which we as individuals are exposed. The next ice age is not a specific problem of the distant future. The causative agent, the strike of a giant meteorite, could happen at any time.
The risk of the next ice age is not just the biggest of the risks that we run. It is a risk that would hopelessly compromise the future. Besides wiping out a considerable fraction of those now alive, it would leave a wan, grey future from which the survivors and their descendants could do nothing to escape. It would be a condition that might last 50,000 years or more, a future in which the prospects for mankind would be much less favourable than they are today. This is why our modern generation must take action to avoid catastrophe, an ultimate catastrophe besides which the problems that concern people, media and governments from day to day are quite trivial."
Mother Shipton, Nostradamus, the Biblical Zechariah and a host of minor prophets have likewise forecast impending comet impacts.
5. WHEN COULD WE BE HIT NEXT?
The real question is: when should we anticipate the next major comet swarm to pass by and threaten Earth? The following figure, taken from Tollmann's work, shows what it was like 10,000 years ago. Then, the B-swarm strike caused a worldwide catastrophe. Seven cometary impacts hit the Earth around 7545 BC. The lack of "chains of craters" seems to eliminate the fragmentation of a single comet nucleus. This occurred while Vulcan was near aphelion, as it is now. The Sun (and inner planets) are at a maximum distance from the barycenter of the Sun/Vulcan system.
The First Event occurs within the next 160 years or so and is associated with comet clusters from the B Comet Swarm. The climatic change data provided by the Center for Ice and Climate, Niels Bohr Institute at the University of Copenhagen, suggest that there were at least five major weather changes about ten thousand years ago. While some of these events may be purely volcanic, this period seems to be generally related to the B Swarm Cluster 1 passage; the B Swarm Cluster 2 passage seems to occur about 350 years later and is spread out over 50 years or so. These B Swarm Cluster 1 events occurred at:
- 7946 BC (+/-21, one sigma)
- 7907 BC (+/-20, one sigma)
- 7878 BC (+/-20, one sigma)
- 7812 BC (+/-20, one sigma)
- 7797 BC (+/-20, one sigma)
The separation of these events is good to one or two years. Since the 3:2 orbital resonance with the comet swarms must be precise over each pair of Vulcan orbital periods, and since its period is known to be 4969.0 +/- 5.75 years (one sigma), the passage of these B swarm clusters can be computed by simply adding 9938 years to the values from about ten thousand years ago, with an RSS error combining the +/- 11.5-year uncertainty on that sum and the one-sigma values listed above (the arithmetic is sketched in the code following the list below). It is not known whether any of the above values relate to measurements of purely volcanic events, for which the addition of 9938 years would not apply. Assuming a 2017 passage:
- 7946 BC + 9938 = 1993 AD +/- 24 years ~ 2017
- 7907 BC + 9938 = 2032 AD +/- 23 years ~ 2055 +/- 4 years
- 7878 BC + 9938 = 2061 AD +/- 23 years ~ 2084 +/- 4 years
- 7812 BC + 9938 = 2127 AD +/- 23 years ~ 2150 +/- 4 years
- 7797 BC + 9938 = 2142 AD +/- 23 years ~ 2165 +/- 4 years
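The left-hand projections in the list above are straightforward to reproduce; the sketch below (Python, not part of the original page) simply adds the quoted 9938 years to each ice-core date and combines the quoted one-sigma errors with the +/- 11.5-year uncertainty in quadrature, giving the same projected dates and error bars.

```python
import math

ice_core_dates_bc = [7946, 7907, 7878, 7812, 7797]   # BC, with quoted one-sigma errors below
one_sigma_years = [21, 20, 20, 20, 20]
period_sum = 9938                                     # two of the claimed Vulcan orbital periods
period_sigma = 11.5                                   # quoted uncertainty on that sum, years

for bc, sigma in zip(ice_core_dates_bc, one_sigma_years):
    ad = period_sum - bc + 1                          # +1 because there is no year zero
    total_sigma = math.hypot(sigma, period_sigma)     # root-sum-square of the two errors
    print(f"{bc} BC + {period_sum} yr -> {ad} AD  (+/- {total_sigma:.0f} yr)")
```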
If these comet clusters remain potent weather changers, the odds are that a significant event will occur before the close of 2017. The B-swarm appears to have been identified and is expected to pass between the end of 2015 and the beginning of 2018. It will threaten Earth ~7 Sep. 2016 or 2017 +/- 6 days. Thus, the bias on the error values is 24 years. There still remains a +/- 3 year uncertainty in the intra-ice-core data and a 2.3-year error from the size of the comet swarm, leaving a net error of +/- 4 years.
The Second Event apparently reforms the surface of the Earth. Multiple impactors, likely from the A:Cl-1 cluster that last passed through the inner solar system 1765 years ago, appear to strike after crumbling while rounding the Sun. The last observed period of this cluster was 3430 years, implying a passage in 3823 AD. This is within 26 years of when Nostradamus's predictions end in 3797 AD.
6. ARE THERE ANY CREDIBLE PRECISE PREDICTIONS?
As astonishing as it may seem, it appears that extraterrestrial aliens have warned us of an impending comet impact threat beginning as early as 2007. This would seem incredible were it not that Noah's Ark has been found and appears to be of alien construction. Space-mobile alien predictions could be credible, as they would have knowledge of celestial mechanics. This, combined with similar predictions by "credible psychics", adds a new and frightening dimension to possible "comet collision with Earth" scenarios. The Bible Code further substantiates this, and other subsequent impacts. The T367 Crop Circle image shown above is believed to suggest the range and date when the next threatening comet swarm first becomes visible. Many comets often just become visible at a distance comparable to the asteroid belt. Thus, these comets are anticipated to start threatening Earth either 95 or 80 + 95 = 175 days after 25 February 2007. The following dates of years on and after 2007 are indicated: 07/05, 08/11, 09/04(14) and 10/07 - all +/- 6 days. Reported 2007 activity included impacts on 6 July and 4 - 10 August and an unusual impact in Peru on 17 September. Intense fireball activity was observed on 11 - 13 August (from T367's #3 broken comet) and on 13 and 16 September from the outskirts of the constellation Camelopardalis, where a known Earth-threatening comet (5 October Camelopardalis) with a nominally correct period is known to radiate from. The 8 October 2009 Indonesian event may well be related to the 5 October Camelopardalis Earth-threatening comet debris.
If the recent 15 July 2008 Avebury Manor crop circle is actually a valid one of alien construction, it would seem to indicate that an ALIEN PRESENCE thinks that the meteors/comets threatening Earth would do so around September 4 +/- 6 days and that this threat may be diminishing after December 2012. Thus, 2010, 2011 and especially 2012 remain as potential years during this current time period (1991 +/- 24 years) when an impact event may occur. Additional crop circle information was added to this formation on 22 July 2008, suggesting an attempt to add disinformation to the 15 July formation. These comets may have been orbiting through the inner solar system for a million years, and they may no longer glow as they may have lost their volatile gases long ago.
Earth's bleak future may be the reason alien contact is being suppressed. Only a few people were taken aboard Noah's Ark. Noah (AKA Gilgamesh) was an important "leader" of his day. Both current and ancient alien comments describe humans as a deceptive species.
The Bible Code likewise is believed to be of alien origin. It implies when the B-swarm's clusters may pass and where some of its comets may impact Earth. The Bible Code was investigated for impacts within the 2119 +/- 170 year window. Most of the predictions found are also double sourced to some degree by human "remote viewing". The next impact is anticipated to occur in Southern India in 2007. See the following table. A comet crumbled when passing the Sun may impact the Southern Hemisphere ~2012. A Mediterranean impact (prophesied by Shipton, et. al.) apparently occurs in the 2015-17 time frame. The Yellowstone Bible Code seems to suggest that Earth will be struck around 2044-2045, causing a (60 miles wide) crater, possibly in Canada, with a related impact in Ohio. A 3 mile (4.8 km) diameter rocky object striking Earth at 42 kps would generate a magnitude 10.6 Earthquake (largest recorded Earthquake: 9.5) and an impact crater of diameter: 96.5 km, depth: 1.2 km, like the 100 km crater of Manicouagan Lake in Quebec, Canada. The impact apparently stimulates an eruption of the Yellowstone super volcano. This eruption is anticipated to slowly inject dust into the atmosphere, causing a nuclear winter. The 2044 impactor is anticipated to be metallic, possibly mitigating Ice Age conditions. But remember, these are only predicted events within the projected impact window.
NEAR MISSES AND POSSIBLE IMPACT EVENTS SUMMARY Near Miss/Impact
Quakeb Tsunami Volcano Pole Shift Note Comet 1, 2006 - 10/11; T367
2007 or later; 07/05, 08/11, 09/04(14), 10/07 +/- 6 days
S. India Crater
(hill of Atlantic Ocean)
- - T367 2008/9?
E. USA triple ocean impact.
Ast. A, 2011?, 2012s
Ast. B 2013?
Ast. C, 2015/2016* Vulcan
list 4 #4 & #11a
Ast. D 2030/30* Axis Tilt, Pole Shift 2031** - - 2030 Stony Ast. E 2044/44* Wormwood Vulcan
Whole Earth, Chili, Japan, Burma, Hanoi
- - - Metal Comet 2, Axis Tilt 2045, 2044 Related Astronomy, Atomic War, Ohio, Canada Q2 2045 - 2045
- Rome, Israel
Ast. F 2071-71* Astronomy
Q2 Q3 2071-71** - - - Vietnam, N. Africa** Ast. G, 2130
- - - - Impact*
Note: s = Short Hebrew Calendar Bible Code Date; Asteroid, Blunderbuss and Vulcan Bible Code Terms.
a. Astro. Term: Ast. = Asteroid Or Non Glowing Comet; Blunderbuss = Multiple Fragment Impacts
b. The shock from impact takes about a year to re-converge at about the same place.
*Remote Viewing Near Earth Asteroid Pass, Impact 2120 - 2130. **"Remote Viewing" Earthquake;
***Joseph Noah' Book
Impact regions, found in Nostradamus' quatrains, have long been known to fall along straight lines.
- VI/6 Eretria - Greece - 38.2N 23.6E
- VI/6 Boeotia - Greece - 38.5N 23.5E
- VI/6 Siena - Tuscany Italy - 43.0N 11.3E
- VI/6 Susa - Italy/France Border - 45.0N 7.0E
- I/46 Auch, Lectoure, Mirande - 43.5N 0.5E
- V/98 Bearn, Bigorre - 43.5N 0.5E
Analysis of Nostradamus', Mother Shipton's, and the Biblical Zechariah's prophecies (predicted to occur at the start of this millennium, possibly 2015-17) lead to the following comet impact trajectory:
7. COULD THEY BE CORRECT?
Mankind is a Warrior EILF (Ensouled Intelligent Life Form), one inclined to take action rather than think about things. We love faith, even though it is seldom based on reason. So it seems that warnings of an impending comet (or asteroid) catastrophe are geared to awaken the "faithful" as well as the intellectual or scientific. For these two groups of human beings, we have two kinds of warnings.
FOR THE FAITHFUL
Christian scriptures (including the aforementioned warnings), Vedic predictions and even Zoroastrian scriptures indicate that all life on Earth is likely to be annihilated by the impact of one or more massive meteorites. Analysis found elsewhere indicates that the surface of Venus has already been reformed by such an impact, one probably occurring only a few million years ago. Comets from swarm A, Cluster 1 are prophesied to impact Earth with such violence that its surface is reformed. This is expected around 3797 AD (based on Nostradamus' prophecies) to 3823 AD (based on comet swarm period data). But before that happens, a series of impacts, beginning around 2006 and lasting for 120 years, may well throw Earth back into another Ice Age. Specific dates for these events are extracted from a religious source, the Bible Code. Even the name 'Vulcan' is found associated with these impact events. These religious warnings are intended to provide the 'faithful' a reason to believe that mankind is in danger of impact events. Little logic is required, only faith.
FOR THE SCIENTIFIC
Partially because we are a Warrior species (and tend to 'follow our deceitful leaders') and partially due to human stubbornness towards accepting new ideas and concepts, a 'slow awakening' to the astronomical dangers we face can be anticipated. But a few well known scientists have had the courage to warn us, among them the British astronomers Sir Fred Hoyle and the Astronomer Royal Sir Martin Rees.
A Dark Star (scientifically known as a Brown Dwarf), if in an elliptical orbit such as the one found for Vulcan, would automatically draw in Kuiper Belt objects, as their rotational motion would be canceled by the Dark Star's gravity. These would fall towards the Sun and crumble while rounding it, thereby generating comet swarms. This is just simple astronomy. The incoming comet swarms could not be seen until they are practically upon us. Even a small comet colliding with Earth could be catastrophic, eventually killing billions by starvation through weather changes.
Could an impact catastrophe occur this century? This site estimates that the threat is 15% believable. Further, the threatening B swarm has generated detectable weather changes in the four times it has passed by in the last thirteen thousand years. Rees' 50-50 odds for our civilization's survivability seem reasonable. Would our leaders cover up detection of the Brown Dwarf Star, or verification of past events (involving Noah's Ark)? They have before.
Vulcan's orbital parameters have changed only a few percent over the years that this work has been on the Web, but its mass has dropped precipitately (from about 15,600 to about 150 Earth Masses). To our amazement, it seems like the alien description of our solar system found on the Akkadian Seal is a correct one. It appears that:
- Alien visitations to Earth with the express purpose of warning us of comet-impact-driven catastrophes have occurred in both the past and present and have been covered up by our deceitful leaders (like Gilgamesh/AKA Noah and present world governments).
- An unusual astrology (the Chinese Tzu Wei, which, along with physical data, has been used to derive Vulcan's orbit) is a valid science and may be used by nation states for military/political purposes. The statistical analysis of the past geo-climatological data supports Vulcan's orbit.
- The initial list of IRAS objects (of which IRAS 1732+239 is one) was not examined with before and after photographs to determine if angular motion or even parallax was present. Since a Dark Star in an elliptical orbit would obviously represent a monumental threat to mankind, why was there no such investigation?
- The Bible Code predictions of seven impacts over the next 130 years correspond to Tollmann's seven impacts found in a similar cosmic alignment of Vulcan and the Sun. Two of the Biblical dates (2030 and 2130) correspond to mathematically projected dates from past geo-climatological events.
While we could eventually deter these impacts with an aggressive anti-comet program, we are unlikely to do so. Our Warrior species has incarnated with a negative disposition towards spiritual matters and we are likely to again revert to primitive "religious" infighting and destroy the then "ancient knowledge" (acquired today) warning us of our perilous situation. Our Earth is a dangerous little planet! | http://barry.warmkessel.com/SynopsisComet.html | 13 |
88 | In mathematics, mean has several different definitions depending on the context.
In probability and statistics, mean and expected value are used synonymously to refer to one measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In the case of a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability P(x), and then adding all these products together, giving E(X) = Σ x·P(x). An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean; see the Cauchy distribution for an example. Moreover, for some distributions the mean is infinite: for example, when the probability of the value 2^n is 1/2^n for n = 1, 2, 3, ....
For a data set, the terms arithmetic mean, mathematical expectation, and sometimes average are used synonymously to refer to a central value of a discrete set of numbers: specifically, the sum of the values divided by the number of values. The arithmetic mean of a set of numbers x1, x2, ..., xn is typically denoted by x̄, pronounced "x bar". If the data set were based on a series of observations obtained by sampling from a statistical population, the arithmetic mean is termed the sample mean (denoted x̄) to distinguish it from the population mean (denoted μ or μx).
For a finite population, the population mean of a property is equal to the arithmetic mean of the given property while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples. The law of large numbers dictates that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.
Types of mean
Pythagorean means
Arithmetic mean (AM)
The arithmetic mean is the "standard" average, often simply called the "mean".
For example, the arithmetic mean of five values: 4, 36, 45, 50, 75 is (4 + 36 + 45 + 50 + 75)/5 = 210/5 = 42.
The mean may often be confused with the median, mode or range. The mean is the arithmetic average of a set of values, or distribution; however, for skewed distributions, the mean is not necessarily the same as the middle value (median), or the most likely (mode). For example, mean income is skewed upwards by a small number of people with very large incomes, so that the majority have an income lower than the mean. By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income, and favors the larger number of people with lower incomes. The median or mode are often more intuitive measures of such data.
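As a concrete illustration of how these measures can disagree on skewed data, the short sketch below computes the mean, median and mode of a small, made-up income sample (Python's statistics module is used for convenience; the numbers are purely illustrative):

```python
from statistics import mean, median, mode

# Hypothetical, skewed "income" sample: most values are modest,
# one very large value drags the arithmetic mean upward.
incomes = [20_000, 25_000, 25_000, 30_000, 35_000, 40_000, 1_000_000]

print(mean(incomes))    # ~167857 -- pulled up by the outlier
print(median(incomes))  # 30000   -- the middle value
print(mode(incomes))    # 25000   -- the most common value
```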
Geometric mean (GM)
The geometric mean is an average that is useful for sets of positive numbers that are interpreted according to their product and not their sum (as is the case with the arithmetic mean), e.g., rates of growth: GM = (x1 × x2 × ... × xn)^(1/n).
For example, the geometric mean of five values: 4, 36, 45, 50, 75 is (4 × 36 × 45 × 50 × 75)^(1/5) = 24,300,000^(1/5) = 30.
Harmonic mean (HM)
The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals of the values: HM = n/(1/x1 + 1/x2 + ... + 1/xn). For example, the harmonic mean of the five values: 4, 36, 45, 50, 75 is 5/(1/4 + 1/36 + 1/45 + 1/50 + 1/75) = 5/(1/3) = 15.
Relationship between AM, GM, and HM
AM, GM, and HM satisfy these inequalities: AM ≥ GM ≥ HM.
Equality holds only when all the elements of the given sample are equal.
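A minimal sketch checking all three Pythagorean means, and the inequality, on the five example values used above:

```python
import math

values = [4, 36, 45, 50, 75]
n = len(values)

am = sum(values) / n                 # arithmetic mean
gm = math.prod(values) ** (1 / n)    # geometric mean
hm = n / sum(1 / x for x in values)  # harmonic mean

print(am, gm, hm)      # 42.0, ~30.0, 15.0
assert am >= gm >= hm  # holds for any set of positive values
```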
Generalized means
Power mean
The generalized mean, also known as the power mean or Hölder mean, is an abstraction of the quadratic, arithmetic, geometric and harmonic means. It is defined for a set of n positive numbers xi by M(m) = ((x1^m + x2^m + ... + xn^m)/n)^(1/m).
By choosing different values for the parameter m, the following types of means are obtained:
- m → ∞: maximum of the xi
- m = 2: quadratic mean (root mean square)
- m = 1: arithmetic mean
- m → 0: geometric mean (as a limit)
- m = −1: harmonic mean
- m → −∞: minimum of the xi
This can be generalized further as the generalized f-mean, Mf(x1, ..., xn) = f⁻¹((f(x1) + f(x2) + ... + f(xn))/n),
and again a suitable choice of an invertible ƒ will give the familiar cases: ƒ(x) = x gives the arithmetic mean, ƒ(x) = 1/x the harmonic mean, ƒ(x) = x^m the power mean, and ƒ(x) = ln x the geometric mean.
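A short sketch implementing the power mean and the generalized f-mean directly from these definitions (m = 0 is a limiting case and is not handled here):

```python
import math

def power_mean(xs, m):
    """Power (Hölder) mean; m=1 arithmetic, m=2 quadratic, m=-1 harmonic."""
    n = len(xs)
    return (sum(x ** m for x in xs) / n) ** (1 / m)

def f_mean(xs, f, f_inv):
    """Generalized f-mean for an invertible function f with inverse f_inv."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

xs = [4, 36, 45, 50, 75]
print(power_mean(xs, 1))               # 42.0  (arithmetic)
print(power_mean(xs, -1))              # 15.0  (harmonic)
print(f_mean(xs, math.log, math.exp))  # ~30.0 (geometric, f = ln)
```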
Weighted arithmetic mean
The weighted arithmetic mean (or weighted average) is used if one wants to combine average values from samples of the same population with different sample sizes: x̄ = (w1·x1 + w2·x2 + ... + wn·xn)/(w1 + w2 + ... + wn).
The weights represent the sizes of the different samples. In other applications they represent a measure for the reliability of the influence upon the mean by the respective values.
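As a minimal sketch of this formula, the example below combines two hypothetical class averages with different class sizes (all numbers invented for illustration):

```python
def weighted_mean(values, weights):
    """Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i)."""
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Two class averages (70 and 90) from classes of 20 and 30 students.
print(weighted_mean([70, 90], [20, 30]))  # 82.0, not the naive 80.0
```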
Truncated mean
Sometimes a set of numbers might contain outliers, i.e., data values which are much lower or much higher than the others. Often, outliers are erroneous data caused by artifacts. In this case, one can use a truncated mean. It involves discarding given parts of the data at the top or the bottom end, typically an equal amount at each end, and then taking the arithmetic mean of the remaining data. The number of values removed is indicated as a percentage of total number of values.
Interquartile mean
The interquartile mean is a specific example of a truncated mean. It is simply the arithmetic mean after removing the lowest and the highest quarter of values.
Assuming the values have been ordered, it can be written as x̄IQM = (2/n)·(x(n/4+1) + x(n/4+2) + ... + x(3n/4)), so it is simply a specific example of a weighted mean for a specific set of weights (zero on the lowest and highest quarters, and equal weights elsewhere).
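A sketch of a truncated mean and the interquartile mean on a small made-up data set with one outlier; the number of values dropped from each end is simply the trimming fraction times n, rounded down:

```python
def truncated_mean(xs, fraction):
    """Arithmetic mean after discarding `fraction` of the data at each end."""
    xs = sorted(xs)
    k = int(len(xs) * fraction)          # values dropped from each end
    trimmed = xs[k:len(xs) - k] if k else xs
    return sum(trimmed) / len(trimmed)

def interquartile_mean(xs):
    """Truncated mean discarding the lowest and highest quarters."""
    return truncated_mean(xs, 0.25)

data = [1, 3, 4, 5, 6, 7, 8, 100]   # n = 8; 100 is an outlier
print(truncated_mean(data, 0.25))   # mean of [4, 5, 6, 7] = 5.5
print(interquartile_mean(data))     # same trimming here: 5.5
```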
Mean of a function
In calculus, and especially multivariable calculus, the mean of a function is loosely defined as the average value of the function over its domain. In one variable, the mean of a function f(x) over the interval (a,b) is defined by f̄ = (1/(b − a)) ∫_a^b f(x) dx.
Recall that a defining property of the average value ȳ of finitely many numbers y1, y2, ..., yn is that n·ȳ = y1 + y2 + ... + yn. In other words, ȳ is the constant value which when added to itself n times equals the result of adding the n terms together. By analogy, a defining property of the average value f̄ of a function over the interval [a, b] is that (b − a)·f̄ = ∫_a^b f(x) dx.
In other words, f̄ is the constant value which when integrated over [a, b] equals the result of integrating f over [a, b]. But by the second fundamental theorem of calculus, the integral of a constant over [a, b] is just f̄·(b − a).
See also the first mean value theorem for integration, which guarantees that if f is continuous then there exists a point c in (a, b) such that ∫_a^b f(x) dx = f(c)·(b − a).
The value f(c) is called the mean value of f on [a, b]. So we write f̄ = f(c) and rearrange the preceding equation to get the above definition.
This generalizes the arithmetic mean. On the other hand, it is also possible to generalize the geometric mean to functions by defining the geometric mean of f to be exp((1/(b − a)) ∫_a^b ln f(x) dx).
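As a numerical illustration, the sketch below approximates both kinds of function mean with a simple midpoint Riemann sum; f(x) = x² on [0, 1] is an arbitrary test function whose exact arithmetic mean is 1/3 and whose geometric mean is e⁻².

```python
import math

def function_mean(f, a, b, steps=100_000):
    """Arithmetic mean of f on [a, b] via a midpoint Riemann sum."""
    h = (b - a) / steps
    total = sum(f(a + (i + 0.5) * h) for i in range(steps))
    return total * h / (b - a)

def geometric_function_mean(f, a, b, steps=100_000):
    """Geometric mean of a positive f on [a, b]: exp of the mean of ln f."""
    return math.exp(function_mean(lambda x: math.log(f(x)), a, b, steps))

print(function_mean(lambda x: x**2, 0.0, 1.0))            # ~0.3333
print(geometric_function_mean(lambda x: x**2, 0.0, 1.0))  # ~0.1353 = e**-2
```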
More generally, in measure theory and probability theory, either sort of mean plays an important role. In this context, Jensen's inequality places sharp estimates on the relationship between these two different notions of the mean of a function.
There is also a harmonic average of functions and a quadratic average (or root mean square) of functions.
Mean of a probability distribution
See expected value.
Mean of angles
Sometimes the usual calculations of means fail on cyclical quantities such as angles, times of day, and other situations where modular arithmetic is used. For those quantities it might be appropriate to use a mean of circular quantities to take account of the modular values, or to adjust the values before calculating the mean.
Fréchet mean
The Fréchet mean gives a manner for determining the "center" of a mass distribution on a surface or, more generally, Riemannian manifold. Unlike many other means, the Fréchet mean is defined on a space whose elements cannot necessarily be added together or multiplied by scalars. It is sometimes also known as the Karcher mean (named after Hermann Karcher).
Other means
- Arithmetic-geometric mean
- Arithmetic-harmonic mean
- Cesàro mean
- Chisini mean
- Contraharmonic mean
- Distance-weighted estimator
- Elementary symmetric mean
- Geometric-harmonic mean
- Heinz mean
- Heronian mean
- Identric mean
- Lehmer mean
- Logarithmic mean
- Moving average
- Root mean square
- Rényi's entropy (a generalized f-mean)
- Stolarsky mean
- Weighted geometric mean
- Weighted harmonic mean
All means share some properties and additional properties are shared by the most common means. Some of these properties are collected here.
Weighted and unweighted means
Weighted mean
A weighted mean M is a function which maps tuples of positive numbers to a positive number
such that the following properties hold:
- "Fixed point": M(1,1,...,1) = 1
- Homogeneity: M(λ x1, ..., λ xn) = λ M(x1, ..., xn) for all λ and xi. In vector notation: M(λ x) = λ Mx for all n-vectors x.
- Monotonicity: If xi ≤ yi for each i, then Mx ≤ My
- Boundedness: min x ≤ Mx ≤ max x
- There are means which are not differentiable. For instance, the maximum number of a tuple is considered a mean (as an extreme case of the power mean, or as a special case of a median), but is not differentiable.
- All means listed above, with the exception of most of the Generalized f-means, satisfy the presented properties.
- If f is bijective, then the generalized f-mean satisfies the fixed point property.
- If f is strictly monotonic, then the generalized f-mean also satisfies the monotonicity property.
- In general, a generalized f-mean will lack homogeneity.
The above properties imply techniques to construct more complex means:
If C, M1, ..., Mm are weighted means and p is a positive real number, then A and B defined by
are also weighted means.
Unweighted mean
Intuitively speaking, an unweighted mean is a weighted mean with equal weights. Since our definition of weighted mean above does not expose particular weights, equal weights must be asserted in a different way. A different view on homogeneous weighting is that the inputs can be swapped without altering the result.
Thus we define M to be an unweighted mean if it is a weighted mean and for each permutation π of inputs, the result is the same.
- Symmetry: Mx = M(πx) for all n-tuples x and permutations π on n-tuples.
Analogously to the weighted means, if C is a weighted mean and M1, ..., Mm are unweighted means and p is a positive real number, then A and B defined by
are also unweighted means.
Converting unweighted mean to weighted mean
An unweighted mean can be turned into a weighted mean by repeating elements. This connection can also be used to state that a mean is the weighted version of an unweighted mean. Say you have the unweighted mean M and weight the numbers by natural numbers w1, ..., wn. (If the weights are rational, then multiply them by the least common denominator to make them natural numbers.) Then the corresponding weighted mean A is obtained by repeating each number xi a total of wi times and applying M to the resulting tuple.
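A minimal sketch of this repetition construction, assuming natural-number weights and an ordinary arithmetic mean as the unweighted mean M:

```python
from statistics import mean  # an ordinary (unweighted) arithmetic mean

def weighted_via_repetition(values, weights):
    """Weighted mean built from an unweighted mean by repeating elements.

    Assumes the weights are natural numbers; rational weights can first be
    scaled by a common denominator, as noted in the text.
    """
    repeated = [x for x, w in zip(values, weights) for _ in range(w)]
    return mean(repeated)

print(weighted_via_repetition([70, 90], [2, 3]))  # 82.0
# Matches the direct weighted mean (70*2 + 90*3) / (2 + 3).
```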
Means of tuples of different sizes
If a mean M is defined for tuples of several sizes, then one also expects that the mean of a tuple is bounded by the means of partitions. More precisely
- Given an arbitrary tuple x, which is partitioned into y1, ..., yk, then M(x) lies in the convex hull of M(y1), ..., M(yk).
- (See Convex hull.)
Distribution of the population mean
Using the sample mean
The arithmetic mean of a population, or population mean, is denoted μ. The sample mean (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. For a random sample of n observations from a normally distributed population, the sample mean is itself normally distributed, with mean equal to the population mean μ and variance equal to the population variance divided by the sample size, σ²/n.
Often, since the population variance is an unknown parameter, it is estimated by the mean sum of squares; when this estimated value is used, the distribution of the sample mean is no longer a normal distribution but rather a Student's t distribution with n − 1 degrees of freedom.
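A small simulation sketch (with made-up values for μ, σ and n) illustrating the statement above: the simulated sample means cluster around the population mean with a spread close to σ/√n.

```python
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 10.0, 2.0, 25, 10_000

# Draw many samples of size n and record each sample mean.
sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(trials)
]

print(statistics.fmean(sample_means))  # close to mu = 10
print(statistics.stdev(sample_means))  # close to sigma / sqrt(n) = 0.4
```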
Using a very small sample
Small sample sizes occur in practice and present unusually difficult problems for parameter estimation.
It is intuitive but false that, from a single (n = 1) observation x, no information about the variability in the population can be gained, and that consequently finite-length confidence intervals for the population mean and/or variance are impossible even in principle. Where the shape of the population distribution is known, some estimates are possible:
For a normally distributed variate, the confidence intervals for the (arithmetic) population mean at the 90% level have been shown to be x ± 5.84 |x| where |.| is the absolute value. The 95% bound for a normally distributed variate is x ± 9.68 |x| and that for a 99% confidence interval is x ± 48.39 |x|. These confidence intervals apply because for every true but unobserved parametrization of the normal distribution, the probability that the indicated confidence interval, computed from the random sample of one, encompasses the fixed true mean is at least the indicated percentage, and for the worst-case true parametrization it is exactly the indicated percentage.
The estimate derived from this method shows behavior that is atypical of more conventional methods. A value of 0 for the population mean cannot be rejected with any level of confidence. If x = 0, the confidence interval collapses to a length of 0. Finally the confidence interval is not stable under a linear transform x → ax + b where a and b are constants.
Machol has shown that given a known density symmetrical about 0 and a single sample value ( x ), the 90% confidence interval of the population mean is
where ν is the population median.
For a sample size of two ( n = 2 ), the population mean is bounded by
where x1, x2 are the variate values, μ is the population mean and k is a constant that depends on the underlying distribution. For the normal distribution, k = cotangent( π α / 2 ), in which case for α = 0.05, k = 12.71. For the rectangular distribution, k = ( 1 / α ) - 1, in which case for α = 0.05, k = 19.
For a sample size of three ( n = 3 ), the confidence intervals for the population mean are
where m is the sample mean, s is the sample standard deviation and k is a constant that depends on the distribution. For the normal distribution, k is approximately 1 / √α - 3 √α / 4 + ... When α = 0.05, k = 4.30. For the rectangular distribution with α = 0.05, k = 5.74.
The pivot depth (j) is int( ( n + 1 ) / 2 ) / 2 or int( ( n + 1 ) / 2 + 1 ) / 2, depending on which value is an integer. The lower pivot is xL = xj and the upper pivot is xU = xn + 1 - j. The pivot half sum (P) is P = ( xL + xU ) / 2
and the pivot range (R) is R = xU - xL.
The confidence intervals for the population mean are then
where t is the value of the t test at 100( 1 - α / 2 )%.
The pivot statistic T = P / R has an approximately symmetrical distribution and its values for 4 ≤ n ≤ 20 for a number of values of 1 - α are given in Table 2 of Meloun et al.
See also
- Algorithms for calculating variance
- Central tendency
- Descriptive statistics
- Law of averages
- Mean value theorem
- Mode (statistics)
- Spherical mean
- Summary statistics
- Taylor's law
- Feller, William (1950). Introduction to Probability Theory and its Applications, Vol I. Wiley. p. 221. ISBN 0471257087.
- Elementary Statistics by Robert R. Johnson and Patricia J. Kuby, p. 279
- Underhill, L.G.; Bradfield d. (1998) Introstat, Juta and Company Ltd. ISBN 0-7021-3838-X p. 181
- Schaum's Outline of Theory and Problems of Probability by Seymour Lipschutz and Marc Lipson, p. 141
- Abbot JH, Rosenblatt J (1963) Two stage estimation with one observation on the first stage. Annals of the Institute of Statistical Mathematics 14: 229-235
- Blachman NM, Machol R (1987) Confidence intervals based on one or more observations. IEEE Transactions on Information Theory 33(3): 373-382
- Wall MM, Boen J, Tweedie R (2001) An effective confidence interval for the mean with samples of size one and two. The American Statistician. http://www.biostat.umn.edu/ftp/pub/2000/rr2000-007.pdf
- Vos P (2002) The Am Stat 56 (1) 80
- Wittkowski KN (2002) The Am Stat 56 (1) 80
- Wall ME (2002) The Am Stat 56 (1) 80-81
- Machol R (1964) IEEE Trans Info Theor
- Blackman NM, Machol RE (1987) IEEE Trans on inform theory 33:373
- Horn PS, Pesce AJ, Copeland BE (1998) A robust approach to reference interval estimation and evaluation. Clin Chem 44:622–631
- Horn J (1983) Some easy T-statistics. J Am Statist Assoc 78:930
- Meloun M, Hill M, Militký J, Kupka K (2001) Analysis of large and small samples of biochemical and clinical data. Clin Chem Lab Med 39(1):53–61
- Weisstein, Eric W., "Mean", MathWorld.
- Weisstein, Eric W., "Arithmetic Mean", MathWorld.
- Comparison between arithmetic and geometric mean of two numbers
- Some relationships involving means | http://en.wikipedia.org/wiki/Mean | 13 |
52 | From Wikipedia, the free encyclopedia
A molecular vibration occurs when atoms in a molecule are in periodic motion while the molecule as a whole has constant translational and rotational motion. The frequency of the periodic motion is known as a vibration frequency. A nonlinear molecule with n atoms has 3n−6 normal modes of vibration, whereas a linear molecule has 3n−5 normal modes of vibration because rotation about its molecular axis cannot be observed. A diatomic molecule thus has one normal mode of vibration. The normal modes of vibration of polyatomic molecules are independent of each other, each involving simultaneous vibrations of different parts of the molecule.
A molecular vibration is excited when the molecule absorbs a quantum of energy, E, corresponding to the vibration's frequency, ν, according to the relation E=hν, where h is Planck's constant. A fundamental vibration is excited when one such quantum of energy is absorbed by the molecule in its ground state. When two quanta are absorbed the first overtone is excited, and so on to higher overtones.
To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, as the potential energy of the molecule is more like a Morse potential.
The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly.
Vibrational excitation can occur in conjunction with electronic excitation (vibronic transition), giving vibrational fine structure to electronic transitions, particularly with molecules in the gas state.
Simultaneous excitation of a vibration and rotations gives rise to vibration-rotation spectra.
The coordinate of a normal vibration is a combination of changes in the positions of atoms in the molecule. When the vibration is excited the coordinate changes sinusoidally with a frequency ν, the frequency of the vibration.
Internal coordinates
Internal coordinates are of the following types, illustrated with reference to the planar molecule ethylene:
- Stretching: a change in the length of a bond, such as C-H or C-C
- Bending: a change in the angle between two bonds, such as the HCH angle in a methylene group
- Rocking: a change in angle between a group of atoms, such as a methylene group and the rest of the molecule.
- Wagging: a change in angle between the plane of a group of atoms, such as a methylene group and a plane through the rest of the molecule,
- Twisting: a change in the angle between the planes of two groups of atoms, such as a change in the angle between the two methylene groups.
- Out-of-plane: Not present in ethene, but an example is in BF3 when the boron atom moves in and out of the plane of the three fluorine atoms.
In a rocking, wagging or twisting coordinate the bond lengths within the groups involved do not change. The angles do. Rocking is distinguished from wagging by the fact that the atoms in the group stay in the same plane.
In ethene there are 12 internal coordinates: 4 C-H stretching, 1 C-C stretching, 2 H-C-H bending, 2 CH2 rocking, 2 CH2 wagging, 1 twisting. Note that the H-C-C angles cannot be used as internal coordinates as the angles at each carbon atom cannot all increase at the same time.
See infrared spectroscopy for some animated illustrations of internal coordinates.
Symmetry-adapted coordinates may be created by applying a projection operator to a set of internal coordinates. The projection operator is constructed with the aid of the character table of the molecular point group. For example, the four (un-normalised) C-H stretching coordinates of the molecule ethene are given by
- Qs1 = q1 + q2 + q3 + q4
- Qs2 = q1 + q2 - q3 - q4
- Qs3 = q1 - q2 + q3 - q4
- Qs4 = q1 - q2 - q3 + q4
where q1 - q4 are the internal coordinates for stretching of each of the four C-H bonds.
Illustrations of symmetry-adapted coordinates for most small molecules can be found in Nakamoto.
A normal coordinate, Q, may sometimes be constructed directly as a symmetry-adapted coordinate. This is possible when the normal coordinate belongs uniquely to a particular irreducible representation of the molecular point group. For example, the symmetry-adapted coordinates for bond-stretching of the linear carbon dioxide molecule, O=C=O are both normal coordinates:
- symmetric stretching: the sum of the two C-O stretching coordinates; the two C-O bond lengths change by the same amount and the carbon atom is stationary. Q = q1 + q2
- asymmetric stretching: the difference of the two C-O stretching coordinates; one C-O bond length increases while the other decreases. Q = q1 - q2
When two or more normal coordinates belong to the same irreducible representation of the molecular point group (colloquially, have the same symmetry) there is "mixing" and the coefficients of the combination cannot be determined a priori. For example, in the linear molecule hydrogen cyanide, HCN, the two stretching vibrations are
- principally C-H stretching with a little C-N stretching; Q1 = q1 + a q2 (a << 1)
- principally C-N stretching with a little C-H stretching; Q2 = b q1 + q2 (b << 1)
Perhaps surprisingly, molecular vibrations can be treated using Newtonian mechanics to calculate the correct vibration frequencies. The basic assumption is that each vibration can be treated as though it corresponds to a spring. In the harmonic approximation the spring obeys Hooke's law: the force required to extend the spring is proportional to the extension, F = −k·Q, where Q is the normal coordinate. The proportionality constant is known as a force constant, k. The anharmonic oscillator is considered elsewhere.
By Newton’s second law of motion this force is also equal to a "mass", m, times acceleration.
Since this is one and the same force the ordinary differential equation follows: m·(d²Q/dt²) = −k·Q.
The solution to this equation of simple harmonic motion is Q(t) = A·cos(2π·ν·t), with ν = (1/2π)·√(k/m).
A is the maximum amplitude of the vibration coordinate Q. It remains to define the "mass", m. In a homonuclear diatomic molecule such as N2, it is half the mass of one atom. In a heteronuclear diatomic molecule, AB, it is the reduced mass, μ, given by μ = (mA·mB)/(mA + mB), where mA and mB are the masses of atoms A and B.
The use of the reduced mass ensures that the centre of mass of the molecule is not affected by the vibration. In the harmonic approximation the potential energy of the molecule is a quadratic function of the normal coordinate. It follows that the force-constant is equal to the second derivative of the potential energy.
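As a rough illustration of these formulas, the sketch below evaluates ν = (1/2π)·√(k/μ) for carbon monoxide; the force constant of about 1902 N/m is an approximate literature value assumed here purely for illustration.

```python
import math

AMU = 1.66053906660e-27   # kg per atomic mass unit
C_LIGHT = 2.99792458e10   # speed of light in cm/s (for wavenumbers)

def harmonic_frequency(k, m_a, m_b):
    """Harmonic vibration frequency (Hz) of a diatomic molecule A-B.

    k   : force constant in N/m
    m_a : mass of atom A in amu
    m_b : mass of atom B in amu
    """
    mu = (m_a * m_b) / (m_a + m_b) * AMU   # reduced mass in kg
    return math.sqrt(k / mu) / (2.0 * math.pi)

# Carbon monoxide, 12C-16O, with an assumed force constant of ~1902 N/m.
nu = harmonic_frequency(1902.0, 12.0, 16.0)
print(nu)            # ~6.5e13 Hz
print(nu / C_LIGHT)  # ~2170 cm^-1, close to the observed C-O stretch
```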
When two or more normal vibrations have the same symmetry a full normal coordinate analysis must be performed (see GF method). The vibration frequencies, νi, are obtained from the eigenvalues, λi, of the matrix product GF. G is a matrix of numbers derived from the masses of the atoms and the geometry of the molecule. F is a matrix derived from force-constant values. Details concerning the determination of the eigenvalues can be found in the references below.
In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by E_n = h·ν·(n + 1/2),
where n is a quantum number that can take values of 0, 1, 2 ... The difference in energy when n changes by 1 is therefore h·ν, the product of the Planck constant and the vibration frequency derived using classical mechanics. See quantum harmonic oscillator for graphs of the first 5 wave functions. Knowing the wave functions, certain selection rules can be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one (Δn = ±1),
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states n=2 and n=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band.
In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate. The intensity of Raman bands depends on polarizability. See also transition dipole moment.
- Infrared spectroscopy
- Near infrared spectroscopy
- Raman spectroscopy
- Resonance Raman spectroscopy
- Coherent anti-Stokes Raman spectroscopy
- Eckart conditions
- FG method
- Fermi resonance
- Lennard-Jones potential
- Landau LD and Lifshitz EM (1976) Mechanics, 3rd. ed., Pergamon Press. ISBN 0-08-021022-8 (hardcover) and ISBN 0-08-029141-4 (softcover)
- F.A. Cotton, Chemical applications of group theory, Wiley, 1962, 1971
- K. Nakamoto, Infrared and Raman spectra of inorganic and coordination compounds, 5th. edition, Part A, Wiley, 1997
- E.B. Wilson, J.C. Decius and P.C. Cross, Molecular vibrations, McGraw-Hill, 1955. (Reprinted by Dover 1980)
- S. Califano, Vibrational states, Wiley, 1976
- P. Gans, Vibrating molecules, Chapman and Hall, 1971
- D. Steele, Theory of vibrational spectroscopy, W.B. Saunders, 1971
- P.M.A. Sherwood, Vibrational spectroscopy of solids, Cambridge University Press, 1972
- Free Molecular Vibration code developed by Zs. Szabó and R. Scipioni
- Molecular vibration and absorption
- small explanation of vibrational spectra and a table including force constants.
- Character tables for chemically important point groups | http://dictionary.sensagent.com/Molecular_vibration/en-en/ | 13 |
64 | The conservation of mass is a fundamental
concept of physics. Within some problem domain, the amount of mass
remains constant; mass is neither created nor destroyed. The mass of any object is simply the volume that the object occupies times the density of the object.
For a fluid (a liquid or a gas) the
density, volume, and shape of the object can all change within the
domain with time and mass can move through the domain.
The conservation of mass equation tells us that the
mass flow rate through a tube is a constant
and equal to the product of the density, velocity, and flow area.
When we consider the effects of the
compressibility of a gas at high speeds
we obtain the
compressible mass flow
equation shown on the slide.
mdot = (A * pt / sqrt[Tt]) * sqrt(g/R) * M * [1 + M^2 * (g-1)/2]^-[(g+1) / (2 * (g-1))]
The mass flow rate mdot depends on some properties of the gas
(total pressure pt, total temperature Tt,
gas constant R,
ratio of specific heats g),
the area A of the tube, and the Mach number M.
With some additional work, we can derive a very useful function which depends
only on the Mach number of the flow.
We begin with the compressible mass flow equation and,
using algebra, we divide both sides of the equation by the area, multiply both
sides by the square root of the total temperature, and divide both sides by the total pressure.
We then multiply both sides by the gravitational constant g0
times a reference pressure p0
divided by the square root of a reference temperature T0.
The mass flow rate is changed to a weight flow rate wdot and we assign special symbols: theta to the ratio of the temperature to the reference temperature, and delta to the ratio of the pressure to the reference pressure. The resulting equation is:

wdot * sqrt(theta) / (delta * A) = [g0 * p0 * sqrt(g / (R * T0))] * M * [1 + M^2 * (g-1)/2]^-[(g+1) / (2 * (g-1))]
The right side of the equation now contains the Mach number, the
gas constant, the ratio of specific heats, the gravitational constant and the reference
pressure and temperature.
We pick the reference conditions to be
sea level static
conditions and the
values for the gas properties are given below:
g0 = 32.2 ft/sec^2
p0 = 14.7 lb/in^2
T0 = 518 degrees Rankine
R = 1716 ft^2/sec^2 per degree Rankine
g = 1.4
The temperature ratio theta
and the pressure ratio delta are just numbers; they have no dimensions.
The quantity "w dot times the square root of theta divided by delta" is called the
corrected weight flow. If we substitute the numbers given above, we obtain:

wdot * sqrt(theta) / (delta * A) = .59352 * M * [1 + .2 * M^2]^-3
The corrected weight flow per unit area is just a function of the Mach number.
What good is all this? The multiplier (.59352) is computed from reference conditions, so
it never changes. The Mach number is a dimensionless property of the flow. We can
compute a value for the corrected airflow per unit area at any location in the flow domain.
Similarly, if we have a value for the corrected airflow per unit area, we can determine
the Mach number (and the velocity) at that location. For some problems, like an ideal nozzle,
the area changes but the weight flow stays the same. Using this function we can easily
determine the speed at any location. If we add (or subtract) mass, we can recompute the
corrected weight flow per area and still determine the velocity. If a shock is present,
the total pressure changes. Using the corrected weight flow per unit area function we can
determine how the velocity (Mach number) changes by multiplying by the total pressure
ratio. If heat is added, or work is done, which changes the total temperature, we can
again determine the effects on velocity. This function is extensively used by propulsion
engineers to quickly solve duct and nozzle problems.
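For readers who want to experiment away from the applet, here is a minimal Python sketch of the relation just derived; it assumes the English-unit sea level reference values listed above, which give the 0.59352 multiplier.

```python
def corrected_airflow_per_area(mach, gamma=1.4):
    """Corrected weight flow per unit area, wdot*sqrt(theta)/(delta*A),
    as a function of Mach number (English units, sea level reference)."""
    const = 0.59352  # g0 * p0 * sqrt(gamma / (R * T0)) for the values above
    expo = -(gamma + 1.0) / (2.0 * (gamma - 1.0))
    return const * mach * (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** expo

for m in (0.5, 1.0, 2.0):
    print(m, corrected_airflow_per_area(m))
# The function peaks at Mach 1, which is why a given flow per area
# below the peak corresponds to one subsonic and one supersonic Mach number.
```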
Here is a Java program that solves the corrected airflow equation.
By default, the program Input Variable is the Mach number
of the flow. Since the corrected airflow per area depends only on the Mach number,
the program can calculate the value of the airflow per
area and display the results on the right side of the output
variables. You can also have the program solve for the Mach number
that produces a desired value of flow per area.
Using the choice button labeled Input Variable, select "Flow per Area Wcor/A".
Next to the selection, you then type in a value for Wcor/A.
When you hit the red COMPUTE button,
the output values change. The corrected airflow is double valued;
for the same airflow ratio, there is a subsonic
and a supersonic solution. The choice button at the right top selects
the solution that is presented.
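One way the double-valued inverse might be handled numerically is bisection on each branch separately; the standalone Python sketch below (an illustration, not the applet's actual code) mirrors the calculator's subsonic/supersonic choice button, and repeats the 0.59352 relation derived above in the helper wcor_per_area.

```python
def wcor_per_area(mach, gamma=1.4):
    """Corrected weight flow per unit area (same relation as the sketch above)."""
    expo = -(gamma + 1.0) / (2.0 * (gamma - 1.0))
    return 0.59352 * mach * (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** expo

def mach_from_flow(target, supersonic=False, tol=1e-8):
    """Recover a Mach number from a corrected flow per area by bisection.

    The relation rises from zero to a peak at Mach 1 and falls beyond it,
    so each attainable value has one subsonic and one supersonic solution.
    """
    lo, hi = (1.0, 50.0) if supersonic else (1e-9, 1.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f = wcor_per_area(mid)
        # Increasing branch below Mach 1, decreasing branch above it.
        if (f < target) != supersonic:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

w = wcor_per_area(0.5)
print(mach_from_flow(w))                   # ~0.5, the subsonic solution
print(mach_from_flow(w, supersonic=True))  # ~1.7, the supersonic solution
```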
If you are an experienced user of this calculator, you can use a sleek version
of the program which loads faster on your computer and does not include these instructions.
You can also download your own copy of the program to run off-line by clicking on this button: | http://www.grc.nasa.gov/WWW/K-12/airplane/wcora.html | 13 |
54 | The Space Shuttle was conceived as a way of cheaply ferrying crews and cargo to an Earth-orbiting space station. Shuttle and Station were meant to work together to carry out a well-defined mission, much as the Apollo Command and Service Module (CSM) and Lunar Module (LM) spacecraft were meant to work together to land astronauts on the moon. Shuttle-Station’s mission was to enable astronauts to conduct long-term research in space. Such research would, it was expected, yield practical benefits, abstract scientific knowledge, improved space technology, experience with the physical and psychological effects of long-duration stays in space, and countermeasures for those effects that would make possible piloted voyages beyond the moon.
A space station had long been seen as an early step into the cosmos. Domestic and international political considerations had, however, preempted space station development in May 1961, when new President John F. Kennedy, smarting from the failed Bay of Pigs invasion of Cuba and continuing Soviet space victories, placed NASA on course for the moon. Up to that time, the Apollo CSM spacecraft had been seen mainly as a miniature space station and a space station ferry.
By mid-1968, with the moon close at hand, it was time to find NASA a new goal so that the expertise and infrastructure developed during Apollo would not be lost. Many in NASA felt that space station development should resume in earnest.
Below is a list of links to Beyond Apollo posts related to Shuttle and Station spanning from 1962 to 1992. Proposed space stations often appear paired with crew and cargo spacecraft other than the Shuttle Orbiter, such as modified Apollo CSMs. The Nixon Administration approved the Shuttle in early 1972, but not the Station it was meant to serve. This placed NASA in the unenviable position of inventing tasks for the Shuttle. Because of this, it sometimes appears coupled with systems for giving it enhanced functionality; for example, a solar power module designed to turn the Shuttle Orbiter into a makeshift space station.
At liftoff, the Shuttle stack comprised twin reusable Solid Rocket Boosters (SRBs), a reusable manned Orbiter with a 15-by-60-foot payload bay and three Space Shuttle Main Engines (SSMEs), and an expendable External Tank (ET) containing liquid hydrogen and liquid oxygen propellants for the SSMEs. The STS also included upper stages for launching spacecraft carried in the Orbiter’s payload bay to places beyond the Shuttle’s maximum orbital altitude. Until the mid-1980s, many in NASA hoped that a reusable Space Tug would eventually replace the expendable upper stages.
Early concept art of Jupiter Orbiter and Probe (JOP). Image: NASA
At the start of STS-23 (and, indeed, all STS missions), three Space Shuttle Main Engines (SSMEs) and twin Solid Rocket Boosters (SRBs) would ignite to push the Shuttle stack off the launch pad. The SSMEs, mounted at the tail of the Orbiter, would draw liquid hydrogen/liquid oxygen propellants from the large External Tank (ET) to which the Orbiter was attached. SRB separation would occur 128 seconds after liftoff at an altitude of about 155,900 feet and a speed of about 4417 feet per second.
The three SSMEs would operate until 510 seconds after liftoff, by which time the Orbiter and its expendable External Tank (ET) would be 362,600 feet above the Earth, traveling at a speed of about 24,310 feet per second. The SSMEs would then shut down, and the ET would separate, tumble, and reenter the atmosphere over the Indian Ocean. The Orbiter, meanwhile, would ignite its twin Orbital Maneuvering System engines to circularize its orbit above the atmosphere.
After the STS-23 Shuttle Orbiter reached 150-nautical-mile-high low-Earth orbit (LEO), its crew would open its payload bay doors and release JOP and its three-stage solid-propellant Interim Upper Stage (IUS). After the Orbiter moved a safe distance away, the IUS would ignite to begin JOP’s two-year direct voyage to Jupiter.
In February 1978, NASA gave JOP the name Galileo. Largely because of its reliance on the STS, Galileo suffered a series of costly delays, redesigns, and Earth-Jupiter trajectory changes. The first of these was, however, not the fault of the STS. As Galileo’s design firmed up, it put on weight, and was soon too heavy for the three-stage IUS to launch directly to Jupiter.
In January 1980, NASA decided to split Galileo into two spacecraft. The first, the Jupiter Orbiter, would leave Earth in February 1984. The second, an interplanetary bus carrying Galileo’s Jupiter atmosphere probe, would launch the following month. They would each depart LEO on a three-stage IUS and arrive at Jupiter in late 1986 and early 1987, respectively.
In late 1980, under pressure from Congress, NASA opted to launch the Galileo Orbiter and Probe out of LEO together on a liquid hydrogen/liquid oxygen-fueled Centaur G-prime upper stage. Centaur, a mainstay of robotic lunar and planetary programs since the 1960s, was expected to provide 50% more thrust than the three-stage IUS. Modifying it so that it could fly safely in the Shuttle Orbiter’s payload bay would, however, delay Galileo’s Earth departure until April 1985. The spacecraft would arrive at Jupiter in 1987.
Another delay resulted when David Stockman, director of President Ronald Reagan’s Office of Management and Budget, put Galileo on his “hit list” of Federal government projects to be scrapped in Fiscal Year 1982. The planetary science community campaigned successfully to save Galileo, but NASA lost the Centaur G-prime and three-stage IUS. The latter had been plagued by development delays.
In January 1982, NASA announced that Galileo would depart Earth orbit in April 1985 on a two-stage IUS with a solid-propellant kick stage. The spacecraft would then circle the Sun and fly past Earth for a gravity-assist that would place it on course for Jupiter. The new plan would add three years to Galileo’s flight time, postponing its arrival at Jupiter until 1990.
In July 1982, Congress overruled the Reagan White House when it mandated that NASA launch Galileo from LEO on a Centaur G-prime. The move would postpone its launch to 20 May 1986; however, because the Centaur could boost Galileo directly to Jupiter, it would reach its goal in 1988, not 1990. NASA designated the STS mission meant to launch Galileo STS-61G.
There matters rested until 28 January 1986, when, 73 seconds into mission STS-51L, the Orbiter Challenger was destroyed. A joint between two of the cylindrical segments making up the Shuttle stack’s right SRB leaked hot gases that rapidly eroded O-ring seals. A torch-like plume formed and impinged on the ET and the lower strut linking the ET to the SRB. The plume breached and weakened the ET’s liquid hydrogen tank, causing the strut to separate. Still firing – for a solid-rocket motor cannot be turned off once ignited – the right SRB pivoted on its upper attachment and crushed the ET’s liquid oxygen tank. Hydrogen and oxygen mixed and ignited in a giant fireball.
Appearances notwithstanding, Challenger did not explode. Instead, the Orbiter began a tumble while moving at about twice the speed of sound in a relatively dense part of Earth’s atmosphere. This subjected it to severe aerodynamic loads, causing it to break into several large pieces. The pieces, which included the crew compartment and the tail section with its three SSMEs, emerged from the fireball more or less intact. The mission’s main payload, the TDRS-B data relay satellite, remained attached to its two-stage IUS as Challenger‘s payload bay disintegrated around it.
The pieces arced upward for a time, reaching a maximum altitude of about 50,000 feet, then fell, tumbling, to crash into the Atlantic Ocean within view of the Shuttle launch pads at Kennedy Space Center, Florida. The crew compartment impacted 165 seconds after Challenger broke apart and sank in water about 100 feet deep.
After Challenger: technicians prepare the Galileo spacecraft for a six-year journey to Jupiter with Venus and Earth gravity-assists. Image: NASA
NASA grounded the STS for 32 months. During that period, it put in place new flight rules, abandoned potentially hazardous systems and missions, and, where possible, modified STS systems to help improve crew safety. On 19 June 1986, NASA canceled the Shuttle-launched Centaur G-prime. On 26 November 1986, it announced that a two-stage IUS would launch Galileo out of LEO. The Jupiter spacecraft would then perform gravity-assist flybys of Venus and Earth. On 15 March 1988, NASA scheduled Galileo’s launch for October 1989, with arrival at Jupiter in December 1995.
One month after NASA unveiled Galileo’s newest flight plan, Angus McRonald, an engineer at the Jet Propulsion Laboratory (JPL) in Pasadena, California, completed a brief report on the possible effects on Galileo and its IUS of a Shuttle accident during the 382-second period between SRB separation and SSME cutoff. McRonald was not specific about the nature of the “fault” that would produce such an accident, though he assumed that the Shuttle Orbiter would be separated from the ET and tumbling out of control. He based his analysis on data provided by NASA Johnson Space Center in Houston, Texas, where the Space Shuttle Program was managed.
McRonald also examined the effects of aerodynamic heating on Galileo’s twin electricity-generating Radioisotope Thermoelectric Generators (RTGs). The RTGs would each carry 18 General Purpose Heat Source (GPHS) modules containing four iridium-clad plutonium dioxide pellets each. The GPHS modules were encased in graphite and housed in protective aeroshells, making them unlikely to melt following an accident during Shuttle ascent. In all, Galileo would carry 34.4 pounds of plutonium.
McRonald assumed that both the Shuttle Orbiter and the Galileo/IUS combination would break up when subjected to atmospheric drag deceleration equal to 3.5 times the pull of gravity at Earth’s surface. Based on this, he determined that the Orbiter and its Galileo/IUS payload would always break up if a fault leading to “loss of control” occurred following SRB separation.
The Shuttle Orbiter would not break up as soon as loss of control occurred, however. At SRB separation altitude, atmospheric density would be low enough that the spacecraft would be subjected to only about 1% of the drag that tore apart Challenger. McRonald determined that the Shuttle Orbiter would ascend unpowered and tumbling, attain a maximum altitude, and fall back into the atmosphere, where drag would rip it apart.
He calculated that, for a fault that occurred 128 seconds after liftoff – that is, at the time the SRBs separated – the Shuttle Orbiter would break up as it fell back to 101,000 feet of altitude. The Galileo/IUS combination would fall free of the disintegrating Orbiter and break up at 90,000 feet, then the RTGs would fall to Earth without melting. Impact would take place in the Atlantic about 150 miles off the Florida coast.
The Space Shuttle Orbiter Atlantis pirouettes for observers on the International Space Station during mission STS-117 (10-19 June 2007). Image: NASA
For an intermediate case - for example, if a fault leading to loss of control occurred 260 seconds after launch at 323,800 feet of altitude and a speed of 7957 feet per second – then the Shuttle Orbiter would break up when it fell back to 123,000 feet. Galileo and its IUS would break up at 116,000 feet, and the RTG cases would melt and release the GPHS modules between 84,000 and 62,000 feet. Impact would occur in the Atlantic about 400 miles from Florida.
A fault that took place within 100 seconds of planned SSME cutoff – for example, one that caused loss of control 420 seconds after launch at 353,700 feet of altitude and at a speed of 20,100 feet per second - would result in an impact far downrange because the Shuttle Orbiter would be accelerating almost parallel to Earth’s surface when it occurred. McRonald calculated that Orbiter breakup would take place at 165,000 feet and the Galileo/IUS combination would break up at 155,000 feet.
McRonald found, surprisingly, that Galileo’s RTG cases might already have melted and released their GPHS modules by the time Galileo and the IUS disintegrated. He estimated that the RTGs would melt between 160,000 and 151,000 feet of altitude. Impact would occur about 1500 miles from Kennedy Space Center in the Atlantic west of Africa.
Impact points for accidents between 460 seconds and SSME cutoff at 510 seconds would be difficult to predict, McRonald noted. He estimated, however, that loss of control 510 seconds after liftoff would lead to wreckage falling in Africa, about 4600 miles downrange.
McRonald determined that Galileo’s RTG cases would always reach Earth’s surface intact if an accident leading to loss of control occurred between 128 and 155 seconds after liftoff. If the accident occurred between 155 and 210 seconds after launch, then Galileo’s RTG cases “probably” would not melt. If it occurred 210 seconds after launch or later, then the RTG cases would always melt and release the GPHS modules.
Galileo departs Atlantis’s payload bay on 18 October 1989. Image: NASA
STS flights resumed in September 1988 with the launch of the Orbiter Discovery on mission STS-26. A little more than a year later (18 October 1989), the Shuttle Orbiter Atlantis roared into space at the start of STS-34 (image at top of post). A few hours after liftoff, the Galileo/two-stage IUS combination was raised out of Atlantis‘s payload bay on an IUS tilt table and released. The IUS first stage ignited a short time later to propel Galileo toward Venus.
Galileo passed Venus on 10 February 1990, adding nearly 13,000 miles per hour to its speed. It then flew past Earth on 8 December 1990, gaining enough speed to enter the Main Belt of asteroids between Mars and Jupiter, where it encountered the asteroid Gaspra on 29 October 1991.
Galileo’s second Earth flyby on 8 December 1992 placed it on course for Jupiter. The spacecraft flew past the Main Belt asteroid Ida on 28 August 1993 and had a front-row seat for the Comet Shoemaker-Levy 9 Jupiter impacts in July 1994.
Flight controllers commanded Galileo to release its Jupiter atmosphere probe on 13 July 1995. The spacecraft relayed data from the probe as it plunged into Jupiter’s atmosphere on 7 December 1995. Galileo fired its main engine the next day to slow down so that Jupiter’s gravity could capture it into orbit.
Galileo spent the next eight years touring the Jupiter system. It performed gravity-assist flybys of the four largest Jovian moons to change its Jupiter-centered orbit. Despite difficulties with its umbrella-like main antenna and its tape recorder, it returned invaluable data on Jupiter, its enormous magnetosphere, and its varied and fascinating family of moons over the course of 34 orbits about the giant planet.
As Galileo neared the end of its propellant supply, NASA decided to dispose of it to prevent it from accidentally crashing on and possibly contaminating Europa, the ice-encrusted, tidally warmed ocean moon judged by some to be of high biological potential. On 21 September 2003, the venerable spacecraft dove into Jupiter’s banded clouds and disintegrated.
Galileo: Uncontrolled STS Orbiter Reentry, JPL D-4896, Angus D. McRonald, Jet Propulsion Laboratory, 15 April 1988.
Soon after President Richard Nixon gave his blessing to the Space Shuttle Program on 5 January 1972, NASA targeted its first orbital flight for 1977, then for March 1978. By early 1975, the date had slipped to March 1979. Funding shortfalls were to blame, as were the daunting technical challenges of developing the world’s first reusable orbital spaceship with 1970s technology. The schedule slip was actually worse than NASA let on: as early as 31 January 1975, an internal NASA document gave a “90% probability date” for the first Shuttle launch of December 1979.
In October 1977, Chester Lee, director of Space Transportation System (STS) Operations at NASA Headquarters, distributed the first edition of the STS Flight Assignment Baseline, a launch schedule and payload manifest for the first 16 operational Shuttle missions. The document was in keeping with NASA’s stated philosophy that reusable Shuttle Orbiters would fly on-time and often, like a fleet of cargo airplanes. The STS Utilization and Operations Office at NASA’s Johnson Space Center (JSC) in Houston had prepared the document, which was meant to be revised quarterly as new customers chose the Space Shuttle as their cheap and reliable ride to space.
The JSC planners assumed that six Orbital Flight Test (OFT) missions would precede the first operational Shuttle flight. The OFT flights would see two-man crews (Commander and Pilot) put Orbiter Vehicle (OV)-102 through its paces in low-Earth orbit. The planners did not include the OFT schedule in their document, but the 30 May 1980 launch date for their first operational Shuttle mission suggests that they based their flight schedule on the March 1979 first OFT mission date.
Thirteen of the 16 operational flights would use OV-102 and three would use OV-101. NASA would christen OV-102 Columbia in February 1979, shortly before it rolled out of the Rockwell International plant in Palmdale, California. As for OV-101, its name had been changed from Constitution to Enterprise in mid-1976 at the insistence of Star Trek fans (image at top of post). Enterprise flew in Approach and Landing Test (ALT) flights at Edwards Air Force Base in California beginning on 15 February 1977. ALT flights, which saw the Orbiter carried by and dropped from a modified 747, ended soon after the JSC planners released their document.
This page from the STS Flight Assignment Baseline document describes Shuttle flights 7 through 12A. Image: NASA.
The first operational Space Shuttle mission, Flight 7 (May 30-June 3, 1980), would see OV-102 climb to a 225-nautical-mile (n-mi) orbit inclined 28.5° relative to Earth’s equator (unless otherwise stated, all orbits are inclined at 28.5°, the latitude of Kennedy Space Center in Florida). The delta-winged Orbiter would carry a three-person crew in its two-deck crew compartment and the bus-sized Long Duration Exposure Facility (LDEF) in its 15-foot-wide, 60-foot-long payload bay. It would also carry a “payload of opportunity.” The presence of a payload of opportunity meant that the flight had excess payload capacity available. Payload mass up would total 27,925 pounds. Payload mass down after the Remote Manipulator System (RMS) arm had hoisted LDEF out of OV-102′s payload bay and released it into orbit would total 9080 pounds.
During Flight 8 (1-3 July 1980), OV-102 would orbit 160 n mi high. Three astronauts would release two satellites and their solid-propellant rocket stages: Tracking and Data Relay Satellite-A (TDRS-A) with a two-stage Interim Upper Stage (IUS) and the Satellite Business Systems-A (SBS-A) commercial communications satellite on a Spinning Solid Upper Stage-Delta-class (SSUS-D). Prior to release, the crew would raise TDRS-A on a tilt-table and spin the SBS-A about its long axis on a turntable to create gyroscopic stability. After release, their stages would propel them to their assigned slots in geostationary orbit (GEO), 19,323 n mi above the equator. Payload mass up would total 51,243 pounds; mass down, 8912 pounds, most of which would comprise reusable restraint and deployment hardware for the satellites.
The TDRS system, which would include three operational satellites and an orbiting spare, was meant to trim costs and improve communications coverage by replacing most of the ground-based Manned Space Flight Network (MSFN). Previous U.S. piloted missions had relied on MSFN ground stations to relay communications to and from the Mission Control Center (MCC) in Houston. Because spacecraft in low-Earth orbit could remain in range of a given ground station for only a few minutes at a time, they were frequently out of contact with the MCC.
NASA art from 1972 touts the Space Shuttle’s basic features and capabilities.
On Flight 9 (1-6 August 1980), OV-102 would climb to a 160-n-mi orbit. Three astronauts would deploy GOES-D, a National Oceanic and Atmospheric Administration (NOAA) weather satellite, and Anik-C/1, a Canadian communications satellite. Before release, the crew would raise the NOAA satellite and its SSUS-Atlas-class (SSUS-A) rocket stage on the tilt-table and spin up the Anik-C/1-SSUS-D combination on the turntable. Payload mass up would total 36,017 pounds; mass down, 21,116 pounds. JSC planners reckoned that OV-102 could carry a 14,000-pound payload of opportunity.
Following Flight 9, OV-102 would be withdrawn from service for 12 weeks to permit conversion from OFT to operational configuration. The JSC planners explained that conversion would be deferred until after Flight 9 to ensure an on-time first operational flight and to save time by combining it with Orbiter preparation for the first Spacelab mission on Flight 11. The switch from OFT to operational configuration would entail removal of Development Flight Instrumentation (sensors for monitoring Orbiter systems and performance); replacement of Commander and Pilot ejection seats on the crew compartment upper deck (the flight deck) with fixed seats; power system upgrades; and installation of an airlock on the crew compartment lower deck (the mid-deck).
Flight 10 (14-16 November 1980) would be a near-copy of Flight 8. The three-person crew of OV-102 would deploy TDRS-B/IUS and SBS-B/SSUS-D into a 160-n-mi-high orbit. The rocket stages would then boost them to GEO. Cargo mass up would total 53,744 pounds; mass down, 11,443 pounds.
Flight 11 (18-25 December 1980) would see the orbital debut of Spacelab. OV-102 would orbit Earth 160 n mi high at 57° of inclination. NASA and the multinational European Space Research Organization (ESRO) agreed in August 1973 that Europe should develop and manufacture Spacelab pressurized modules and unpressurized pallets for use in the Shuttle Program. Initially dubbed the “sortie lab,” Spacelab would operate only in the payload bay; it was not intended as an independent space station, though many hoped that it would help to demonstrate that an Earth-orbiting station could be useful. ESRO merged with the European Launcher Development Organization in 1975 to form the European Space Agency (ESA). Flight 11′s five-person crew would probably include scientists and at least one astronaut from an ESA member country.
Flight 12 (30 January-1 February 1981), a near-copy of Flights 8 and 10, would see OV-102′s three-person crew deploy TDRS-C/IUS and Anik-C/2/SSUS-D into 160-n-mi-high orbit. Payload mass up would total 53,744 pounds; mass down, 11,443 pounds.
JSC planners inserted an optional “Flight 12 Alternate” (30 January-4 February 1981) into their schedule which, if flown, would replace Flight 12. OV-102 would orbit 160 n mi above the Earth. Its three-person crew would, like that of Flight 12, deploy Anik-C/2 on an SSUS-D, but the mission’s main purpose would be to create a backup launch opportunity for an Intelsat V-class satellite scheduled for launch on a U.S. Atlas-Centaur or European Ariane I rocket. An SSUS-A stage would boost the Intelsat V to GEO. The planners assumed that, besides the satellites and stages and their support hardware, OV-102 would tote an attached payload of opportunity that would need to operate in space for five days to provide useful data (hence the mission’s planned duration). Payload mass up would total 37,067 pounds; mass down, 17,347 pounds.
Shuttle flights 13 through 18, spanning the period from March through July 1981. Image: NASA.
Flight 13 (3-8 March 1981) would see three astronauts on board OV-102 release NOAA’s GOES-E satellite on an SSUS-D stage into a 160-n-mi-high orbit. OV-102 would have room for two payloads of opportunity: one attached at the front of the payload bay and one deployed from a turntable aft of the GOES-E/SSUS-D combination. Payload mass up would total 38,549 pounds; mass down, 23,647 pounds.
Flight 14 would last 12 days, making it the longest described in the STS Flight Assignment Baseline document. Scheduled for launch on 7 April 1981, it would carry a “train” of four unpressurized Spacelab experiment pallets and an “Igloo,” a small pressurized compartment for pallet support equipment. The Igloo, though pressurized, would not be accessible to the five-person crew. OV-102 would orbit 225 n mi high at an inclination of 57°. Mass up would total 31,833 pounds; mass down, 28,450 pounds.
Flight 15 (13-15 May 1981) would be a near-copy of Flights 8, 10, and 12. OV-102 would transport to orbit a payload totaling 53,744 pounds; payload mass down would total 11,443 pounds. The JSC planners noted the possibility that none of the potential payloads for Flight 15 – TDRS-D and SBS-C or Anik-C/3 – would need to be launched as early as May 1981. TDRS-D was meant as an orbiting spare; if the first three TDRS operated as planned, its launch could be postponed. Likewise, SBS-C and Anik-C/3 were each a backup for the previously launched satellites in their series.
Flight 16 (16-23 June 1981) would be a five-person Spacelab pressurized module flight aboard OV-102 in 160-n-mi-high orbit. Payloads of opportunity totaling about 18,000 pounds might accompany the Spacelab module; for planning purposes, a satellite and SSUS-D on a turntable behind the module was assumed. Payload mass up would total 35,676 pounds; mass down, 27,995 pounds.
Flight 17, scheduled for 16-20 July 1981, would see the space debut of Enterprise and the retrieval of the LDEF released during Flight 7. Enterprise would climb to a roughly 200-n-mi-high orbit (LDEF’s altitude after 13.5 months of orbital decay would determine the mission’s precise altitude). Before rendezvous with LDEF, the three-man crew would release an Intelsat V/SSUS-A and a satellite payload of opportunity. After the satellites were sent on their way, the astronauts would pilot Enterprise to a rendezvous with LDEF, snare it with the RMS, and secure it in the payload bay. Mass up would total 26,564 pounds; mass down, 26,369 pounds.
For Flight 18 (29 July-5 August 1981), OV-102 would carry to a 160-n-mi-high orbit a Spacelab pallet dedicated to materials processing in the vacuum and microgravity of space. The three-person flight would also carry the first acknowledged Department of Defense (DOD) payload of the Shuttle Program, a U.S. Air Force pallet designated STP-P80-1. JSC planners noted cryptically that this was the Teal Ruby experiment “accommodated from OFT.” The presence of the Earth-directed Teal Ruby sensor payload accounted for Flight 18′s planned 57° orbital inclination, which would take it over most of Earth’s densely populated areas. Payload mass up might total 32,548 pounds; mass down, 23,827 pounds.
Shuttle flights 19 through 23, spanning from August 1981 to January 1982. Image: NASA.
Flight 19 (2-9 September 1981) would see five Spacelab experiment pallets fill OV-102′s payload bay. Five astronauts would operate the experiments, which would emphasize physics and astronomy. The Orbiter would circle Earth in a 216-n-mi-high orbit. Payload mass up would total 29,214 pounds; mass down, 27,522 pounds.
Flight 20 (30 September-6 October 1981), the second Enterprise mission, would see five astronauts conduct life science and astronomy experiments in a 216-n-mi-high orbit using a Spacelab pressurized module and an unpressurized pallet. JSC planners acknowledged that the mission’s down payload mass (34,248 pounds) might be “excessive,” but added that their estimate was only “based on preliminary payload data.” Mass up would total 37,065 pounds.
On Flight 21, scheduled for launch on 14 October 1981, OV-102 would carry the first Orbital Maneuvering System (OMS) Kit at the aft end of its payload bay. The OMS Kit would carry enough supplemental propellants for the Orbiter’s twin rear-mounted OMS engines to perform a velocity change of 500 feet per second. This would enable OV-102 to rendezvous with and retrieve the Solar Maximum Mission (SMM) satellite in a 300-n-mi-high orbit. Three astronauts would fly the five-day mission, which would reach the highest altitude of any in the STS Flight Assignment Baseline document. JSC planners noted that the Multi Mission Modular Spacecraft (MMS) support hardware meant to carry SMM back to Earth could also transport an MMS-type satellite into orbit. Payload mass up would total 37,145 pounds; mass down, 23,433 pounds.
On Flight 22 (25 November-2 December 1981), Enterprise might carry an ESA-sponsored Spacelab mission with a five-person crew, a pressurized lab module, and a pallet to a 155-to-177-n-mi orbit inclined at 57°. Payload mass up might total 34,031 pounds; mass down, 32,339 pounds.
During Flight 23 (5-6 January 1982), the last described in the STS Flight Assignment Baseline document, three astronauts would deploy into a 150-to-160-n-mi-high orbit the Jupiter Orbiter and Probe (JOP) spacecraft on a stack of three IUSs. President Jimmy Carter had requested new-start funds for JOP in his Fiscal Year 1978 NASA budget, which had taken effect on 1 October 1977. Because JOP was so new when they prepared their document, JSC planners declined to estimate up/down payload masses.
Flight 23 formed an anchor point for their schedule because JOP had a launch window dictated by the movements of the planets. If the automated explorer did not leave for Jupiter between 2 January 1982 and 12 January 1982, it would mean a 13-month delay while Earth and Jupiter moved into position for another launch attempt.
Almost nothing in the October 1977 STS Flight Assignment Baseline document occurred as planned. It was not even updated quarterly; no update had been issued as of mid-November 1978, by which time the target launch dates for the first Space Shuttle orbital mission and the first operational flight had slipped officially to 28 September 1979 and 27 February 1981, respectively. The first Shuttle flight, designated STS-1, did not in fact lift off until 12 April 1981. As in the STS Flight Assignment Baseline document, OV-102 Columbia performed the OFT missions; OFT concluded, however, after only four flights. After the seven-day STS-4 mission (27 June-4 July, 1982), President Ronald Reagan declared the Shuttle operational.
Space Shuttle Columbia lifts off on 12 April 1981, beginning STS-1, the first Space Shuttle mission. Image: NASA.
The first operational flight, also using Columbia, was STS-5 (11-16 November 1982). The mission launched SBS-3 and Anik-C/3; because of Shuttle delays, the other SBS and Anik-C satellites planned for Shuttle launch had already reached space atop expendable rockets.
To the chagrin of Star Trek fans, Enterprise never reached space. NASA decided that it would be less costly to convert Structural Test Article-099 into a flightworthy Orbiter than to refit Enterprise. OV-099, christened Challenger, first reached space on mission STS-6 (4-9 April 1983), which saw deployment of the first TDRS satellite.
The voluminous Spacelab pressurized module first reached orbit on board Columbia on mission STS-9 (28 November-8 December 1983). The 10-day Spacelab 1 mission included two researchers from ESA and NASA scientist-astronauts Owen Garriott and Robert Parker. Garriott, selected to be an astronaut in 1965, had flown for 59 days on board the Skylab space station in 1973. Parker had been selected in 1967, but STS-9 was his first spaceflight.
The 21,500-pound LDEF reached Earth orbit on board Challenger on STS-41C, the 11th Space Shuttle mission (6-13 April 1984). During the same mission, astronauts captured, repaired, and released the SMM satellite, which had reached orbit on 14 February 1980 and malfunctioned in January 1981.
The STS Flight Assignment Baseline document assumed that 22 Shuttle flights (six OFT and 16 operational) would occur before January 1982. In fact, the 22nd Shuttle flight did not begin until October 1985, when Challenger carried eight astronauts and the West German Spacelab D1 into space (STS-61A, 30 October-6 November 1985). Three months later (28 January 1986), Challenger was destroyed at the start of STS-51L, the Shuttle Program’s 25th mission.
In addition to seven astronauts – NASA's first in-flight fatalities – Challenger took with it the second TDRS satellite, TDRS-B. The Shuttle would not fly again until September 1988 (STS-26, 29 September-3 October 1988). On that mission, OV-103 Discovery deployed TDRS-C. The TDRS system would not include the three satellites necessary for global coverage until TDRS-D reached orbit on board Discovery on mission STS-29 (13-18 March 1989).
Following the Challenger accident, NASA abandoned – though not without some resistance – the pretense that it operated a fleet of cargo planes. The space agency at one time had aimed for 60 Shuttle flights per year; between 1988 and 2003, the Shuttle Program managed about six per year. The most flights the Shuttle fleet accomplished in a year was nine in 1985.
Shuttle delays meant that JOP, renamed Galileo, missed its early January 1982 launch window. It was eventually rescheduled for May 1986, but the Challenger accident intervened. Galileo finally left Earth orbit on 18 October 1989 following deployment from OV-104 Atlantis during STS-34 (18-23 October 1989). Citing new concern for safety following Challenger, NASA canceled the powerful liquid-propellant Centaur rocket stage upon which Galileo had become dependent. The spacecraft had to rely instead on the less-powerful IUS, which meant that it could not be launched directly to Jupiter: instead, it had to perform gravity-assist flybys of Venus and Earth. Galileo did not reach the Jupiter system until December 1995.
LDEF had been scheduled to be retrieved in March 1985, less than a year after deployment, but flight delays and the Challenger accident postponed retrieval for nearly six years. On mission STS-32 (9-20 January 1990), astronauts on board Columbia retrieved LDEF, the orbit of which had decayed to 178 n mi. LDEF was the largest object ever retrieved from space and returned to Earth.
During reentry at the end of mission STS-107 (16 January-1 February 2003), Columbia broke apart over northeast Texas. This precipitated cancellation of the Space Shuttle Program by President George W. Bush, who announced his decision on 14 January 2004. The program’s end was originally scheduled for 2010, immediately following the planned completion of the International Space Station. In the event, STS-135, the final Space Shuttle mission, took place in July 2011, three months after the 30th anniversary of STS-1. The Orbiter Atlantis lifted off on 8 July, with a four-person crew (the smallest since STS-6), docked with the International Space Station to deliver supplies and spares, and landed in Florida 13 days later.
STS Flight Assignment Baseline, JSC-13000-0, STS Utilization and Planning Office, NASA Johnson Space Center, 15 October 1977.
MSF Schedule Assessment of Major Space Shuttle Milestones (SENSITIVE), J. Yardley and M. Malkin, 31 January 1975.
I research and write about the history of space exploration and space technology with an emphasis on missions and programs planned but not flown (that is, the vast majority of them). Views expressed are my own. | http://www.wired.com/wiredscience/tag/challenger/ | 13 |
83 | Rational functions are an important and useful class of functions, but there are others. We actually get most useful functions by starting with two additional functions beyond the identity function, and allowing two more operations in addition to addition, subtraction, multiplication and division.
What additional starting functions?
The two are the exponential function, which we will write for the moment as exp(x), and the sine function, which is generally written as sin(x).
And what are these?
We will devote some time and effort to introducing and describing these two functions and their many wonderful properties very soon. For now, all we care about is that they exist, you can find them on spreadsheets and scientific calculators, and we can perform arithmetic operations (addition, subtraction, multiplication and division) on them. If you want just a hint, the sine function is the basic function of the study of angles, which is called trigonometry. The exponential function is defined in terms of derivatives: it is the function whose value at argument 0 is 1 and whose derivative everywhere is the same as itself. We have exp(0) = 1 and exp'(x) = exp(x) for every x.
This definition may make the function a bit mysterious to you at first, but you have to admit that it makes it easy to differentiate this function.
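If you want to see how much that property pins down, here is a minimal numerical sketch (written in Python, which is an arbitrary choice here; the number of steps is also arbitrary). It builds approximate values of exp using nothing but exp(0) = 1 and the rule that the slope always equals the current value — the simplest version of what is called Euler's method.

```python
# A rough numerical illustration of the defining property above: start from
# exp(0) = 1 and step forward in small increments dx, each time adding the
# current slope (which equals the current value) times dx.

import math

def approx_exp(x, steps=100000):
    dx = x / steps           # size of each small step
    value = 1.0              # exp(0) = 1
    for _ in range(steps):
        value += value * dx  # the slope at the current point is the value itself
    return value

print(approx_exp(1.0))       # roughly 2.71826..., close to
print(math.exp(1.0))         # 2.718281828...
```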
And what additional operations are there?
The two new operations that we want to use are substitution, and inversion.
And what are these?
If we have two functions, f and g, with values f(x) and g(x) at argument x, we can construct a new function, which we write as f(g), that is gotten by using the value of g at argument x as the argument of f. The value of f(g) at x, which we write as f(g(x)), is the value of f at argument given by the value of g at x; it is the value of f at argument g(x). We call this new function the substitution of g into f. We'll get to inversion next.
If you substitute a polynomial into a polynomial, you just get a polynomial, and if you substitute a rational function into a rational function, you still have a rational function. But if you substitute these things into exponentials and sines you get entirely new things, like exp(-cx²), which is the basic function of probability theory.
Just as utilizing copies of the exponential or sine functions presents no problem to a spreadsheet or scientific calculator, substitution presents no real problem. You can create g(A10) in B10, and then f(B10) in C10, and you have created the substituted value f(g(A10)) in C10. You can, by repeating this procedure, construct the most horrible looking combination of substitutions and arithmetical operations imaginable, and even worse than you could imagine, with very little difficulty, and you can find their numerical derivatives as well.
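The same column-by-column idea carries over to a few lines of code. The sketch below (Python instead of a spreadsheet; the particular f and g are made up for illustration) builds g(x), then f(g(x)), and estimates the derivative of the composite numerically with a centered difference quotient.

```python
# A sketch of the spreadsheet procedure: "column A" holds x, "column B" holds
# g(x), "column C" holds f(g(x)); the numerical derivative of the composite is
# then estimated by a centered difference quotient.

def g(x):
    return x**2 + 1          # "column B"

def f(s):
    return s**3              # "column C" applies f to the values in column B

def composite(x):
    return f(g(x))

def numerical_derivative(func, x, dx=1e-6):
    # centered difference: (func(x + dx) - func(x - dx)) / (2 * dx)
    return (func(x + dx) - func(x - dx)) / (2 * dx)

x = 2.0
print(composite(x))                        # f(g(2)) = 5**3 = 125
print(numerical_derivative(composite, x))  # roughly 300, as the chain rule predicts
```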
Before we go on to the last operation, we note that there is a great property associated with the operation of substitution. Just as we have found formulae above for finding the derivative of a sum or product or ratio of functions whose derivatives we know, we have a neat formula for the derivative of a substitution function in terms of the derivatives of its constituents. Actually it is about as simple a formula for this as could be.
The result is often called the chain rule:
The derivative of f(g(x)) with respect to x at some argument z, like any other derivative, is the slope of the straight line tangent to this function at argument z. This slope, like all slopes, is the ratio of the change in the given function to a change in its argument, in any interval very near argument z.
Suppose then, we make a very small change in the variable x, very near to x = z, a change that is sufficiently small that the linear approximations to g and to f(g) are extremely accurate within the interval of change. Let us call that change dx. This will cause a change in g(x) of g'(z)dx (because the definition of g'(z) is the ratio of the change of g to the change of x for x very near to z).
If g'(z) is 0, then g will not change and neither will f(g(x)), which depends on x only in that its argument g depends on x.
If g'(z) is not 0, we can define dg to be g'(z)dx, and use the fact that the change in f for arguments near g(z) is given by df = f'(g(z))dg, where the derivative f' is evaluated for arguments of g near g(z).
If we put these two statements together, which we can do by substituting g'(z)dx for dg in the expression here for df, we find that the change in f is given by the product of the two derivatives multiplied by the change in x: df = f'(g(z))g'(z)dx.
If we now divide both sides by dx, we obtain the famous "chain rule", which tells us how to compute the derivative of a function defined by substituting one function in another.
It follows from this remark that the chain rule reads (f(g))'(z) = f'(g(z)) · g'(z).
In words, this means that the derivative of the substituted function with values f(g(z)), with respect to the variable z is the product of the derivatives of the constituent functions f and g, taken at the relevant arguments: which are z itself for g and g(z) for f.
How about some examples?
We will give two examples, but you should work out at least a dozen for yourself.
Example 1: Suppose we substitute the function g described by values g(x) = x² + 1 into the function f described by values f(x) = x³ - c, where c is some constant (its exact value does not matter for what follows). The substituted function f(g) then has values f(g(x)) = (x² + 1)³ - c.
Let us compute the derivative of this function. The derivative of f(s) with respect to s is 3s², while the derivative of g(x) with respect to x is 2x.
If we set s = g(x) and take the product of these two we get (f(g))'(x) = 3(x² + 1)² · 2x = 6x(x² + 1)².
You could multiply out the cube here and then differentiate to get the same answer, but that is much messier, and most people would make at least one mistake in doing it. You have a chance of getting such things right even the first time, if you do them by the chain rule. (Unfortunately, if you do, you will not get any practice debugging from it.)
Example 2: Find the derivative of the function whose values are exp(-x²/2).
This is the function obtained by substituting the function with values -x²/2 into the exponential function.
Now the derivative of the function with values -x²/2 is the function with values -x (and remember that the exponential function is its own derivative).
On applying the chain rule we find that the derivative is the function with values -x·exp(-x²/2).
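If you want to check both of these chain-rule answers without redoing the algebra, compare them with difference quotients at a few points. Here is a short sketch (Python; the constant from Example 1 is taken as 0, since it does not affect the derivative):

```python
import math

def numerical_derivative(func, x, dx=1e-6):
    # centered difference quotient
    return (func(x + dx) - func(x - dx)) / (2 * dx)

# Example 1: f(g(x)) = (x**2 + 1)**3; the chain rule gives 6*x*(x**2 + 1)**2
ex1 = lambda x: (x**2 + 1)**3
ex1_chain = lambda x: 6 * x * (x**2 + 1)**2

# Example 2: exp(-x**2/2); the chain rule gives -x*exp(-x**2/2)
ex2 = lambda x: math.exp(-x**2 / 2)
ex2_chain = lambda x: -x * math.exp(-x**2 / 2)

for x in (0.5, 1.0, 2.0):
    print(x, ex1_chain(x), numerical_derivative(ex1, x))
    print(x, ex2_chain(x), numerical_derivative(ex2, x))
```

At each point the chain-rule value and the difference quotient agree to many decimal places, which is exactly what the rule promises.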
7.1 Write an expression for the result of substituting g into f to form f(g) for the following pairs of functions, and find expressions for the derivative (f(g))'(x) using the chain rule.
a. f defined by , g defined by .
b. f defined by f(x) = -x, g by g(x) = exp(x).
c. f defined by f(x) = exp(x), g by g(x) = -x.
7.2 Check each of your results using the derivative applet.
7.3 a. Consider the function defined by the formula x⁴ - 2x + 3. Use the applet to plot it and see its derivative. Where is its minimum value, and what is it? What is its derivative at the minimum point? Estimate these things from the applet.
b. Find the maximum point for f and the value of f at it, approximately, for f defined by f(x) = x²exp(-x).
OK, where am I now?
At this point you have rules that enable you to differentiate all functions that you can make up using arithmetic operations and substitutions starting with the identity function (f(x) = x) or with the mysterious exponential function, f(x) = exp(x).
In the next section we will extend things so you can start with the sine function, f = sin x as well and differentiate anything you can create. Finally we will extend the rules to differentiating inverse functions as well. | http://www-math.mit.edu/~djk/calculus_beginners/chapter06/section01.html | 13 |
149 | An illustration of the helium atom, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case. The black bar is one angstrom (10⁻¹⁰ m or 100 pm).
The atom is a basic unit of matter that consists of a dense central nucleus surrounded by a cloud of negatively charged electrons. The atomic nucleus contains a mix of positively charged protons and electrically neutral neutrons (except in the case of hydrogen-1, which is the only stable nuclide with no neutrons). The electrons of an atom are bound to the nucleus by the electromagnetic force. Likewise, a group of atoms can remain bound to each other by chemical bonds based on the same force, forming a molecule. An atom containing an equal number of protons and electrons is electrically neutral, otherwise it is positively or negatively charged and is known as an ion. An atom is classified according to the number of protons and neutrons in its nucleus: the number of protons determines the chemical element, and the number of neutrons determines the isotope of the element.1
Chemical atoms, which in science now carry the simple name of "atom," are minuscule objects with diameters of a few tenths of a nanometer and tiny masses proportional to the volume implied by these dimensions. Atoms can only be observed individually using special instruments such as the scanning tunneling microscope. Over 99.94% of an atom's mass is concentrated in the nucleus,note 1 with protons and neutrons having roughly equal mass. Each element has at least one isotope with an unstable nucleus that can undergo radioactive decay. This can result in a transmutation that changes the number of protons or neutrons in a nucleus.2 Electrons that are bound to atoms possess a set of stable energy levels, or orbitals, and can undergo transitions between them by absorbing or emitting photons that match the energy differences between the levels. The electrons determine the chemical properties of an element, and strongly influence an atom's magnetic properties. The principles of quantum mechanics have been successfully used to model the observed properties of the atom.
The name atom comes from the Greek ἄτομος (atomos, "indivisible") from ἀ- (a-, "not") and τέμνω (temnō, "I cut"),3 which means uncuttable, or indivisible, something that cannot be divided further.4 The concept of an atom as an indivisible component of matter was first proposed by early Indian and Greek philosophers. In the 18th and 19th centuries, chemists provided a physical basis for this idea by showing that certain substances could not be further broken down by chemical methods, and they applied the ancient philosophical name of atom to the chemical entity. During the late 19th and early 20th centuries, physicists discovered subatomic components and structure inside the atom, thereby demonstrating that the chemical "atom" was divisible and that the name might not be appropriate.56 However, it was retained. This has led to some debate about whether the ancient philosophers, who intended to refer to fundamental individual objects with their concept of "atoms," were referring to modern chemical atoms, or something more like indivisible subatomic particles such as leptons or quarks, or even some more fundamental particle that has yet to be discovered.7
The concept that matter is composed of discrete units and cannot be divided into arbitrarily tiny quantities has been around for millennia, but these ideas were founded in abstract, philosophical reasoning rather than experimentation and empirical observation. The nature of atoms in philosophy varied considerably over time and between cultures and schools, and often had spiritual elements. Nevertheless, the basic idea of the atom was adopted by scientists thousands of years later because it elegantly explained new discoveries in the field of chemistry.8 The ancient name of "atom" from atomism had already been nearly universally used to describe chemical atoms by that time, and it was therefore retained as a term, long after chemical atoms were found to be divisible, and even after smaller, truly indivisible particles were identified.
References to the concept of atoms date back to ancient Greece and India. In India, the Ājīvika, Jain, and Cārvāka schools of atomism may date back to the 6th century BCE.9 The Nyaya and Vaisheshika schools later developed theories on how atoms combined into more complex objects.10 In the West, the references to atoms emerged in the 5th century BCE with Leucippus, whose student, Democritus, systematized his views. In approximately 450 BCE, Democritus coined the term átomos (Greek: ἄτομος), which means "uncuttable" or "the smallest indivisible particle of matter". Although the Indian and Greek concepts of the atom were based purely on philosophy, modern science has retained the name coined by Democritus.8
Corpuscularianism is the postulate, expounded in the 13th-century by the alchemist Pseudo-Geber (Geber),11 sometimes identified with Paul of Taranto, that all physical bodies possess an inner and outer layer of minute particles or corpuscles.12 Corpuscularianism is similar to the theory of atomism, except that where atoms were supposed to be indivisible, corpuscles could in principle be divided. In this manner, for example, it was theorized that mercury could penetrate into metals and modify their inner structure.13 Corpuscularianism stayed a dominant theory over the next several hundred years.
In 1661, natural philosopher Robert Boyle published The Sceptical Chymist in which he argued that matter was composed of various combinations of different "corpuscules" or atoms, rather than the classical elements of air, earth, fire and water.14 During the 1670s corpuscularianism was used by Isaac Newton in his development of the corpuscular theory of light.1215
Origin of scientific theory
Further progress in the understanding of atoms did not occur until the science of chemistry began to develop. In 1789, French nobleman and scientific researcher Antoine Lavoisier discovered the law of conservation of mass and defined an element as a basic substance that could not be further broken down by the methods of chemistry.16
In 1805, English instructor and natural philosopher John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers (the law of multiple proportions) and why certain gases dissolved better in water than others. He proposed that each element consists of atoms of a single, unique type, and that these atoms can join together to form chemical compounds.1718 Dalton is considered the originator of modern atomic theory.19
Dalton's atomic hypothesis did not specify the size of atoms. Common sense indicated they must be very small, but nobody knew how small. Therefore it was a major landmark when in 1865 Johann Josef Loschmidt measured the size of the molecules that make up air.
An additional line of reasoning in support of particle theory (and by extension atomic theory) began in 1827 when botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically—a phenomenon that became known as "Brownian motion". J. Desaulx suggested in 1877 that the phenomenon was caused by the thermal motion of water molecules, and in 1905 Albert Einstein produced the first mathematical analysis of the motion.202122 French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of atoms, thereby conclusively verifying Dalton's atomic theory.23
In 1869, building upon earlier discoveries by such scientists as Lavoisier, Dmitri Mendeleev published the first functional periodic table.24 The table itself is a visual representation of the periodic law, which states that certain chemical properties of elements repeat periodically when arranged by atomic number.25
Subcomponents and quantum theory
The physicist J. J. Thomson, through his work on cathode rays in 1897, discovered the electron, and concluded that they were a component of every atom. Thus he overturned the belief that atoms are the indivisible, ultimate particles of matter.26 Thomson postulated that the low mass, negatively charged electrons were distributed throughout the atom, possibly rotating in rings, with their charge balanced by the presence of a uniform sea of positive charge. This later became known as the plum pudding model.
In 1909, Hans Geiger and Ernest Marsden, under the direction of physicist Ernest Rutherford, bombarded a sheet of gold foil with alpha rays—by then known to be positively charged helium atoms—and discovered that a small percentage of these particles were deflected through much larger angles than was predicted using Thomson's proposal. Rutherford interpreted the gold foil experiment as suggesting that the positive charge of a heavy gold atom and most of its mass was concentrated in a nucleus at the center of the atom—the Rutherford model.27
While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table.28 The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J.J. Thomson created a technique for separating atom types through his work on ionized gases, which subsequently led to the discovery of stable isotopes.29
Meanwhile, in 1913, physicist Niels Bohr suggested that the electrons were confined into clearly defined, quantized orbits, and could jump between these, but could not freely spiral inward or outward in intermediate states.30 An electron must absorb or emit specific amounts of energy to transition between these fixed orbits. When the light from a heated material was passed through a prism, it produced a multi-colored spectrum. The appearance of fixed lines in this spectrum was successfully explained by these orbital transitions.31
Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius Van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today.32
Chemical bonds between atoms were now explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons.33 As the chemical properties of the elements were known to largely repeat themselves according to the periodic law,34 in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus.35
The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of the atom. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split based on the direction of an atom's angular momentum, or spin. As this direction is random, the beam could be expected to spread into a line. Instead, the beam was split into two parts, depending on whether the atomic spin was oriented up or down.36
In 1924, Louis de Broglie proposed that all particles behave to an extent like waves. In 1926, Erwin Schrödinger used this idea to develop a mathematical model of the atom that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at the same time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed.3738
The development of the mass spectrometer allowed the exact mass of atoms to be measured. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule.39 The explanation for these different isotopes awaited the discovery of the neutron, a neutral-charged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus.40
Fission, high-energy physics and condensed matter
In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product.41 A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result was the first experimental nuclear fission.4243 In 1944, Hahn received the Nobel prize in chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized.44
In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies.45 Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions.46
Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. However, the hydrogen-1 atom has no neutrons and a positive hydrogen ion has no electrons.
The electron is by far the least massive of these particles at 9.11×10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques.47 Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726×10⁻²⁷ kg, although this can be reduced by changes to the energy binding the proton into an atom. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of electrons,48 or 1.6929×10⁻²⁷ kg. Neutrons and protons have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined.49
In the Standard Model of physics, electrons are truly elementary particles with no internal structure. However, both protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2⁄3) and one down quark (with a charge of −1⁄3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles.5051
The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces.5051
All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A fm, where A is the total number of nucleons.52 This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other.53
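To get a sense of the scale this rule implies, the short sketch below (Python; the nuclides chosen are arbitrary examples) evaluates 1.07 ∛A fm for a few values of A, which can be compared with the roughly 10⁵ fm radius of the atom quoted above.

```python
# Nuclear radius from the rule quoted above: r ≈ 1.07 * A**(1/3) femtometers,
# where A is the number of nucleons.

def nuclear_radius_fm(A):
    return 1.07 * A ** (1.0 / 3.0)

for name, A in [("helium-4", 4), ("carbon-12", 12), ("gold-197", 197)]:
    print(name, round(nuclear_radius_fm(A), 2), "fm")
# helium-4 ~1.7 fm, carbon-12 ~2.4 fm, gold-197 ~6.2 fm -- tens of thousands of
# times smaller than the ~1e5 fm scale of the atom as a whole.
```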
Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determine the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay.54
The neutron and the proton are different types of fermions. The Pauli exclusion principle is a quantum mechanical effect that prohibits identical fermions, such as multiple protons, from occupying the same quantum physical state at the same time. Thus every proton in the nucleus must occupy a different state, with its own energy level, and the same rule applies to all of the neutrons. This prohibition does not apply to a proton and neutron occupying the same quantum state.55
For atoms with low atomic numbers, a nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with roughly matching numbers of protons and neutrons are more stable against decay. However, with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus, which modifies this trend. Thus, there are no stable nuclei with equal proton and neutron numbers above atomic number Z = 20 (calcium); and as Z increases toward the heaviest nuclei, the ratio of neutrons per proton required for stability increases to about 1.5.55
The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3–10 keV to overcome their mutual repulsion—the coulomb barrier—and fuse together into a single nucleus.56 Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.5758
If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate.59
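As an illustrative calculation, the sketch below (Python) converts a mass deficit into its energy equivalent with E = mc². The deficit used, about 0.0304 atomic mass units for helium-4, and the conversion constants are standard reference values rather than figures taken from this article.

```python
# E = m*c**2 applied to a nuclear binding energy.
# Assumed inputs: helium-4 mass defect ~0.0304 u (a commonly quoted value),
# 1 u = 1.66054e-27 kg, c = 2.9979e8 m/s, 1 MeV = 1.602e-13 J.

U_TO_KG = 1.66054e-27
C = 2.9979e8
J_PER_MEV = 1.602e-13

mass_defect_u = 0.0304
mass_defect_kg = mass_defect_u * U_TO_KG
energy_joules = mass_defect_kg * C ** 2
energy_mev = energy_joules / J_PER_MEV

print(energy_joules)   # about 4.5e-12 J
print(energy_mev)      # about 28 MeV, the approximate binding energy of helium-4
```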
The fusion of two nuclei that create larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together.60 It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means fusion processes producing nuclei that have atomic numbers higher than about 26, and atomic masses higher than about 60, are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star.55
The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations.
Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured.61 Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form.62 Orbitals can have one or more ring or node structures, and they differ from each other in size, shape and orientation.63
Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines.62
The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom,64 compared to 2.23 million eV for splitting a deuterium nucleus.65 Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.66
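The 13.6 eV figure can be translated into the wavelength of the photon that just ionizes hydrogen, using the relation E = hc/λ (a standard relation, not stated in this article). A short sketch with standard values of the constants:

```python
# Wavelength of a photon carrying 13.6 eV, the hydrogen ionization energy
# quoted above, using E = h*c/lambda.
# Constants (standard values): h = 6.626e-34 J*s, c = 2.998e8 m/s,
# 1 eV = 1.602e-19 J.

H = 6.626e-34
C = 2.998e8
J_PER_EV = 1.602e-19

energy_joules = 13.6 * J_PER_EV
wavelength_m = H * C / energy_joules
print(wavelength_m * 1e9, "nm")   # about 91 nm, in the far ultraviolet
```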
By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form,67 also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single proton element hydrogen up to the 118-proton element ununoctium.68 All known isotopes of elements with atomic numbers greater than 82 are radioactive.6970
About 339 nuclides occur naturally on Earth,71 of which 254 (about 75%) have not been observed to decay, and are referred to as "stable isotopes". However, only 90 of these nuclides are stable to all decay, even in theory. Another 164 (bringing the total to 254) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 80 million years, and are long-lived enough to be present from the birth of the solar system. This collection of 288 nuclides are known as primordial nuclides. Finally, an additional 51 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or else as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).72note 2
For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes.73
Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 254 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects.73
The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. The mass number is a simple whole number, and has units of "nucleons." An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons).
The actual mass of an atom at rest is often expressed using the unified atomic mass unit (u), which is also called a dalton (Da). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10⁻²⁷ kg.74 Hydrogen-1, the lightest isotope of hydrogen and the atom with the lowest mass, has an atomic weight of 1.007825 u.75 The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the mass of the atomic mass unit. However, this number will not be an exact whole number except in the case of carbon-12 (see below).76 The heaviest stable atom is lead-208,69 with a mass of 207.9766521 u.77
As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 u, and so a mole of carbon-12 atoms weighs exactly 0.012 kg.74
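The arithmetic connecting these units can be spelled out in a few lines. The sketch below (Python) uses only the figures quoted above — roughly 1.66×10⁻²⁷ kg per atomic mass unit and about 6.022×10²³ atoms per mole:

```python
# Relationship between atomic mass units, moles, and grams, using the numbers
# quoted in the text.

AVOGADRO = 6.022e23   # atoms per mole
U_TO_KG = 1.66e-27    # kilograms per atomic mass unit

# Mass of a single carbon-12 atom (12 u by definition):
mass_c12_kg = 12 * U_TO_KG
print(mass_c12_kg)                # about 1.99e-26 kg

# A mole of carbon-12 atoms therefore weighs about 0.012 kg:
print(mass_c12_kg * AVOGADRO)     # about 0.012 kg

# Number of carbon-12 atoms in one gram:
print(0.001 / mass_c12_kg)        # about 5e22 atoms
```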
Shape and size
Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. However, this assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin.78 On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right).79 Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.80
When subjected to external fields, like an electrical field, the shape of an atom may deviate from that of a sphere. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites.81 Significant ellipsoidal deformations have recently been shown to occur for sulfur ions in pyrite-type compounds.82
Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they can not be viewed using an optical microscope. However, individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width.83 A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice the number of hydrogen atoms.84 A single carat diamond with a mass of 2×10⁻⁴ kg contains about 10 sextillion (10²²) atoms of carbon.note 3 If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple.85
Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm.86
- Alpha decay is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number.
- Beta decay (and electron capture) are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The first is accompanied by the emission of an electron and an antineutrino, while the second causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. An analog of positron beta decay in nuclei that are proton-rich is electron capture, a process even more common than positron emission since it requires less energy. In this type of decay an electron is absorbed by the nucleus, rather than a positron emitted. A neutrino is still emitted in this process, and a proton again changes to a neutron.
- Gamma decay results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. This can occur following the emission of an alpha or a beta particle from radioactive decay.
Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle, or result (through internal conversion) in production of high-speed electrons that are not beta rays, and high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth.86
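The half-life rule corresponds to the exponential formula N(t) = N₀ · (1/2)^(t/T), where T is the half-life. A minimal sketch (Python; carbon-14 and its roughly 5,730-year half-life are used here only as an assumed example):

```python
# Exponential radioactive decay: the fraction of an isotope remaining after
# time t, given its half-life, is 0.5 ** (t / half_life).

def fraction_remaining(t, half_life):
    return 0.5 ** (t / half_life)

HALF_LIFE_C14 = 5730.0   # years; an assumed example value for carbon-14

for years in (5730, 11460, 20000):
    print(years, "years:", round(fraction_remaining(years, HALF_LIFE_C14), 3))
# 5,730 years -> 0.5; 11,460 years (two half-lives) -> 0.25; 20,000 years -> ~0.089
```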
Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin.89
The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field. However, the most dominant contribution comes from spin. Due to the nature of electrons to obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons.90
In ferromagnetic elements such as iron, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field.9091
The nucleus of an atom can also have a net spin. Normally these nuclei are aligned in random directions because of thermal equilibrium. However, for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging.9293
The potential energy of an electron in an atom is negative; its magnitude is greatest (it is most negative) inside the nucleus, and it vanishes, roughly in inverse proportion to the distance, as the distance from the nucleus goes to infinity. In the quantum-mechanical model, a bound electron can only occupy a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, while an electron transition to a higher level results in an excited state.94 The electron's energy rises when n increases because the (average) distance to the nucleus increases. Dependence of the energy on ℓ is caused not by the electrostatic potential of the nucleus, but by interactions between electrons.
For an electron to transition between two different states, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum.95 Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors.96
When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined.97
Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron.98 When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines.99 The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect.100
If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band.101
Valence and bonding behavior
The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells.102 For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. However, many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.103
The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases.104105
Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas.106 Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond.107 Gaseous allotropes exist as well, such as dioxygen and ozone.
At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale.108109 This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior.110
The scanning tunneling microscope is a device for viewing surfaces at the atomic level. It uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would normally be insurmountable. Electrons tunnel through the vacuum between two planar metal electrodes, on each of which is an adsorbed atom, providing a tunneling-current density that can be measured. Scanning one atom (taken as the tip) as it moves past the other (the sample) permits plotting of tip displacement versus lateral separation for a constant current. The calculation shows the extent to which scanning-tunneling-microscope images of an individual atom are visible; it confirms that, at low bias, the microscope images the space-averaged dimensions of the electron orbitals across closely packed energy levels, namely the Fermi-level local density of states.111112
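The extreme distance sensitivity that makes atomic resolution possible comes from the roughly exponential decay of the tunneling current with gap width, I ∝ exp(−2κd) with κ = √(2mφ)/ħ. The sketch below evaluates this for a 4 eV barrier, a typical metal work function assumed here purely for illustration.

```python
# Sketch of the exponential distance dependence of the tunneling current,
# I ~ exp(-2*kappa*d) with kappa = sqrt(2*m*phi)/hbar.  The 4 eV barrier height
# (a typical metal work function) is an assumed, illustrative value.
import math

hbar = 1.055e-34      # reduced Planck constant, J*s
m_e  = 9.109e-31      # electron mass, kg
eV   = 1.602e-19      # joules per electronvolt

phi = 4.0 * eV                               # assumed barrier height
kappa = math.sqrt(2 * m_e * phi) / hbar      # ~1e10 m^-1

# Widening the gap by 0.1 nm changes the current by roughly this factor:
ratio = math.exp(-2 * kappa * 0.1e-9)
print(f"kappa = {kappa:.2e} m^-1; +0.1 nm gap -> current x {ratio:.2f}")
```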
An atom can be ionized by removing one of its electrons. The resulting electric charge causes the trajectory of the ion to bend when it passes through a magnetic field. The radius of the curve followed by a moving ion in the magnetic field is determined by the mass of the ion (for a given charge and speed). The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis.113
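A minimal worked example of the relation behind this separation, r = mv/(qB), is sketched below; the field strength and ion speed are assumed values chosen only to illustrate how neighbouring isotopes follow measurably different radii.

```python
# Radius of curvature of a singly charged ion in a magnetic field: r = m*v/(q*B).
# The speed and field strength are illustrative assumptions.
q = 1.602e-19          # charge of a singly ionized atom, C
u = 1.661e-27          # atomic mass unit, kg
B = 0.5                # assumed field, tesla
v = 1.0e5              # assumed ion speed, m/s

for name, mass_u in [("C-12", 12.000), ("C-13", 13.003)]:
    r = mass_u * u * v / (q * B)
    print(f"{name}: r = {r * 100:.2f} cm")   # the two isotopes differ by ~2 mm
```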
A more area-selective method is electron energy loss spectroscopy, which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry.114
Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element.115 Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth.116
Origin and current state
Atoms form about 4% of the total energy density of the observable Universe, with an average density of about 0.25 atoms/m³.117 Within a galaxy such as the Milky Way, atoms have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³.118 The Sun is believed to be inside the Local Bubble, a region of highly ionized gas, so the density in the solar neighborhood is only about 10³ atoms/m³.119 Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's atoms are concentrated inside stars and the total mass of atoms forms about 10% of the mass of the galaxy.120 (The remainder of the mass is an unknown dark matter.)121
Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. Within about three minutes, Big Bang nucleosynthesis had produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron.122123124
The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma, a gas of positively charged ions (possibly bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang, an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei.125
Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple alpha process) the sequence of elements from carbon up to iron;126 see stellar nucleosynthesis for details.
Isotopes such as lithium-6, as well as some beryllium and boron are generated in space through cosmic ray spallation.127 This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected.
Elements heavier than iron were produced in supernovae through the r-process and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei.128 Elements such as lead formed largely through the radioactive decay of heavier elements.129
Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating.130131 Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay.132
There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere.133 Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions.134135 Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth.136137 Transuranic elements have radioactive lifetimes shorter than the current age of the Earth138 and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust.139 Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore.140
The Earth contains approximately 1.33×10⁵⁰ atoms.141 Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals.142143 This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter.144
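The quoted figure can be reproduced with a back-of-envelope estimate, dividing Earth's mass by an assumed mean atomic mass; the mean of about 27 u used below is an illustrative assumption for a body composed mostly of iron, oxygen, silicon and magnesium.

```python
# Back-of-envelope check of the ~1.33e50 figure: divide Earth's mass by an
# assumed mean atomic mass.  The mean of ~27 u is an illustrative assumption.
earth_mass = 5.97e24          # kg
u = 1.661e-27                 # atomic mass unit, kg
mean_atomic_mass_u = 27       # assumed abundance-weighted average

n_atoms = earth_mass / (mean_atomic_mass_u * u)
print(f"~{n_atoms:.2e} atoms")   # ~1.3e50, consistent with the quoted value
```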
Rare and theoretical forms
While all nuclides with atomic numbers higher than that of lead (82) are known to be radioactive, an "island of stability" has been proposed for some elements with atomic numbers above 103. These superheavy elements may have a nucleus that is relatively stable against radioactive decay.145 The most likely candidate for a stable superheavy atom, unbihexium, has 126 protons and 184 neutrons.146
Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter particle and its corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. (The underlying causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation.) As a result, no antimatter atoms have been discovered in nature.147148 However, in 1996, antihydrogen, the antimatter counterpart of hydrogen, was synthesized at the CERN laboratory in Geneva.149150
Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test the fundamental predictions of physics.151152153
- In the case of hydrogen-1, with a single electron and nucleon, the proton accounts for about 1836/1837, or 99.946%, of the total atomic mass. All other nuclides (isotopes of hydrogen and all other elements) have more nucleons than electrons, so the fraction of mass taken by the nucleus is significantly closer to 100% for all of these types of atoms than for hydrogen-1.
- For more recent updates see Interactive Chart of Nuclides (Brookhaven National Laboratory).
- A carat is 200 milligrams. By definition, carbon-12 has a molar mass of 0.012 kg per mole. The Avogadro constant defines 6×10²³ atoms per mole.
- Leigh, G. J., ed. (1990). International Union of Pure and Applied Chemistry, Commission on the Nomenclature of Inorganic Chemistry, Nomenclature of Organic Chemistry – Recommendations 1990. Oxford: Blackwell Scientific Publications. p. 35. ISBN 0-08-022369-9. "An atom is the smallest unit quantity of an element that is capable of existence whether alone or in chemical combination with other atoms of the same or other elements."
- "Radioactive Decays". Stanford Linear Accelerator Center. 15 June 2009. Archived from the original on 7 June 2009. Retrieved 2009-07-04.
- Liddell, Henry George; Scott, Robert. "A Greek-English Lexicon". Perseus Digital Library.
- Liddell, Henry George; Scott, Robert. "ἄτομος". A Greek-English Lexicon. Perseus Digital Library. Retrieved 2010-06-21.
- Haubold, Hans; Mathai, A.M. (1998). "Microcosmos: From Leucippus to Yukawa". Structure of the Universe. Retrieved 2008-01-17.
- Harrison 2003, pp. 123–139.
- Lederman, Leon M.; Teresi, Dick (1993; reprinted 2006). The God Particle: If the Universe is the Answer, What is the Question?. Boston: Houghton Mifflin Company. ISBN 0-618-71168-6. Lederman provides an excellent discussion of this point, and this debate.
- Ponomarev 1993, pp. 14–15.
- McEvilley 2002, p. 317.
- King 1999, pp. 105–107.
- Moran 2005, p. 146.
- Levere 2001, p. 7.
- Pratt, Vernon (September 28, 2007). "The Mechanical Philosophy". Reason, nature and the human being in the West. Retrieved 2009-06-28.
- Siegfried 2002, pp. 42–55.
- Kemerling, Garth (August 8, 2002). "Corpuscularianism". Philosophical Dictionary. Retrieved 2009-06-17.
- "Lavoisier's Elements of Chemistry". Elements and Atoms. Le Moyne College, Department of Chemistry. Retrieved 2007-12-18.
- Wurtz 1881, pp. 1–2.
- Dalton 1808.
- Roscoe 1895, pp. 129.
- Einstein, Albert (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (PDF). Annalen der Physik (in German) 322 (8): 549–560. Bibcode:1905AnP...322..549E. doi:10.1002/andp.19053220806. Retrieved 2007-02-04.
- Mazo 2002, pp. 1–7.
- Lee, Y.K.; Hoon, K. (1995). "Brownian Motion". Imperial College. Archived from the original on 18 December 2007. Retrieved 2007-12-18.
- Patterson, G. (2007). "Jean Perrin and the triumph of the atomic doctrine". Endeavour 31 (2): 50–53. doi:10.1016/j.endeavour.2007.05.003. PMID 17602746.
- "Periodic Table of the Elements". The International Union of Pure and Applied Chemistry. November 1, 2007. Archived from the original on 2010-04-25. Retrieved 2010-05-14.
- Scerri 2007, pp. 10–17.
- "J.J. Thomson". Nobel Foundation. 1906. Retrieved 2007-12-20.
- Rutherford, E. (1911). "The Scattering of α and β Particles by Matter and the Structure of the Atom". Philosophical Magazine 21 (125): 669–88. doi:10.1080/14786440508637080.
- "Frederick Soddy, The Nobel Prize in Chemistry 1921". Nobel Foundation. Retrieved 2008-01-18.
- Thomson, Joseph John (1913). "Rays of positive electricity". Proceedings of the Royal Society. A 89 (607): 1–20. Bibcode:1913RSPSA..89....1T. doi:10.1098/rspa.1913.0057.
- Stern, David P. (May 16, 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA/Goddard Space Flight Center. Retrieved 2007-12-20.
- Bohr, Niels (11 December 1922). "Niels Bohr, The Nobel Prize in Physics 1922, Nobel Lecture". Nobel Foundation. Retrieved 2008-02-16.
- Pais 1986, pp. 228–230.
- Lewis, Gilbert N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society 38 (4): 762–786. doi:10.1021/ja02261a002.
- Scerri 2007, pp. 205–226.
- Langmuir, Irving (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society 41 (6): 868–934. doi:10.1021/ja02227a002.
- Scully, Marlan O.; Lamb, Willis E.; Barut, Asim (1987). "On the theory of the Stern-Gerlach apparatus". Foundations of Physics 17 (6): 575–583. Bibcode:1987FoPh...17..575S. doi:10.1007/BF01882788.
- Brown, Kevin (2007). "The Hydrogen Atom". MathPages. Retrieved 2007-12-21.
- Harrison, David M. (2000). "The Development of Quantum Mechanics". University of Toronto. Archived from the original on 25 December 2007. Retrieved 2007-12-21.
- Aston, Francis W. (1920). "The constitution of atmospheric neon". Philosophical Magazine 39 (6): 449–55. doi:10.1080/14786440408636058.
- Chadwick, James (December 12, 1935). "Nobel Lecture: The Neutron and Its Properties". Nobel Foundation. Retrieved 2007-12-21.
- "Otto Hahn, Lise Meitner and Fritz Strassmann". Chemical Achievers: The Human Face of the Chemical Sciences. Chemical Heritage Foundation. Archived from the original on 24 October 2009. Retrieved 2009-09-15.
- Meitner, Lise; Frisch, Otto Robert (1939). "Disintegration of uranium by neutrons: a new type of nuclear reaction". Nature 143 (3615): 239. Bibcode:1939Natur.143..239M. doi:10.1038/143239a0.
- Schroeder, M. "Lise Meitner – Zur 125. Wiederkehr Ihres Geburtstages" (in German). Retrieved 2009-06-04.
- Crawford, E.; Sime, Ruth Lewin; Walker, Mark (1997). "A Nobel tale of postwar injustice". Physics Today 50 (9): 26–32. Bibcode:1997PhT....50i..26C. doi:10.1063/1.881933.
- Kullander, Sven (August 28, 2001). "Accelerators and Nobel Laureates". Nobel Foundation. Retrieved 2008-01-31.
- "The Nobel Prize in Physics 1990". Nobel Foundation. October 17, 1990. Retrieved 2008-01-31.
- Demtröder 2002, pp. 39–42.
- Woan 2000, p. 8.
- MacGregor 1992, pp. 33–37.
- Particle Data Group (2002). "The Particle Adventure". Lawrence Berkeley Laboratory. Archived from the original on 4 January 2007. Retrieved 2007-01-03.
- Schombert, James (April 18, 2006). "Elementary Particles". University of Oregon. Retrieved 2007-01-03.
- Jevremovic 2005, p. 63.
- Pfeffer 2000, pp. 330–336.
- Wenner, Jennifer M. (October 10, 2007). "How Does Radioactive Decay Work?". Carleton College. Retrieved 2008-01-09.
- Raymond, David (April 7, 2006). "Nuclear Binding Energies". New Mexico Tech. Archived from the original on December 11, 2006. Retrieved 2007-01-03.
- Mihos, Chris (July 23, 2002). "Overcoming the Coulomb Barrier". Case Western Reserve University. Retrieved 2008-02-13.
- Staff (March 30, 2007). "ABC's of Nuclear Science". Lawrence Berkeley National Laboratory. Archived from the original on 5 December 2006. Retrieved 2007-01-03.
- Makhijani, Arjun; Saleska, Scott (March 2, 2001). "Basics of Nuclear Physics and Fission". Institute for Energy and Environmental Research. Archived from the original on 16 January 2007. Retrieved 2007-01-03.
- Shultis & Faw 2002, pp. 10–17.
- Fewell, M. P. (1995). "The atomic nuclide with the highest mean binding energy". American Journal of Physics 63 (7): 653–658. Bibcode:1995AmJPh..63..653F. doi:10.1119/1.17828.
- Mulliken, Robert S. (1967). "Spectroscopy, Molecular Orbitals, and Chemical Bonding". Science 157 (3784): 13–24. Bibcode:1967Sci...157...13M. doi:10.1126/science.157.3784.13. PMID 5338306.
- Brucat, Philip J. (2008). "The Quantum Atom". University of Florida. Archived from the original on 7 December 2006. Retrieved 2007-01-04.
- Manthey, David (2001). "Atomic Orbitals". Orbital Central. Archived from the original on 10 January 2008. Retrieved 2008-01-21.
- Herter, Terry (2006). "Lecture 8: The Hydrogen Atom". Cornell University. Retrieved 2008-02-14.
- Bell, R. E.; Elliott, L. G. (1950). "Gamma-Rays from the Reaction H1(n,γ)D2 and the Binding Energy of the Deuteron". Physical Review 79 (2): 282–285. Bibcode:1950PhRv...79..282B. doi:10.1103/PhysRev.79.282.
- Smirnov 2003, pp. 249–272.
- Matis, Howard S. (August 9, 2000). "The Isotopes of Hydrogen". Guide to the Nuclear Wall Chart. Lawrence Berkeley National Lab. Archived from the original on 18 December 2007. Retrieved 2007-12-21.
- Weiss, Rick (October 17, 2006). "Scientists Announce Creation of Atomic Element, the Heaviest Yet". Washington Post. Retrieved 2007-12-21.
- Sills 2003, pp. 131–134.
- Dumé, Belle (April 23, 2003). "Bismuth breaks half-life record for alpha decay". Physics World. Archived from the original on 14 December 2007. Retrieved 2007-12-21.
- Lindsay, Don (July 30, 2000). "Radioactives Missing From The Earth". Don Lindsay Archive. Archived from the original on 28 April 2007. Retrieved 2007-05-23.
- Tuli, Jagdish K. (April 2005). "Nuclear Wallet Cards". National Nuclear Data Center, Brookhaven National Laboratory. Retrieved 2011-04-16.
- CRC Handbook (2002).
- Mills et al. (1993).
- Chieh, Chung (January 22, 2001). "Nuclide Stability". University of Waterloo. Retrieved 2007-01-04.
- "Atomic Weights and Isotopic Compositions for All Elements". National Institute of Standards and Technology. Archived from the original on 31 December 2006. Retrieved 2007-01-04.
- Audi, G.; Wapstra, A.H.; Thibault, C. (2003). "The Ame2003 atomic mass evaluation (II)". Nuclear Physics A 729 (1): 337–676. Bibcode:2003NuPhA.729..337A. doi:10.1016/j.nuclphysa.2003.11.003. Retrieved 2008-02-07.
- Shannon, R. D. (1976). "Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides". Acta Crystallographica A 32 (5): 751. Bibcode:1976AcCrA..32..751S. doi:10.1107/S0567739476001551.
- Dong, Judy (1998). "Diameter of an Atom". The Physics Factbook. Archived from the original on 4 November 2007. Retrieved 2007-11-19.
- Zumdahl (2002).
- Bethe, H. (1929). "Termaufspaltung in Kristallen". Annalen der Physik, 5. Folge 3: 133.
- Birkholz, M.; Rudert, R. (2008). "Interatomic distances in pyrite-structure disulfides – a case for ellipsoidal modeling of sulfur ions". Physica status solidi b 245 (9): 1858. Bibcode:2008PSSBR.245.1858B. doi:10.1002/pssb.200879532.
- Staff (2007). "Small Miracles: Harnessing nanotechnology". Oregon State University. Retrieved 2007-01-07.—describes the width of a human hair as 105 nm and 10 carbon atoms as spanning 1 nm.
- Padilla et al. (2002:32)—"There are 2,000,000,000,000,000,000,000 (that's 2 sextillion) atoms of oxygen in one drop of water—and twice as many atoms of hydrogen."
- Feynman 1995, p. 5.
- "Radioactivity". Splung.com. Archived from the original on 4 December 2007. Retrieved 2007-12-19.
- L'Annunziata 2003, pp. 3–56.
- Firestone, Richard B. (May 22, 2000). "Radioactive Decay Modes". Berkeley Laboratory. Retrieved 2007-01-07.
- Hornak, J. P. (2006). "Chapter 3: Spin Physics". The Basics of NMR. Rochester Institute of Technology. Archived from the original on 3 February 2007. Retrieved 2007-01-07.
- Schroeder, Paul A. (February 25, 2000). "Magnetic Properties". University of Georgia. Archived from the original on 2007-04-29. Retrieved 2007-01-07.
- Goebel, Greg (September 1, 2007). "[4.3] Magnetic Properties of the Atom". Elementary Quantum Physics. In The Public Domain website. Retrieved 2007-01-07.
- Yarris, Lynn (Spring 1997). "Talking Pictures". Berkeley Lab Research Review. Archived from the original on 13 January 2008. Retrieved 2008-01-09.
- Liang & Haacke 1999, pp. 412–426.
- Zeghbroeck, Bart J. Van (1998). "Energy levels". Shippensburg University. Archived from the original on January 15, 2005. Retrieved 2007-12-23.
- Fowles (1989:227–233).
- Martin, W. C.; Wiese, W. L. (May 2007). "Atomic Spectroscopy: A Compendium of Basic Ideas, Notation, Data, and Formulas". National Institute of Standards and Technology. Archived from the original on 8 February 2007. Retrieved 2007-01-08.
- "Atomic Emission Spectra — Origin of Spectral Lines". Avogadro Web Site. Retrieved 2006-08-10.
- Fitzpatrick, Richard (February 16, 2007). "Fine structure". University of Texas at Austin. Retrieved 2008-02-14.
- Weiss, Michael (2001). "The Zeeman Effect". University of California-Riverside. Archived from the original on 2 February 2008. Retrieved 2008-02-06.
- Beyer 2003, pp. 232–236.
- Watkins, Thayer. "Coherence in Stimulated Emission". San José State University. Archived from the original on 12 January 2008. Retrieved 2007-12-23.
- Reusch, William (July 16, 2007). "Virtual Textbook of Organic Chemistry". Michigan State University. Retrieved 2008-01-11.
- "Covalent bonding – Single bonds". chemguide. 2000.
- Husted, Robert; et al (December 11, 2003). "Periodic Table of the Elements". Los Alamos National Laboratory. Archived from the original on 10 January 2008. Retrieved 2008-01-11.
- Baum, Rudy (2003). "It's Elemental: The Periodic Table". Chemical & Engineering News. Retrieved 2008-01-11.
- Goodstein 2002, pp. 436–438.
- Brazhkin, Vadim V. (2006). "Metastable phases, phase transformations, and phase diagrams in physics and chemistry". Physics-Uspekhi 49 (7): 719–24. Bibcode:2006PhyU...49..719B. doi:10.1070/PU2006v049n07ABEH006013.
- Myers 2003, p. 85.
- Staff (October 9, 2001). "Bose-Einstein Condensate: A New Form of Matter". National Institute of Standards and Technology. Archived from the original on 3 January 2008. Retrieved 2008-01-16.
- Colton, Imogen; Fyffe, Jeanette (February 3, 1999). "Super Atoms from Bose-Einstein Condensation". The University of Melbourne. Archived from the original on August 29, 2007. Retrieved 2008-02-06.
- Jacox, Marilyn; Gadzuk, J. William (November 1997). "Scanning Tunneling Microscope". National Institute of Standards and Technology. Archived from the original on 7 January 2008. Retrieved 2008-01-11.
- "The Nobel Prize in Physics 1986". The Nobel Foundation. Retrieved 2008-01-11.—in particular, see the Nobel lecture by G. Binnig and H. Rohrer.
- Jakubowski, N.; Moens, Luc; Vanhaecke, Frank (1998). "Sector field mass spectrometers in ICP-MS". Spectrochimica Acta Part B: Atomic Spectroscopy 53 (13): 1739–63. Bibcode:1998AcSpe..53.1739J. doi:10.1016/S0584-8547(98)00222-5.
- Müller, Erwin W.; Panitz, John A.; McLane, S. Brooks (1968). "The Atom-Probe Field Ion Microscope". Review of Scientific Instruments 39 (1): 83–86. Bibcode:1968RScI...39...83M. doi:10.1063/1.1683116.
- Lochner, Jim; Gibb, Meredith; Newman, Phil (April 30, 2007). "What Do Spectra Tell Us?". NASA/Goddard Space Flight Center. Archived from the original on 16 January 2008. Retrieved 2008-01-03.
- Winter, Mark (2007). "Helium". WebElements. Archived from the original on 30 December 2007. Retrieved 2008-01-03.
- Hinshaw, Gary (February 10, 2006). "What is the Universe Made Of?". NASA/WMAP. Archived from the original on 31 December 2007. Retrieved 2008-01-07.
- Choppin, Liljenzin & Rydberg 2001, p. 441.
- Davidsen, Arthur F. (1993). "Far-Ultraviolet Astronomy on the Astro-1 Space Shuttle Mission". Science 259 (5093): 327–34. Bibcode:1993Sci...259..327D. doi:10.1126/science.259.5093.327. PMID 17832344.
- Lequeux 2005, p. 4.
- Smith, Nigel (January 6, 2000). "The search for dark matter". Physics World. Archived from the original on 16 February 2008. Retrieved 2008-02-14.
- Croswell, Ken (1991). "Boron, bumps and the Big Bang: Was matter spread evenly when the Universe began? Perhaps not; the clues lie in the creation of the lighter elements such as boron and beryllium". New Scientist (1794): 42. Archived from the original on 7 February 2008. Retrieved 2008-01-14.
- Copi, Craig J.; Schramm, DN; Turner, MS (1995). "Big-Bang Nucleosynthesis and the Baryon Density of the Universe". Science 267 (5195): 192–99. arXiv:astro-ph/9407006. Bibcode:1995Sci...267..192C. doi:10.1126/science.7809624. PMID 7809624.
- Hinshaw, Gary (December 15, 2005). "Tests of the Big Bang: The Light Elements". NASA/WMAP. Archived from the original on 17 January 2008. Retrieved 2008-01-13.
- Abbott, Brian (May 30, 2007). "Microwave (WMAP) All-Sky Survey". Hayden Planetarium. Retrieved 2008-01-13.
- Hoyle, F. (1946). "The synthesis of the elements from hydrogen". Monthly Notices of the Royal Astronomical Society 106: 343–83. Bibcode:1946MNRAS.106..343H.
- Knauth, D. C.; Knauth, D. C.; Lambert, David L.; Crane, P. (2000). "Newly synthesized lithium in the interstellar medium". Nature 405 (6787): 656–58. doi:10.1038/35015028. PMID 10864316.
- Mashnik, Stepan G. (2000). "On Solar System and Cosmic Rays Nucleosynthesis and Spallation Processes". arXiv:astro-ph/0008382 astro-ph.
- Kansas Geological Survey (May 4, 2005). "Age of the Earth". University of Kansas. Retrieved 2008-01-14.
- Manuel 2001, pp. 407–430, 511–519.
- Dalrymple, G. Brent (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Geological Society, London, Special Publications 190 (1): 205–21. Bibcode:2001GSLSP.190..205D. doi:10.1144/GSL.SP.2001.190.01.14. Retrieved 2008-01-14.
- Anderson, Don L.; Foulger, G. R.; Meibom, Anders (September 2, 2006). "Helium: Fundamental models". MantlePlumes.org. Archived from the original on 8 February 2007. Retrieved 2007-01-14.
- Pennicott, Katie (May 10, 2001). "Carbon clock could show the wrong time". PhysicsWeb. Archived from the original on 15 December 2007. Retrieved 2008-01-14.
- Yarris, Lynn (July 27, 2001). "New Superheavy Elements 118 and 116 Discovered at Berkeley Lab". Berkeley Lab. Archived from the original on 9 January 2008. Retrieved 2008-01-14.
- Diamond, H; et al. (1960). "Heavy Isotope Abundances in Mike Thermonuclear Device". Physical Review 119 (6): 2000–04. Bibcode:1960PhRv..119.2000D. doi:10.1103/PhysRev.119.2000.
- Poston Sr., John W. (March 23, 1998). "Do transuranic elements such as plutonium ever occur naturally?". Scientific American. Retrieved 2008-01-15.
- Keller, C. (1973). "Natural occurrence of lanthanides, actinides, and superheavy elements". Chemiker Zeitung 97 (10): 522–30. OSTI 4353086.
- Zaider & Rossi 2001, p. 17.
- Manuel 2001, pp. 407–430, 511–519.
- "Oklo Fossil Reactors". Curtin University of Technology. Archived from the original on 18 December 2007. Retrieved 2008-01-15.
- Weisenberger, Drew. "How many atoms are there in the world?". Jefferson Lab. Retrieved 2008-01-16.
- Pidwirny, Michael. "Fundamentals of Physical Geography". University of British Columbia Okanagan. Archived from the original on 21 January 2008. Retrieved 2008-01-16.
- Anderson, Don L. (2002). "The inner inner core of Earth". Proceedings of the National Academy of Sciences 99 (22): 13966–68. Bibcode:2002PNAS...9913966A. doi:10.1073/pnas.232565899. PMC 137819. PMID 12391308.
- Pauling 1960, pp. 5–10.
- Anonymous (October 2, 2001). "Second postcard from the island of stability". CERN Courier. Archived from the original on 3 February 2008. Retrieved 2008-01-14.
- Jacoby, Mitch (2006). "As-yet-unsynthesized superheavy atom should form a stable diatomic molecule with fluorine". Chemical & Engineering News 84 (10): 19. doi:10.1021/cen-v084n010.p019a.
- Koppes, Steve (March 1, 1999). "Fermilab Physicists Find New Matter-Antimatter Asymmetry". University of Chicago. Retrieved 2008-01-14.
- Cromie, William J. (August 16, 2001). "A lifetime of trillionths of a second: Scientists explore antimatter". Harvard University Gazette. Retrieved 2008-01-14.
- Hijmans, Tom W. (2002). "Particle physics: Cold antihydrogen". Nature 419 (6906): 439–40. doi:10.1038/419439a. PMID 12368837.
- Staff (October 30, 2002). "Researchers 'look inside' antimatter". BBC News. Retrieved 2008-01-14.
- Barrett, Roger (1990). "The Strange World of the Exotic Atom". New Scientist (1728): 77–115. Archived from the original on 21 December 2007. Retrieved 2008-01-04.
- Indelicato, Paul (2004). "Exotic Atoms". Physica Scripta T112 (1): 20–26. arXiv:physics/0409058. Bibcode:2004PhST..112...20I. doi:10.1238/Physica.Topical.112a00020.
- Ripin, Barrett H. (July 1998). "Recent Experiments on Exotic Atoms". American Physical Society. Retrieved 2008-02-15.
- L'Annunziata, Michael F. (2003). Handbook of Radioactivity Analysis. Academic Press. ISBN 0-12-436603-1. OCLC 16212955.
- Beyer, H. F.; Shevelko, V. P. (2003). Introduction to the Physics of Highly Charged Ions. CRC Press. ISBN 0-7503-0481-2. OCLC 47150433.
- Choppin, Gregory R.; Liljenzin, Jan-Olov; Rydberg, Jan (2001). Radiochemistry and Nuclear Chemistry. Elsevier. ISBN 0-7506-7463-6. OCLC 162592180.
- Dalton, J. (1808). A New System of Chemical Philosophy, Part 1. London and Manchester: S. Russell.
- Demtröder, Wolfgang (2002). Atoms, Molecules and Photons: An Introduction to Atomic- Molecular- and Quantum Physics (1st ed.). Springer. ISBN 3-540-20631-0. OCLC 181435713.
- Feynman, Richard (1995). Six Easy Pieces. The Penguin Group. ISBN 978-0-14-027666-4. OCLC 40499574.
- Fowles, Grant R. (1989). Introduction to Modern Optics. Courier Dover Publications. ISBN 0-486-65957-7. OCLC 18834711.
- Gangopadhyaya, Mrinalkanti (1981). Indian Atomism: History and Sources. Atlantic Highlands, New Jersey: Humanities Press. ISBN 0-391-02177-X. OCLC 10916778.
- Goodstein, David L. (2002). States of Matter. Courier Dover Publications. ISBN 0-13-843557-X.
- Harrison, Edward Robert (2003). Masks of the Universe: Changing Ideas on the Nature of the Cosmos. Cambridge University Press. ISBN 0-521-77351-2. OCLC 50441595.
- Iannone, A. Pablo (2001). Dictionary of World Philosophy. Routledge. ISBN 0-415-17995-5. OCLC 44541769.
- Jevremovic, Tatjana (2005). Nuclear Principles in Engineering. Springer. ISBN 0-387-23284-2. OCLC 228384008.
- King, Richard (1999). Indian philosophy: an introduction to Hindu and Buddhist thought. Edinburgh University Press. ISBN 0-7486-0954-7.
- Lequeux, James (2005). The Interstellar Medium. Springer. ISBN 3-540-21326-0. OCLC 133157789.
- Levere, Trevor, H. (2001). Transforming Matter – A History of Chemistry for Alchemy to the Buckyball. The Johns Hopkins University Press. ISBN 0-8018-6610-3.
- Liang, Z.-P.; Haacke, E. M. (1999). Webster, J. G., ed. Encyclopedia of Electrical and Electronics Engineering: Magnetic Resonance Imaging (PDF). vol. 2. John Wiley & Sons. pp. 412–26. ISBN 0-471-13946-7. Retrieved 2008-01-09.
- McEvilley, Thomas (2002). The shape of ancient thought: comparative studies in Greek and Indian philosophies. Allworth Press. ISBN 1-58115-203-5.
- MacGregor, Malcolm H. (1992). The Enigmatic Electron. Oxford University Press. ISBN 0-19-521833-7. OCLC 223372888.
- Manuel, Oliver (2001). Origin of Elements in the Solar System: Implications of Post-1957 Observations. Springer. ISBN 0-306-46562-0. OCLC 228374906.
- Mazo, Robert M. (2002). Brownian Motion: Fluctuations, Dynamics, and Applications. Oxford University Press. ISBN 0-19-851567-7. OCLC 48753074.
- Mills, Ian; Cvitaš, Tomislav; Homann, Klaus; Kallay, Nikola; Kuchitsu, Kozo (1993). Quantities, Units and Symbols in Physical Chemistry (2nd ed.). Oxford: International Union of Pure and Applied Chemistry, Commission on Physiochemical Symbols Terminology and Units, Blackwell Scientific Publications. ISBN 0-632-03583-8. OCLC 27011505.
- Moran, Bruce T. (2005). Distilling Knowledge: Alchemy, Chemistry, and the Scientific Revolution. Harvard University Press. ISBN 0-674-01495-2.
- Myers, Richard (2003). The Basics of Chemistry. Greenwood Press. ISBN 0-313-31664-3. OCLC 50164580.
- Padilla, Michael J.; Miaoulis, Ioannis; Cyr, Martha (2002). Prentice Hall Science Explorer: Chemical Building Blocks. Upper Saddle River, New Jersey USA: Prentice-Hall, Inc. ISBN 0-13-054091-9. OCLC 47925884.
- Pais, Abraham (1986). Inward Bound: Of Matter and Forces in the Physical World. New York: Oxford University Press. ISBN 0-19-851971-0.
- Pauling, Linus (1960). The Nature of the Chemical Bond. Cornell University Press. ISBN 0-8014-0333-2. OCLC 17518275.
- Pfeffer, Jeremy I.; Nir, Shlomo (2000). Modern Physics: An Introductory Text. Imperial College Press. ISBN 1-86094-250-4. OCLC 45900880.
- Ponomarev, Leonid Ivanovich (1993). The Quantum Dice. CRC Press. ISBN 0-7503-0251-8. OCLC 26853108.
- Roscoe, Henry Enfield (1895). John Dalton and the Rise of Modern Chemistry. Century science series. New York: Macmillan. Retrieved 2011-04-03.
- Scerri, Eric R. (2007). The periodic table: its story and its significance. Oxford University Press US. ISBN 0-19-530573-6.
- Shultis, J. Kenneth; Faw, Richard E. (2002). Fundamentals of Nuclear Science and Engineering. CRC Press. ISBN 0-8247-0834-2. OCLC 123346507.
- Siegfried, Robert (2002). From Elements to Atoms: A History of Chemical Composition. DIANE. ISBN 0-87169-924-9. OCLC 186607849.
- Sills, Alan D. (2003). Earth Science the Easy Way. Barron's Educational Series. ISBN 0-7641-2146-4. OCLC 51543743.
- Smirnov, Boris M. (2003). Physics of Atoms and Ions. Springer. ISBN 0-387-95550-X.
- Teresi, Dick (2003). Lost Discoveries: The Ancient Roots of Modern Science. Simon & Schuster. pp. 213–214. ISBN 0-7432-4379-X.
- Various (2002). Lide, David R., ed. Handbook of Chemistry & Physics (88th ed.). CRC. ISBN 0-8493-0486-5. OCLC 179976746. Archived from the original on 23 May 2008. Retrieved 2008-05-23.
- Woan, Graham (2000). The Cambridge Handbook of Physics. Cambridge University Press. ISBN 0-521-57507-9. OCLC 224032426.
- Wurtz, Charles Adolphe (1881). The Atomic Theory. New York: D. Appleton and company. ISBN 0-559-43636-X.
- Zaider, Marco; Rossi, Harald H. (2001). Radiation Science for Physicians and Public Health Workers. Springer. ISBN 0-306-46403-9. OCLC 44110319.
- Zumdahl, Steven S. (2002). Introductory Chemistry: A Foundation (5th ed.). Houghton Mifflin. ISBN 0-618-34342-3. OCLC 173081482. Archived from the original on 4 March 2008. Retrieved 2008-02-05.
- Francis, Eden (2002). "Atomic Size". Clackamas Community College. Archived from the original on 4 February 2007. Retrieved 2007-01-09.
- Freudenrich, Craig C. "How Atoms Work". How Stuff Works. Archived from the original on 8 January 2007. Retrieved 2007-01-09.
- "The Atom". Free High School Science Texts: Physics. Wikibooks. Retrieved 2010-07-10.
- Anonymous (2007). "The atom". Science aid+. Retrieved 2010-07-10.—a guide to the atom for teens.
- Anonymous (2006-01-03). "Atoms and Atomic Structure". BBC. Archived from the original on 2 January 2007. Retrieved 2007-01-11.
- Various (2006-01-03). "Physics 2000, Table of Contents". University of Colorado. Archived from the original on 14 January 2008. Retrieved 2008-01-11.
- Various (2006-02-03). "What does an atom look like?". University of Karlsruhe. Retrieved 2008-05-12.
Temperature and light are important abiotic stimuli that provide plants with diurnal and seasonal cues which enable them to adapt to environmental change. The autumn-to-winter decline in temperature and light that occurs in temperate regions acts as a cue enabling plants to anticipate the change in season and consequently prepare for the arrival of freezing temperatures by inducing or enhancing cold stress tolerance mechanisms. Wheat and related temperate cereal species, which are able to grow under widely different climatic conditions (Dubcovsky and Dvorak, 2007), show broad genetic variability with respect to the capacity to withstand chilling and freezing conditions (Fowler and Gusta, 1979; Monroy et al., 2007); by contrast, plant species such as rice, maize and tomato are damaged by chilling temperatures and have no capacity to withstand freezing. Among the temperate cereals (e.g., barley, rye and wheat), there are both winter-hardy and winter-sensitive varieties. Winter-hardy cereals are able to withstand quite extreme subzero temperatures, while tender varieties are unable to withstand such conditions. However, the capacity to withstand subzero temperatures is not constitutive, and even hardy plants require a period of exposure to low, non-freezing temperatures to acquire freezing tolerance. This process is referred to as cold acclimation. Non-acclimated wheat of the cultivar Norstar, for example, is killed at freezing temperatures of about −5 °C, while after cold acclimation plants of the same cultivar can survive temperatures as low as −20 °C (Jaglo et al., 2001).
Although both cold acclimation and vernalization are responses to low temperature, the duration of cold exposure required to initiate these responses is quite distinct. A rapid induction of cold-protective proteins is essential for surviving the sometimes sudden declines in temperature that may occur as winter approaches, and only 1 or 2 days of low, non-freezing temperatures are usually sufficient to bring about cold acclimation (Sung and Amasino, 2005). This capacity is rapidly lost, however, on a return of warmer conditions. On the other hand, given that temperature often fluctuates in the autumn, it is vitally important that short, pronounced cold spells followed by a return of warmer temperatures are not mistaken for the end of winter. Thus, plants require an extended period of cold before they become fully vernalized and competent to flower. What is more, plants retain a ‘memory’ of this extended exposure to cold and remain committed to flowering as temperatures rise in the spring (Sung and Amasino, 2006).
Most temperate cereals, be they winter or spring varieties, exhibit some degree of chilling tolerance. There is some debate about whether this is a constitutive characteristic or whether it is in part, or completely, induced upon exposure to cold (Jan et al., 2009). Cold acclimation and the acquisition of freezing tolerance, on the other hand, require the orchestration of many different, seemingly disparate physiological and biochemical changes (Steponkus, 1984; Thomashow, 1999; Ouellet et al., 2001). These changes are, at least in part, mediated through the differential expression of many genes (Guy et al., 1985; Thomashow, 1999; Monroy et al., 2007; Kosova et al., 2008; Kosmala et al., 2009). These genes are thought to be induced either by cold per se or by the relative state of dehydration that is brought about by cold stress (Griffith and Yaish, 2004). Many of these cold-regulated genes have been identified by transcriptome analysis. In Arabidopsis, for example, several hundred transcripts have been reported to respond to cold (Chen et al., 2002; Seki et al., 2002; Provart et al., 2003; Vogel et al., 2005). Similarly, in the temperate grasses a large number of genes have been shown to be cold responsive (Zhang et al., 2009, perennial ryegrass; Svensson et al., 2006, barley). In wheat, Monroy et al. (2007) identified over 450 genes that were regulated by cold. Although these genes have been identified on the basis of their response to a cold stimulus, in many cases their specific function has not been discovered and their role in cold acclimation, if any, remains unknown (Tsuda et al., 2000). However, there are a good number of cold-regulated genes that have been assigned specific functions, either as transcription factors that act upstream in cold acclimation or as effector molecules that act to counter the potentially damaging effects of cold stress.
In this article, we provide a general overview of the present understanding of cold acclimation and the acquisition of freezing tolerance. In addition, we provide information obtained from two separate microarray-based studies of wheat carried out in our laboratory. In these experiments, we explored the effect of low temperature on transcriptome reprogramming in three wheat cultivars: two winter varieties (Harnesk and Solstice) and a spring variety (Paragon). In one experiment, referred to hereafter as the ‘cold-shock experiment’, plants were rapidly transferred from 16 to 4 °C and held for 2 days; this duration was chosen because many COR genes have been reported to accumulate maximally within this period (Ganeshan et al., 2008). In a second experiment, designed to mimic a natural autumn-to-winter transition, plants were exposed to a gradual decline in temperature and light (quality and day length) over several weeks (hereafter referred to as the ‘cold acclimation experiment’). See Winfield et al. (2009) for details of experimental design and procedure.
Global changes in transcripts upon exposure to cold
It has long been known that many changes in gene expression occur when plants are exposed to cold stress (Guy et al., 1985; Thomashow, 1999). In microarray-based analyses of the Arabidopsis transcriptome, it has been estimated that between 4% (Lee et al., 2005; 24 h of cold exposure) and 20% (Hannah et al., 2005; up to 14 days of cold exposure) of the genome is cold regulated. In a microarray study of spring and winter wheat varieties, Monroy et al. (2007) reported that c. 8% of features showed altered levels of expression in response to cold (>2-fold change). However, in the latter case, the features on the array were highly selected to represent regulatory genes and genes involved in signal transduction, and so the results are not directly comparable with those obtained using more general array platforms. It has also been shown that both up- and down-regulation of gene expression occur, but that, generally, more genes are up-regulated than down-regulated. In Arabidopsis, Fowler and Thomashow (2002) reported that of 302 genes found to be cold responsive, 88 (27%) decreased in abundance.
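For readers unfamiliar with how such percentages are usually derived, the sketch below shows the basic two-fold-change filtering step applied to normalized expression values. The probe identifiers and values are invented placeholders; a real analysis would also apply a statistical test with multiple-testing correction.

```python
# Minimal sketch of two-fold-change filtering on (already normalized) expression
# values.  The data are invented placeholders; a real analysis would add a
# statistical test and multiple-testing correction.
import math

# mean expression per probe set: control (warm) vs cold-treated
expression = {
    "probe_0001": (820.0, 2150.0),
    "probe_0002": (410.0, 395.0),
    "probe_0003": (1300.0, 310.0),
}

up, down = [], []
for probe, (control, cold) in expression.items():
    log2fc = math.log2(cold / control)
    if log2fc >= 1:          # at least two-fold up
        up.append(probe)
    elif log2fc <= -1:       # at least two-fold down
        down.append(probe)

print(f"up-regulated: {up}; down-regulated: {down}")
print(f"fraction responding: {(len(up) + len(down)) / len(expression):.1%}")
```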
In our study, using the Affymetrix GeneChip Wheat Array (62 000 features representing approximately 55 000 transcripts), the number of transcripts changing after a cold shock (2 days at 4 °C) was broadly similar for all three varieties, with 2.85% (Harnesk), 3.46% (Paragon) and 2.30% (Solstice) of the wheat genome, as represented on the array, showing a greater than twofold change (Figure 1a). Overall, this represents 3113 features on the array (up = 1711, down = 1402) that, for at least one of the cultivars, indicated a response to a cold shock. However, relatively few of these transcripts showed a common response profile in all three varieties (Figure 1b). One might assume that the transcripts that responded in a similar fashion in all three varieties (394 transcripts) are from genes involved in basal responses to cold; i.e., they may be the genes that determine basal responses to chilling. The responses that united the two winter varieties but distinguished them from Paragon (217 transcripts) would more likely be those that determine hardiness and the ability to tolerate freezing conditions; they might also be part of the armoury for providing better chilling tolerance.
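The three-way comparison behind Figure 1b amounts to simple set arithmetic on the lists of responsive probe sets. A sketch with invented identifiers is shown below.

```python
# Sketch of the three-way overlap behind a Venn diagram of cold-responsive
# probe sets.  The identifiers are invented placeholders.
harnesk  = {"p01", "p02", "p03", "p05"}
paragon  = {"p01", "p04", "p05"}
solstice = {"p01", "p02", "p05", "p06"}

common_to_all   = harnesk & paragon & solstice      # putative basal chilling response
winter_specific = (harnesk & solstice) - paragon    # candidates linked to winter hardiness

print(f"responding in all three varieties: {sorted(common_to_all)}")
print(f"shared by the two winter varieties only: {sorted(winter_specific)}")
```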
Figure 1. (a) Statistically significant two-fold or greater changes (P = 0.05) in transcript abundance in plants exposed to 4 °C for 2 days. (b) Venn diagram showing the number of genes expressed in common between the three varieties after exposure to a ‘cold shock’.
Surprisingly, these simple assumptions are not completely borne out by a study of the two gene lists. Notably absent from the list uniting Harnesk and Solstice, the two winter varieties, were many of the antifreeze proteins (see later section) that obviously play a major role in freezing tolerance, while transcripts for ice recrystallization inhibitors were induced in all three cultivars when one might not have expected to see them in Paragon, the spring variety.
As highlighted in several recent articles (Fowler, 2008; Ganeshan et al., 2008; Campoli et al., 2009), a weakness of the majority of research to date is that it has been based on responses to rapid, dramatic changes in temperature that do not in any way represent conditions found in nature. In such studies, plants have been directly transferred from favourable conditions for active growth (c. 20 °C) and placed at low nonfreezing temperatures (usually 4 or 2 °C)—our ‘shock’ experiment was of this kind and permitted us to make comparisons with the results from other such studies. The changes observed under such conditions are unlikely to truly reflect those that occur when plants experience a gradual decline in light and temperature more typical of the change from autumn to winter. Gene lists of candidate cold-responsive genes obtained from such cold-shock studies might also be misleading, therefore. For instance, in our shock experiment, we saw high levels of induction of some of the early light-inducible proteins (ELIPs) that might have been interpreted as a cold response given no other information. However, when plants were exposed to a slow decline in temperature and light, little or no response was seen from these genes (Figure 2). Experimental design, therefore, is fundamentally important in being able to identify candidate cold-responsive genes. A great deal of attention has been paid to the events occurring when plants are exposed to a rapid fall in temperature. Much less attention has been directed towards the elucidation of the molecular mechanisms underlying responses to gradual changes in ambient temperature that might be more representative of the conditions experienced during a typical autumn—winter progression.
Figure 2. An ELIP showing a distinct response to a ‘cold shock’, but no response to a slow decline in temperature. C, crown; L, leaf; H, Harnesk; P, Paragon; S, Solstice.
A further criticism of many of the studies carried out to date is that analysis has been performed on a single tissue, usually leaf tissue. However, Ganeshan et al. (2008) clearly show that cold-responsive genes are differentially expressed between different tissues (crown and leaf) and point out that analysing only the changes that occur in a single tissue will provide an incomplete picture of the events taking place in cold-treated plants. What is more, in winter cereals, it has been shown that whole-plant survival is dependent on the survival of specific tissues within the crown (Tanino and Mckersie, 1985; Livingston et al., 2006). The crown contains the meristematic regions from which all other tissues arise. The mature leaf tissue may well die back after suffering cold damage, but the immature, meristematic tissue of the crown must survive to re-establish growth when permissive conditions return. Our cold acclimation experiment was designed with both these criticisms in mind: temperature was reduced gradually over several weeks, and leaf and crown tissue were assayed separately so that we could identify differential responses.
In our cold acclimation experiment, global changes in transcript abundance were markedly different between the two tissues, and between the spring and winter varieties (Figure 3a). That is, in comparisons of expression pattern between crown and leaf in any single variety, Harnesk and Solstice experienced many more changes in the leaves than in the crown. Paragon, the spring variety, showed the opposite relationship; that is, there were more changes in the crown than in the leaves. Comparing the expression patterns between the varieties, Paragon showed many more changes in crown tissue than the winter varieties and, conversely, showed many fewer changes in the leaf tissue. This last result is exactly opposite to what we saw in the cold-shock experiment. The cold-responsive genes in the two tissues were, in most cases, quite different (Figure 3b). Thus, the criticisms of experimental design put forward by Ganeshan et al. (2008) and Campoli et al. (2009) are supported by our results.
Figure 3. (a) Statistically significant two-fold or greater changes (P = 0.05) in transcript abundance in plants exposed to a gradual decline in temperature, light intensity and day length; (b) Venn diagrams showing the number of statistically significant two-fold or greater changes (P = 0.05) in transcript abundance in plants exposed to a gradual decline in temperature, light intensity and day length. The values refer to genes that changed in expression during the period 21–63 days (all comparisons were made: 21–35, 35–63 and 21–63 days).
The differential response in crown tissue can probably be accounted for by the phase of growth in which the plants find themselves. Paragon, the spring wheat, possessed a highly expressed VRN1 gene and was by definition committed to flowering: evidence for this is given in Winfield et al. (2009). Thus, the meristems in the crown of this variety were undergoing the many changes associated with the transition from vegetative to floral growth. The winter varieties, on the other hand, initially had very low levels of VRN1 transcript and so would not have been committed to flowering. Although VRN1 transcript accumulated across the course of the experiment, the vernalization requirement would not have been satisfied until its end. Thus, the crown tissue of the winter varieties remained in the vegetative phase with fewer morphological and physiological changes taking place.
Interestingly, although the crown tissue is vital in terms of over-winter survival because it is the site of spring regrowth, in Harnesk and Solstice the majority of significant changes in transcript abundance occurred in the leaves (Figure 3a and b). This might indicate that effector molecules are produced in the leaves and then transported to the crown, or that the protein products of only a relatively small number of genes are required to protect the tissues of the crown. However, given the much greater number of genes changing in the leaves of the winter wheat varieties compared to Paragon, it appears clear that their different capacities to cold acclimate and tolerate chilling and freezing temperatures are underpinned by the degree of transcriptome reprogramming that they are able to bring about as temperature drops.
The number of transcripts showing changes in abundance was much greater between the fifth and ninth weeks than between the third and fifth weeks. This may simply be an artefact of the criterion used for the selection of genes (i.e., transcripts must show a twofold change to be called), such that genes induced early had not yet accumulated above the threshold, or it may genuinely show that many genes were induced later in the time course. The latter interpretation seems the more likely because, in the earlier part of the experiment, temperatures may not have fallen below the threshold required for induction of cold-responsive genes; by the fifth week the average day/night temperature was 12 °C. That is, as temperatures gradually fall over an extended period, as might occur during autumn, plants respond by initiating a series of events that put in place those mechanisms required to protect them from potential damage. The temperature at which these events are initiated, the threshold temperature, is well above freezing and quite different between species and between cultivars of the same species (Fowler, 2008). For example, the cold-hardy rye cultivar Puma has a threshold temperature of 18 °C. Norstar, a winter wheat, has an inductive threshold of c. 15 °C and Manitou, a spring wheat, an inductive threshold of c. 8 °C (Fowler, 2008). This has the important corollary that hardy species/cultivars begin preparing for the stresses of winter earlier than tender species/cultivars. However, plants cannot fully acclimate until temperature drops well below the induction threshold, and the rate of acclimation is inversely proportional to temperature (Fowler, 2008; Ganeshan et al., 2008, 2009). The capacity to acquire freezing tolerance is closely associated with a requirement for vernalization, and maximum freezing tolerance is attained when plants are fully vernalized.
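One simple way to picture the combined effect of an induction threshold and a temperature-dependent acclimation rate is to accumulate ‘acclimation units’ only on days when the temperature sits below a cultivar-specific threshold. The toy model below is purely illustrative and is not taken from the studies cited here; the thresholds and daily temperatures are invented.

```python
# Illustrative toy model (not from the cited studies): acclimation accumulates
# only below a cultivar-specific threshold temperature, at a rate proportional
# to how far the temperature sits below that threshold.
def acclimation_units(daily_mean_temps_c, threshold_c):
    """Sum of degree-days below the induction threshold (toy measure only)."""
    return sum(threshold_c - t for t in daily_mean_temps_c if t < threshold_c)

autumn = [16, 14, 13, 12, 10, 8, 6, 5, 4, 3]   # invented daily mean temperatures (C)

for cultivar, threshold in [("hardy winter type (threshold 15 C)", 15),
                            ("spring type (threshold 8 C)", 8)]:
    print(cultivar, acclimation_units(autumn, threshold))
```

With these invented numbers the hardy type begins accumulating earlier and ends with far more units, echoing the point that high-threshold cultivars start preparing for winter sooner.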
After this general overview of the global changes in the transcriptome that occur as part of the process of cold acclimation and the acquisition of chilling and freezing tolerance, we will look at specific issues, such as signal perception and signal transduction, and will indicate where our studies give support, or otherwise, to currently held conceptions. This is not always possible, of course, because not all of these processes are under transcriptional control. For instance, changes in the transcriptome occur in response to the cold stimulus, and therefore cannot be part of the initial perception itself. Thus, in the initial discussion of stimulus perception, little can be added from our studies. Figure 4 provides a schematic representation of the main points that are touched upon in this review.
Figure 4. Schematic representation of the cold response in plant cells. The coloured rods in the nucleus represent genes; those with the coding region coloured blue are transcription factors, and those coloured green represent genes for effector molecules. Each gene is represented with only one cis-acting promoter region, but several may be present. ICE1 is a constitutive protein that is localized to the nucleus; post-translational mechanisms are involved in its activation (see Chinnusamy et al. (2007) for a review of this pathway). Membrane-bound kinases (RLK) might be involved in signal transduction as a result of the mechanical stress resulting from membrane rigidification. The annexins (An) and calcium-binding proteins (CBP) are activated by binding calcium (•). ABRE, ABA response element; CBP, calcium-binding protein; CCH, calcium channel; GLU, glutathione; KIN, kinases and phosphatases; RLK, receptor-like kinase; ROS, reactive oxygen species.
Calcium as the secondary messenger
Whatever the actual mechanism of perception, one of the earliest consequences of detecting a change in temperature is thought to be Ca2+ influx into the cytosol (Chinnusamy et al., 2006; Kaplan et al., 2006a). This may be mediated through membrane rigidification–activated mechanosensitive Ca2+ channels and/or induced through the presence of stress-induced reactive oxygen species. That Ca2+ influx is an important initial event in the monitoring of temperature change has been shown through experiments in which the administration of calcium chelators and calcium channel blockers prevented cold acclimation (Monroy et al., 1997). The spatial and temporal patterns of Ca2+ influx are thought to be characteristic of particular stimuli and are referred to as Ca2+ signatures (DeFalco et al., 2010). The information contained in these characteristic Ca2+ signatures is interpreted through an array of Ca2+-binding proteins (CBPs) that act as Ca2+ sensors (Kaplan et al., 2006a). The three main Ca2+ sensors in plants are calmodulin (CaM) and calmodulin-like proteins (CMLs), calcium-dependent protein kinases (CDPKs) and calcineurin B-like proteins (CBLs). These proteins, which contain a highly conserved EF-hand motif that binds Ca2+, have been shown to participate in the orchestration of calcium-directed signal transduction networks (Yang et al., 2004). As a consequence of binding Ca2+, CBPs undergo a conformational change that enables them to interact with and regulate (activate or inactivate) target proteins (DeFalco et al., 2010). In turn, these downstream effectors initiate a series of events that results in the large-scale reprogramming of gene expression that is seen in cold acclimation. It would be fascinating, therefore, if one could identify specific CBPs that are potentially involved in the initiation of specific cold-related gene cascades. Unfortunately, at the transcriptional level, we cannot draw any clear conclusions about any particular CBP. In our experiments, transcripts identified as CBPs behaved in a range of different ways, many of which were not indicative of their being involved in cold responses. However, given their importance in myriad signal sensing and transduction mechanisms, it is not surprising that we observed a range of different responses among the various CaM and CaM-binding proteins that were assayed on the array. Some, such as CaM4-1, showed up-regulation only in the leaf tissue of Paragon in the cold acclimation experiment and so are unlikely to play a role in cold acclimation. Several features on the array identified as putative calmodulins behaved similarly in all three varieties: that is, in the cold acclimation experiment they were up-regulated between week 3 and week 5 and then by week 9 had returned to their initial level; in the cold-shock experiment these transcripts also increased. Another putative calmodulin-binding protein was differentially up-regulated in the leaf tissue of the two winter varieties given a slow decline in temperature, but showed no response to a cold shock. One particularly interesting calmodulin-like protein showed a large decline in transcript abundance in all three varieties in the shock experiment (15- to 30-fold decline) and very little response under any other condition: a β-glucanase gene and WINV2 (an invertase gene) were two of the very few genes that were co-regulated with it.
Cold usually precedes freezing in nature and induces many physiological and biochemical changes in the cells of freezing-tolerant plant species that enable them to survive unfavourable conditions. Low temperature affects water and nutrient uptake, membrane fluidity and protein and nucleic acid conformation, and dramatically influences cellular metabolism either directly through the reduction in the rate of biochemical reactions or indirectly through the large-scale reprogramming of gene expression. A large number of low temperature–induced genes have been identified and characterized in plants (Tsuda et al., 2000; Zhang et al., 2009) and are referred to as Late Embryogenesis–Abundant (LEA), Dehydrin (DHN), Responsive To Abscisic Acid (RAB), Low Temperature–Responsive (LT) and Cold-Responsive (COR) genes. As a majority of these genes belong to the Lea family that commonly encode highly hydrophilic proteins, they are usually referred to as COR/LEA genes or simply COR genes. A positive correlation exists between the level of COR gene expression and that of freezing tolerance (Grossi et al., 1998; Baldi et al., 1999; Ohno et al., 2001; Vagujfalvi et al., 2003). For example, the over-expression of the wheat COR/LEA protein WCS19 in Arabidopsis improves freezing tolerance, although only of cold-acclimated leaves (Dong et al., 2002).
Among these gene products, many are structural proteins that are directly involved in protecting the plants from stress (e.g., protein chaperones, osmoprotectants, ice-binding proteins), while others are regulatory genes (e.g., transcription factors, protein kinases and enzymes involved in the synthesis of plant hormones) (Table 1). Individual transcription factors are thought to control many target genes through direct binding to cis-acting elements in the promoter regions of the target genes. The transcription factors and the genes controlled by them are collectively referred to as a ‘regulon’. One of the most studied regulons involved in cold responses is the CBF regulon driven by CBF transcription factors.
Table 1. Categories of genes induced by cold stress
|Genes induced by cold stress|
|Kinases||Enzymes of fatty acid metabolism|
|Phosphatases||Enzymes of osmolyte biosynthesis|
|Transcription factors||LEA proteins|
| ||Lipid transfer proteins|
| ||mRNA-binding proteins|
| ||Protease inhibitors|
| ||Water channel proteins|
A common feature of cold acclimation is the rapid induction of genes encoding CBF-like transcription activators (Jaglo et al., 2001; Thomashow, 2001; Thomashow et al., 2001). In Arabidopsis, these are named CBF1, CBF2 and CBF3 (rather confusingly, these are also referred to as dehydration-responsive element–binding proteins and named DREB1b, DREB1c and DREB1a, respectively). A role for CBF genes in the enhancement of freezing tolerance has been established through over-expression experiments. Constitutive expression of the CBF genes in transgenic Arabidopsis plants results in the induction of COR gene expression and an increase in freezing tolerance without a low-temperature stimulus (Jaglo-Ottosen et al., 1998; Gilmour et al., 2000). Significantly, multiple biochemical changes that are associated with cold acclimation and thought to contribute to increased freezing tolerance, such as the accumulation of simple sugars and the amino acid proline, occur in non-acclimated transgenic Arabidopsis plants that constitutively express CBF3 (Gilmour et al., 2000). Thus, it has been proposed that the CBF genes act to integrate the activation of multiple components of the cold acclimation response (Gilmour et al., 2000). This is referred to as the CBF regulon. The CBF regulon has been extensively studied in Arabidopsis (Nakashima and Yamaguchi-Shinozaki, 2006) and has been shown to be present in many species, both dicots and monocots (Dubouzet et al., 2003; Takumi et al., 2003; Kume et al., 2005; Oh et al., 2007). Even plant species that suffer damage at chilling temperatures and that are completely unable to tolerate freezing, such as tomato, maize and rice, also possess components of the CBF cold-response pathway (Jaglo et al., 2001; Nakashima and Yamaguchi-Shinozaki, 2006). The CBF transcription factors, which are members of the larger AP2/EREBP family of DNA-binding proteins (Campoli et al., 2009), recognize the cold- and dehydration-responsive DNA regulatory element designated the C-repeat/dehydration-responsive element (CRT/DRE). These elements, which have a conserved 5-bp core sequence of CCGAC, are present in the promoter regions of many cold- and dehydration-responsive genes. The CBF genes are induced within 15 min of plants being exposed to low, non-freezing temperatures, and after about 2 h one begins to see the induction of cold-regulated genes that contain the CRT/DRE regulatory element (Gilmour et al., 1998).
The CBF genes belong to a multigene family that has been divided into several groups. Plants belonging to the Poaceae (the grasses) contain CBFs that have been classified into ten groups, the members of which share a common phylogenetic origin and similar structural characteristics. Six of these groups (IIIc, IIId, IVa, IVb, IVc and IVd) are found only in the Pooideae (a subfamily of the grasses that contains the temperate cereals wheat, barley and rye). In wheat, there are up to 25 different CBF genes (Badawi et al., 2007): a cluster of these genes is found on the long arm of homoeologous group 5 chromosomes. This corresponds with a major QTL for frost resistance, the Fr2 locus. Expression studies reveal that five of the Pooideae-specific groups (CBFIIId, IVa, IVb, IVc and IVd) display higher constitutive and low temperature–inducible expression in winter cultivars compared to spring cultivars (Badawi et al., 2007; Sutton et al., 2009). The higher constitutive and inducible expression within these CBF groups may play a predominant role in the superior low-temperature tolerance capacity of winter cultivars and is possibly the basis of genetic variability in freezing tolerance within the Pooideae subfamily.
In our studies, we saw a range of responses from CBF genes—on the array, there were features representing 18 different CBFs. Five showed no response at all in either of the two experiments (CBFII-5, CBFIIIc-D3, CBFIIIc-B10, CBFIIId-15 and CBFIVd-D22). The other thirteen CBF genes showed a response under one or both of the experimental conditions (Table 2). In the shock experiment, only CBFIVa-A2 and CBFIVb-D20 were differentially expressed between spring and winter varieties; they accumulated in Harnesk and Solstice but not in Paragon. In the cold acclimation experiment, several transcripts responded differentially between the winter and spring cultivars (Table 2). The most dramatic differential response was for CBFIIId-12: in the crown tissue of the two winter wheat varieties the transcript increased more than 10-fold, while in Paragon it showed no response. It showed no response in the leaf tissue of any of the three varieties. Two transcripts, CBFIVd-B9 and CBF1, responded only in Paragon during the cold acclimation experiment; that is, in leaf tissue they increased over the course of the experiment.
Table 2. The response of CBF genes in plants exposed to a slow decline in temperature compared to those occurring when plants experience a cold shock
Surprisingly, there were no other transcripts with profiles of accumulation similar (at the 90% similarity level) to those of any of the CBF transcription factors. Therefore, there appears to be no direct correlation between the accumulation of the CBFs and that of the genes they control. The CBFs for which no response was observed might be involved in pathways unrelated to cold stress, they may have accumulated in a rapid, transient fashion, or they may be controlled in a non-transcriptional fashion.
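The kind of co-regulation screening described here (grouping transcripts whose accumulation profiles track one another) can be sketched as a simple correlation filter. The sketch below is only an illustration, not the authors' actual pipeline: the transcript names, expression values and the 0.90 cut-off are hypothetical, and a real analysis would work on properly normalized array data.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coregulated_with(target_id, profiles, threshold=0.90):
    """Return transcript IDs whose profile correlates with the target at or above the threshold."""
    target = profiles[target_id]
    return [tid for tid, prof in profiles.items()
            if tid != target_id and pearson(target, prof) >= threshold]

# Hypothetical profiles: log2 expression at five sampling points for three transcripts.
profiles = {
    "CBFIIId-12": [1.0, 1.2, 4.5, 8.0, 7.5],
    "COR-like":   [0.9, 1.1, 4.0, 7.8, 7.9],
    "CaM4-1":     [2.0, 2.1, 2.0, 1.9, 2.2],
}
print(coregulated_with("CBFIIId-12", profiles))  # ['COR-like']
```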
The considerable cross-talk that occurs between temperature-regulated and light-regulated pathways (Franklin, 2009) has been shown to occur in the expression of the CBF regulon. Some of the CBF transcription factors have been shown to be regulated in a light-dependent, diurnal fashion under growth at 20 °C (Badawi et al., 2007). In addition, it has been reported that light quality signals (red/far red ratio), mediated through the phytochromes and cryptochromes, regulate the expression of the CBF regulon (Franklin and Whitelam, 2007).
In Arabidopsis, a major gene acting upstream and controlling the expression of the CBF regulon is ICE1 (INDUCER OF CBF EXPRESSION 1). The product of this gene is a MYC-type basic helix–loop–helix transcription factor that binds MYC recognition sites (the ICE1-box) in the promoter of CBF3 and induces its expression. The ice1 mutant is defective in the cold induction of CBF3, is sensitive to chilling stress and completely unable to cold acclimate (Chinnusamy et al., 2007). Conversely, the constitutive over-expression of ICE1 in transgenic Arabidopsis enhanced the expression of CBF2, CBF3 and COR genes during cold acclimation and increased freezing tolerance. ICE1 is a constitutively expressed gene and post-translation modification of its protein product, which is localized to the nucleus, is required for CBF induction. A similar mechanism is probably present in other species, because over-expression of ICE1 in transgenic rice improves cold tolerance (Xiang, 2003), and ICE1-like genes (TaICE41 and TaICE87) that have been shown to bind the MYC elements in the promoters of certain CBF genes have been found in wheat (Badawi et al., 2008). The over-expression of either TaICE41 or TaICE87 in transgenic Arabidopsis enhanced freezing tolerance, although only upon cold acclimation. The increased freezing tolerance in transgenic Arabidopsis was associated with a higher expression of the cold-responsive activators AtCBF2 and AtCBF3 and of several cold-regulated genes. Unfortunately, there is no probe set for ICE-like genes on the Affymetrix wheat array, so we were unable to monitor its abundance in our experiments. However, these are reported to be constitutively expressed (Badawi et al., 2008), so we may not have observed cold-related change in their abundance.
Although the CBF regulon appears to be one of the main regulatory pathways involved in cold acclimation, and it is certainly the most studied, it is by no means the only one. In Arabidopsis, for example, only about 12% of all cold-induced genes are thought to be responsive to the CBF regulon (Chinnusamy et al., 2007), while in wheat at least one-third of the genes induced by cold are not responsive to CBF regulation (Monroy et al., 2007). Obviously, there must be additional regulatory mechanisms involving other transcription factors and their regulons (Fowler and Thomashow, 2002; Vergnolle et al., 2005).
WRKY transcription factors
WRKY transcription factors are members of a large gene family that includes 74 members in Arabidopsis and over 100 in rice (Berri et al., 2009). They are found almost exclusively in plants, although they are also found in some green algae (Eulgem et al., 2000). They are characterized by the presence of one or two highly conserved 60 amino acid WRKY domains which contain a zinc finger motif that provides DNA binding; on the basis of the number and nature of their zinc-finger motifs, the genes are assigned to three separate groups. The WRKY domain binds sequence specifically to the W Box DNA element (C/T)TGAC(C/T) of target genes, which are defined as elicitor-responsive elements. Several defence-related genes in plants have over-representation of W boxes in their promoters—WRKY genes themselves have W boxes in their promoters and may be self-regulated to some degree. WRKY transcription factors have been reported to be involved in various physiological programmes and, in addition, to respond to pathogen attack. However, more recently they have been shown to be involved in responses to abiotic stimuli (Mare et al., 2004), and it has been reported that WRKY transcription factors may be involved in cold hardening in wheat (Talanova et al., 2009).
We saw evidence for cold induction of some WRKY transcription factors in our study. In the cold acclimation experiment, sequences identified as WRKY5 and WRKY10 showed transcript accumulation in the leaf tissue of Harnesk (over 40-fold) and Solstice (c.20-fold), but no change in Paragon (Figure 5). Neither of these transcription factors was induced after 2 days of a cold shock. This is worthy of note, because these transcription factors would not have been detected under experimental conditions in which only a short cold shock was applied. This might explain why these transcription factors have been so little studied with respect to cold acclimation. Interestingly, Talanova et al. (2009) identified a WRKY transcription factor that responded rapidly (within 15 min) and dramatically (‘by a factor of several tens’) upon plants being placed at 4 °C; thereafter, over a period of several days, the transcript returned to basal levels. Obviously, we would not have observed this change in our cold acclimation experiment. Thus, there may be several different WRKY transcription factors that control different sets of genes involved in response to cold.
Figure 5. Profile of accumulation for the transcription factor WRKY5 (red line) and the transcripts that were coregulated with it (>90% identity). C, crown; L, leaf; H, Harnesk; P, Paragon; S, Solstice.
In our studies, several genes with obvious roles as stress-related effector molecules were co-regulated with the WRKY transcription factors (Figure 5). Perhaps, the most significant of these co-regulated transcripts were some of the glucanases, chitinases and thaumatin-like proteins that have been shown to play a significant role as effectors in freezing tolerance (see later section and Figure 6). Additionally, the following transcripts were also up-regulated in a similar fashion to some of the WRKY transcription factors: a Mlo3-like protein (Mlo3 in barley is a transmembrane protein involved in defence against fungal attack), gibberellin pathway paralogues that might play a role in signal transduction, and an oxalate oxidase-like protein (a germin—see later section) that could play a role in scavenging of reactive oxygen species.
Figure 6. Profile of transcript abundance for (a) glucanase Glb2b and (b) ice-recrystallization inhibitor 2. The two genes showed similar profiles of accumulation in the ‘cold acclimation experiment’ but only IRI responded to a cold shock.
Cold and freezing conditions give rise to several stresses in addition to their direct effect on biochemical reactions and the physical damage caused by ice formation. Thus, cold-induced effector molecules are quite varied in their respective functions (Table 1): osmoregulants such as sugars and proline, which may act to stabilize cell membranes (lipid metabolism, membrane proteins); chaperones that act to protect proteins from cold-induced structural change; inhibitors of ice formation; photosynthetic enzymes involved in establishing homoeostasis between photosystems I and II and the biochemical reactions of the Calvin cycle; enzymes involved in the up-regulation of respiration (Cook et al., 2004); and reactive oxygen species scavengers.
A major consequence of cold stress is dehydration and osmotic stress, and several of the COR genes are dehydrins. Dehydrins are a distinct biochemical group of LEA proteins (known as LEA D-11 or LEA II) characterized by the presence of a lysine-rich amino acid motif, the K-segment (Allagulova et al., 2003; Kosova et al., 2007). They are highly hydrophilic, soluble upon boiling and rich in glycine and polar amino acids. Their expression is induced by various environmental factors—heat, drought, salinity—that cause cellular dehydration (Kosova et al., 2007). Extreme cold and frost can also lead to osmotic stress, and it has been shown that the induction and accumulation of dehydrins is an important part of the cold acclimation apparatus of winter cultivars of the cereals (Stupnikova et al., 2002, 2004; Borovskii et al., 2005). It is thought that they can act either as emulsifiers or chaperones in the cells, protecting proteins and membranes against unfavourable structural changes caused by dehydration. They have also been shown to bind to mitochondrial membranes in a seasonal-dependent manner: during the winter they accumulate, while during the spring they decline in abundance (Borovskii et al., 2005). In our experiments, we saw very high induction (up to 40-fold increase) of some of the dehydrins. These increased in both tissues of all three varieties under both sets of experimental conditions, but in the cold acclimation experiment they accumulated less in the leaves of Paragon than in the leaves of the two winter varieties.
The well-characterized wheat cold-specific (WCS120) gene family belongs to the Cor/Lea superfamily (Fowler et al., 2001). The WCS120 protein family members share homology with the Lea D11 dehydrins (Thomashow, 1999; Kosova et al., 2007). As shown by biochemical, immunohistochemical, molecular and genetic analyses, this gene family is specific to the Poaceae (Sarhan et al., 1997). They encode a group of highly abundant proteins ranging in molecular weight (MW) from 12 to 200 kDa; among these, the five major members, WCS200 (MW = 200 kDa), WCS180 (180 kDa), WCS66 (66 kDa), WCS120 (50 kDa) and WCS40 (40 kDa), are inducible by cold treatment (Sarhan et al., 1997). Members of the WCS120 family of proteins are thought to play a significant role in frost tolerance because of their higher induction in winter-hardy compared to tender spring wheat plants (Vitamvas et al., 2007; Vitamvas and Prasil, 2008). Indeed, because of their abundance it has been suggested that the WCS120 proteins could serve as molecular markers for frost tolerance in the Gramineae (Houde et al., 1992). Unfortunately, of the various members of the WCS120 family, only WCS66 has a probe set on the Affymetrix Wheat Array GeneChip. The WCS66 transcript accumulated in both the cold-shock and cold acclimation experiments, but accumulated to a greater degree in the leaves of winter wheat (12-fold and 5-fold in Harnesk and Solstice, respectively) than in the spring wheat (two-fold). In crown tissue, statistically significant differences in accumulation pattern were not observed.
A cereal-specific protein, Wheat Low Temperature–Responsive 10 (WLT10), that is induced by cold has been shown to differentiate hardy and tender wheat cultivars (Ohno et al., 2001). A freezing-tolerant winter cultivar, M808, accumulated mRNA more rapidly and over a longer period than a tender spring variety (Chinese Spring). The increase in transcript abundance was temporary, but the peak occurred at the time when maximum freezing tolerance was attained (at 3 days under a cold-shock regime). Interestingly, the transcript was reported to accumulate to different levels under different light/dark regimes, once again indicating the importance of light in the perception of cold. In our cold acclimation study, WLT10 transcripts accumulated principally in the leaf tissues with some evidence of slightly greater accumulation in the two winter varieties than in Paragon (13- to 15-fold increase in the winter varieties compared to a six-fold increase in Paragon). Induction occurred after the 5th week, there being a small decline in abundance prior to this. In the cold-shock experiment, there was a dramatic and similar increase in all three varieties.
Oxygen free radicals
Cold or chilling stresses have a dramatic effect on plant metabolism, causing the disruption of cellular homeostasis and the uncoupling of major physiological processes, leading to the accelerated formation of oxygen-based free radicals (Suzuki and Mittler, 2006). These radicals are toxic molecules capable of disrupting cell function, and they may even cause sufficient damage to result in cell death. Chloroplasts are highly sensitive to damage by the reactive oxygen species (ROS) that are generated by the reaction of chloroplastic O2 and the electrons that escape from the photosynthetic electron transfer system (Foyer et al., 1994). Cells possess antioxidants and antioxidative enzymes capable of interrupting cascades of uncontrolled oxidation in cellular organelles. Oxidative stress results from the imbalance between the formation of ROS and their neutralization by antioxidants. Various processes disrupt this balance by increasing the formation of free radicals in relation to the available antioxidants (Talukdar et al., 2009). Under optimal conditions for growth, ROS are produced at a low level, but during stress their rate of production is greatly increased. The accumulation of enzymes and metabolites that cooperatively scavenge ROS is thus an important part of the cold acclimation process (Tao et al., 1998). Antioxidants such as ascorbic acid and glutathione, and ROS-scavenging enzymes such as superoxide dismutase (SOD), ascorbate peroxidase (APX), catalase (CAT), glutathione peroxidase (GPX) and peroxiredoxin (PrxR), are involved in stress-related removal of ROS. We observed changes in some of these genes. However, most showed no statistically significant change in abundance. Transcripts for various glutathione transferases and some peroxidases were interesting exceptions to this. Glutathione transferases (GSTs), which are encoded by a large and diverse gene family in plants and perform a range of functions, may exhibit glutathione peroxidase activity and may also play a role in stress-related signal transduction (Dixon et al., 2002a,b). Interestingly, in the cereals GSTs are constitutively very highly expressed, representing up to 2% of all protein in the leaves (Dixon et al., 2002b). This was clearly seen in our analysis, many of these genes having very high basal levels of expression in the leaves. They have also been reported to be transcriptionally controlled and to be induced by various abiotic stresses [see review by Dixon et al. (2002b)]. In our studies, some GSTs increased exclusively in the leaves of the two winter varieties, while others accumulated to a greater extent. Therefore, they may well be involved in the response to cold stress and be part of the mechanism to remove ROS.
Flavonoids are secondary metabolites derived from phenylalanine and acetate metabolism that perform a variety of essential functions in higher plants including playing an important role as antioxidants (Winkel-Shirley, 2002). Chalcone synthase and chalcone isomerase are key enzymes in flavonoid biosynthesis and in our experiments showed differential patterns of transcript accumulation between the winter and spring varieties. In the cold acclimation experiment, the transcript for naringenin-chalcone synthase exhibited a winter wheat–specific increase in leaf tissue (20-fold in Harnesk and only three-fold in Solstice, but this had a much higher initial basal level). A putative UDP-glucose: flavonoid 7-O-glycosyltransferase showed a similar profile of accumulation, while a transcript for a chalcone isomerase–like enzyme declined in the leaves of the winter varieties. These profiles might be indicative of the involvement of the flavonoid pathway in cold stress responses. The transcript for a chalcone isomerase–like gene was constitutively much more highly expressed in the two winter varieties (c. 20-fold) than in Paragon; however, it showed down-regulation in all three cultivars in both experiments.
In red beet (Beta vulgaris), it has been found that a 5-O-glucosyltransferase (GT), an enzyme involved in the synthesis of the pigment betacyanin, is induced by wounding, bacterial infiltration and as a consequence of oxidative stress (Sepulveda-Jimenez et al., 2005). The authors concluded that ROS act as a signal to induce BvGT expression, which is necessary for betanin synthesis, and that betacyanins act as ROS scavengers. We observed differential expression of betanidin-5-O-glucosyltransferase between the spring and winter varieties, with marked accumulation (up to 20-fold) in the leaf tissue of the latter only. A particularly interesting set of genes appeared to be co-regulated with this (Pearson correlation of at least 95%): there were several glucanases, which are thought to act as antifreeze proteins (see next section), a number of transcripts for pathogen-related proteins and genes for other enzymes involved in pigment biosynthesis that might themselves play a role in ROS-scavenging or ROS-induced signal transduction.
Once cold-acclimated, cold-hardy cultivars of wheat are able to tolerate temperatures as low as −25 °C (Yoshida et al., 1997), while some of the forage grasses are able to withstand temperatures as low as −30 °C (Moriyama et al., 1995). Under freezing conditions, cell membranes are thought to be the main sites of injury (Thomashow, 1999; Uemura et al., 2006). Freezing tolerance, therefore, is closely related to the mechanisms by which plant cells avoid injury to the cellular membranes (Uemura et al., 2006; Yamazaki et al., 2009). A major part of this depends on the capacity to withstand extracellular ice formation and the ability to prevent its formation within the cell. Extracellular freezing results in freeze-dehydration because of the movement of water from the cytoplasm to the growing ice crystals and freeze-induced dehydration is thought to be the major factor causing injury to the plasma membrane (Yamazaki et al., 2009). Ice formation also produces mechanical stress with deformation and apposition of cellular membranes that can lead to cell rupture and loss of semi-permeability. A key pre-emptive function of cold acclimation, therefore, is to put in place mechanisms to stabilize membranes against potential freezing injury (Uemura et al., 2006; Yamazaki et al., 2009). This includes the production of antifreeze proteins that either retard ice formation or limit its growth, osmoprotectants that protect membranes and proteins from the effects of dehydration and the modification of cell membrane composition. The best studied of these mechanisms is that related to the inhibition of ice formation and growth through the production of a range of antifreeze proteins (AFPs).
Chitinases, glucanases, thaumatin-like proteins
As temperatures drop below freezing, ice formation initiates in the extracellular spaces and xylem vessels because the extracellular fluid generally has a lower solute concentration and consequently a higher freezing point than the intracellular fluid (Pearce, 1986; Pearce and Ashworth, 1992). During cold acclimation, freezing-tolerant plants accumulate antifreeze proteins in anticipation of the arrival of freezing conditions. These proteins, which principally accumulate in the apoplast (xylem-lumena, cell wall and intercellular spaces), include a diverse range of proteins which have the common characteristic of being highly similar to pathogen-related (PR) proteins (Griffith and Yaish, 2004). These are the chitinases, glucanases and thaumatin-like proteins (Antikainen et al., 1996; Pihakaski-Maunsbach et al., 1996; Bishop et al., 2000; Stahl and Bishop, 2000; Griffith and Yaish, 2004). All three of these protein groups belong to large gene families, the members of which have undergone extensive evolutionary change and functional diversification (Bishop et al., 2000; Stahl and Bishop, 2000). Thus, they have evolved to perform many biological roles including responses to abiotic and biotic stress (Karlsson and Stenlid, 2008).
Pathogen-related proteins are released into the apoplast in response to infection and act together to enzymically degrade fungal cell walls and to inhibit the action of fungal enzymes. Similarly, antifreeze proteins are also targeted to the apoplast (Griffith and Yaish, 2004) and form complexes of various composition (Yaish et al., 2006). However, rather than interfering with the growth of pathogens, they have the capacity to bind to ice crystals and inhibit their growth (Moffatt et al., 2006). They also inhibit ice recrystallization, a process that occurs when temperatures fluctuate about the freezing point, resulting in the migration of water molecules from small ice crystals to larger ones (Knight et al., 1984). Although it does not appear that chitinase- and glucanase AFPs contain a particular ice-binding domain, the characteristic that distinguishes them from pathogen-related proteins is their capacity to assume a three-dimensional structure that presents an ice-binding surface (IBS) (Yeh et al., 2000; Yaish et al., 2006). These bind ice crystals through hydrogen bonds and van der Waals forces, and in doing so inhibit their growth and recrystallization (Yeh et al., 2000; Griffith and Yaish, 2004). Interestingly, some of these AFPs have also retained their capacity to interact with pathogens and are thought to provide a pre-emptive defence against cold-loving (psychrophilic) fungi, such as the snow moulds, that can be a serious problem in the cultivation of forage and cereal crops (Hoshino et al., 2009).
In our study, the transcripts of several glucanases, chitinases and thaumatin-like proteins showed a winter wheat–specific increase in abundance totally consistent with a role as AFPs (Figure 6). That is, they showed a marked increase in abundance in the leaf tissue of the winter varieties, but no response in Paragon. In the leaf tissue of the two winter varieties, transcript abundance for TaGLB2b and TaGLB2b increased 160- and 57-fold, respectively: there was no response to a short cold shock.
Ice recrystallization inhibition proteins (IRI)
The two ice recrystallization inhibitor proteins, TaIRI-1 and TaIRI-2, belong to a class of ice-binding proteins thought to be specific to the grass subfamily, Pooideae, which includes wheat, barley and rye (Tremblay et al., 2005). These bipartite proteins contain a short N-terminal leucine-rich repeat (LRR) domain which shows homology to that of receptor kinases and a C-terminal repeat domain that shows homology to the ice-binding domains of other antifreeze proteins and that has been reported to exhibit strong ice recrystallization inhibitory properties (Sandve et al., 2008). In our experiments, transcripts for the two ice recrystallization inhibition proteins, TaIRI-1 and TaIRI-2, greatly increased in abundance as a consequence of both a gradual decline in temperature and a cold shock (Figure 6). This latter detail distinguishes them from the chitinase, glucanase and thaumatin-like AFPs that showed no statistically significant changes on exposure to a cold shock (Figure 6). Additionally, their induction came later than that of the other AFPs, occurring between the fifth and ninth weeks, rather than showing initial induction between the third and fifth weeks. The most marked response was in the leaves, particularly in the case of the TaIRI-2 transcript, and there was much greater accumulation in the winter varieties than in Paragon. Interestingly, there were very few other transcripts that behaved in a similar fashion and so could be thought to be co-regulated. Among this group of genes, however, were WLT10, the BLT14-1 and BLT14-2 proteins (which are themselves closely related to WLT10), a dehydrin, a xyloglucan endotransglycosylase and an undefined plasma membrane protein (Unigene code Ta.4222). Finally, we observed a winter wheat–specific response in leaf tissue for the transcript for a putative polygalacturonase inhibitor. A protein of this type has been reported to act as an ice recrystallization inhibitor (Worrall et al., 1998).
It has long been considered that the accumulation of compatible solutes (organic osmoprotectants) in the cytoplasm contributes to freezing survival by reducing the rate and extent of cellular dehydration, by sequestering toxic ions, and/or by protecting macromolecules against dehydration-induced denaturation (Steponkus, 1984). Carbohydrates, in particular, are recognized as playing an important role in freezing tolerance (Livingston et al., 2006), and the accumulation of simple sugars such as trehalose, raffinose and sucrose has been shown to be correlated with enhanced freezing tolerance (Wanner and Junttila, 1999; Pennycooke et al., 2003; Kaplan et al., 2006b). There have been several studies on the membrane-stabilizing effect of various sugars, suggesting a relationship between carbohydrate accumulation and freezing tolerance: trehalose (Crowe, 2002), raffinose (Pennycooke et al., 2003), sucrose (Hincha and Hagemann, 2004) and fructans (Livingston et al., 2009). However, changes in sucrose levels have been shown to occur very rapidly—within 1 h at 4 °C—and this response did not appear to be driven by transcript abundance (Kaplan et al., 2007).
In Arabidopsis and Petunia, raffinose is reported to show cold-related accumulation (Wanner and Junttila, 1999; Pennycooke et al., 2003). This trisaccharide accumulates as a result of down-regulation of α-galactosidase, the enzyme responsible for its breakdown. Over the time course of our experiment, we observed a significant increase in galactinol synthase (GolS), the first enzyme in the pathway that leads to the synthesis of raffinose (Taji et al., 2002). GolS is involved in carbon partitioning between sucrose and raffinose, a process that might be important in producing simple sugars as osmoprotectants. It has also been reported that cold-stimulated synthesis of GolS is under the control of the key cold- and dehydration-responsive transcription factor, DREB1a (CBF3) (Taji et al., 2002; Maruyama et al., 2009), and that galactinol and raffinose scavenge hydroxyl radicals as part of their function to protect plants from the potential oxidative damage that may result from chilling (Nishizawa et al., 2008). There also appears to be a tendency of the two winter varieties to increase transcript levels for sucrose phosphate synthase (SPS), an enzyme that shunts carbohydrate away from starch synthesis and into sucrose accumulation. It is important to note that both GolS and SPS are members of small gene families, the members of which respond differently to any particular stimulus (Taji et al., 2002; Castleden et al., 2004). In our cold acclimation experiments, transcripts for SPS1, 2 and 5 did not show any change. Transcripts for SPS 7 and 9 increased in abundance as temperature dropped, and there was differential expression between the winter and spring varieties. Similarly, the transcripts for sucrose synthase 1 and 2 accumulated in both experiments and in all three varieties used in the study. In the cold acclimation experiment, they accumulated to a greater degree in Harnesk and Solstice than in Paragon.
Annexins belong to a multi-gene family of multi-functional membrane- and Ca2+-binding proteins. All annexins are soluble proteins that contain a highly conserved calcium-binding domain and a variable N-terminal region. The characteristic feature of these proteins is that they can bind membrane phospholipids in a reversible, Ca2+-dependent manner. Their special feature is that they can behave as either cytosolic, peripheral or integral membrane proteins (Talukdar et al., 2009). They are principally cytosolic but, depending on local conditions of cytosolic free calcium, pH and membrane voltage, either attach to or insert into either plasma- or endomembranes (see reviews by Laohavisit and Davies (2009) and Talukdar et al. (2009)). They are thought to be involved in a diverse range of cellular functions. They may act as plant ion transporters that could account for channel activities in plasma membranes (Mortimer et al., 2008; Laohavisit and Davies, 2009). They may also operate in signalling pathways involving cytosolic free calcium and reactive oxygen species (Mortimer et al., 2008; Laohavisit and Davies, 2009; Talukdar et al., 2009). Some of these properties have been reported for animal annexins (Gerke and Moss, 2002), but have not been experimentally demonstrated in plants (Talukdar et al., 2009). There is the interesting possibility that they could act in ROS detoxification during oxidative stress and may also be involved in ROS-mediated cell signalling. Annexins have been shown to have a role in the generation and propagation of calcium signals in nodule formation in Medicago truncatula (Talukdar et al., 2009). Breton et al. (2000) identified four cold-induced annexins in wheat and showed that they are intrinsic membrane proteins: their association with the membrane was shown to be calcium independent. In general, a rise in cytosolic Ca2+ promotes relocation of annexins to membranes and as a consequence, they have been implicated in Ca2+-driven signal transduction. In our study, an annexin, highly similar to annexin p33 of Zea mays, accumulated preferentially in the leaves of the two winter cultivars.
Germins and germin-like proteins (GLPs)
The germins and GLPs belong to the cupin superfamily based on the possession of a highly conserved β-barrel motif involved in metal binding (Zimmermann et al., 2006; Davidson et al., 2009). They are thought to play roles in calcium regulation, oxalate metabolism and responses to pathogenesis. True germins show oxalate oxidase activity and are found only in cereals. GLPs, on the other hand, are a much more diverse group of proteins that are encoded by a heterogeneous group of genes present in many land plants including monocots, dicots, gymnosperms and mosses. GLP is a term referring either to all germin motif-containing proteins with unknown enzyme activity or to those that do not possess oxalate oxidase activity. Interestingly, about two-thirds of the germins and GLPs analysed by Davidson et al. (2009) showed responses to various biotic and abiotic stresses. They are all glycoproteins associated with the extracellular matrix and may either (i) have enzymic activity (oxalate oxidase or superoxide dismutase), (ii) be structural proteins or (iii) act as receptors. For a recent review, see Davidson et al. (2009).
Germins and GLPs in our study showed a range of responses that differentiated the two tissues and the spring and winter varieties. An oxalate oxidase precursor accumulated preferentially in the leaves of the winter varieties, while a second accumulated preferentially in the leaf tissue of Paragon. Others accumulated in the leaves of all three varieties, but to a lesser extent in Paragon than in either Harnesk or Solstice. These GLPs might therefore be involved in basal responses to cold temperature, with the greater accumulation in winter varieties determining, in part, their enhanced capacity to tolerate extreme temperatures. | http://onlinelibrary.wiley.com/doi/10.1111/j.1467-7652.2010.00536.x/full | 13 |
74 | In flotation, the buoyant force equals the weight of the floating object and the volume of the object is always greater than the volume of water displaced. Flotation problems can be solved using Archimedes' Principle.
Alright so let's talk about flotation. Flotation is a major application of Archimedes' Principle and it's very different from just the simple idea of buoyancy because when you have flotation, you always have the buoyant force equaling the weight. And that's true for a very simple reason: if something's floating then its weight has got to be canceled by the buoyant force, and remember what the buoyant force is, it's just that net force that the fluid puts on the solid when you try to immerse it in the fluid. Alright, so when we have a floating object the amount of volume that's displaced is always going to be less than the total volume of the solid. Because if it was equal then the solid would be entirely immersed and I wouldn't be correct in calling it floating, and there's no way in which I could displace a greater amount of volume than the volume of the solid. I mean the solid is not going to be carrying air with it, you know, and if it was then I would call that air part of the solid too.
Anyway so the volume displaced must always be less than the total volume of the solid. Okay so flotation problems usually are asking 1 of 2 things. What's the fraction of the volume that's submerged, or what's the fraction of the volume that is visible? Alright so let's look at a sample problem. What fraction of the volume of an iceberg is visible above the ocean water? Alright now in order to solve this problem we're going to use that ratio idea that we've seen before but we can derive it again here. So we're going to say the buoyant force divided by the weight, which of course the buoyant force is equal to the weight, so this ratio has got to be 1, right. So the buoyant force divided by the weight will be the density of the fluid times the volume displaced times gravity, divided by the density of the solid times the total volume times gravity.
And that gives us a very, very important relationship that we can always use when something is floating, as long as it's a uniform substance. We have that the density of the fluid divided by the density of the solid is equal to the total volume over the volume displaced. So that means that the ratio of volume displaced to total volume is the same as the ratio of the density of the solid to the density of the fluid, just by flipping that guy upside-down. Alright, well I know the density of ice, it's 917 kilograms per cubic meter; for the density of the fluid we'll go ahead and use ocean water rather than pure water because it's a little bit different, it's 1,025, alright. Now this is going to be the fraction that is displaced, so that's the fraction underneath. But I didn't want the fraction underneath, I wanted the part that was visible above, so we'll say well jeez, the part that's visible above plus the part that's not visible, that's beneath, okay, it's got to be 1. So that means that the answer, the volume visible divided by the total volume, has got to be equal to 1 minus the density of the solid over the density of the fluid.
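Written out symbolically, the relationship just derived looks like this (a restatement only, nothing new is being assumed):

```latex
F_b = W
\;\Rightarrow\;
\rho_{\text{fluid}}\, V_{\text{displaced}}\, g = \rho_{\text{solid}}\, V_{\text{total}}\, g
\;\Rightarrow\;
\frac{V_{\text{displaced}}}{V_{\text{total}}} = \frac{\rho_{\text{solid}}}{\rho_{\text{fluid}}},
\qquad
\frac{V_{\text{visible}}}{V_{\text{total}}} = 1 - \frac{\rho_{\text{solid}}}{\rho_{\text{fluid}}}.
```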
And it turns out if you plug those numbers into a calculator that you'll get something like 10.5%. So that when you're driving your ocean liner through the Arctic Circle and you see an iceberg and you say well jeez that's a pretty big iceberg, you should keep in mind that about 89.5% of that iceberg you can't see. Alright so there's one thing, let's go ahead and look at the next one. We've got a raft and it's 2.5 meters by 1 meter by 20 centimeters, okay, so that's giving us the volume of the raft, and the weight of the raft is 30 Newtons. Now I've got an 800 Newton person who's going to go ahead and lay on that raft and I want to know how many centimeters of the raft are visible above the surface after I got this person laying down on the raft, right?
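A quick numeric check of the iceberg figures, using the same two densities quoted above (a minimal Python sketch):

```python
rho_ice = 917.0     # kg/m^3, density of ice (value quoted above)
rho_sea = 1025.0    # kg/m^3, density of ocean water (value quoted above)

# For a floating body: V_displaced / V_total = rho_solid / rho_fluid
submerged_fraction = rho_ice / rho_sea
visible_fraction = 1.0 - submerged_fraction

print(round(submerged_fraction, 3))  # 0.895 -> about 89.5% of the iceberg is under water
print(round(visible_fraction, 3))    # 0.105 -> about 10.5% is visible
```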
And then after that I want to know what's the maximum weight that the raft can support without sinking, alright, let's see how this goes. So we've got our 800 Newton person and then we've got our 30 Newton raft. So that means the buoyant force, remember the buoyant force with flotation always equals the total weight, it has to. So this is going to be 830 Newtons, the weight of the person plus the weight of the raft. Alright, F buoyant is always equal to the density of the fluid times the volume displaced times gravity. So now I can just get the volume displaced. The volume displaced will be 830 Newtons divided by the density of the fluid, which we'll take to be water, times the acceleration due to gravity, which of course is 9.8 like it always is. So when we do this division we'll end up with 0.0847 cubic meters, so that's how much of the volume of the raft is displacing the water, is actually underneath the water, right?
But we want to know what height, how many centimeters of the raft, alright? So we've got to divide by the cross-sectional area of the raft. So the raft looks something like this, right, where this is 1 meter, this is 2.5 meters and this distance right here was 20 centimeters, okay. So if we want to know how many centimeters of the raft are under the water then we've got to take the volume that's under the water and divide by this cross-sectional area, 2.5 meters by 1 meter. So we'll divide by 2.5 square meters and we'll have the height displaced; when we do that division we end up with 3.4 centimeters. So therefore, height visible, well most of it is visible: 16.6 centimeters of the raft are visible above the water when you've got this 800 Newton person laying on it, alright.
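The raft numbers can be checked the same way (a sketch using only the values given in the problem):

```python
g = 9.8                       # m/s^2, acceleration due to gravity
rho_water = 1000.0            # kg/m^3, density of fresh water
raft_area = 2.5 * 1.0         # m^2, top surface of the raft
raft_thickness = 0.20         # m (20 centimeters)

total_weight = 800.0 + 30.0   # N, person plus raft

# Floating: buoyant force = total weight = rho_water * V_displaced * g
v_displaced = total_weight / (rho_water * g)        # about 0.0847 m^3
depth_submerged = v_displaced / raft_area           # about 0.034 m
height_visible = raft_thickness - depth_submerged   # about 0.166 m

print(round(depth_submerged * 100, 1))   # 3.4  centimeters under the water
print(round(height_visible * 100, 1))    # 16.6 centimeters visible
```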
What about the maximum weight that this raft can support? Well at maximum the raft is going to go all the way down so that 20 centimeters will be displaced. So that means that my weight mg will have to equal the density of the fluid times the whole volume of the raft, because now it's all the way submerged, because this is the maximum, times g. And of course we can put that in as 10 to the third kilograms per cubic meter. The whole volume is nothing more than 2.5 times 1 times, and I've got to multiply by 20, no, remember it's got to be in SI units so I have to express that in terms of meters, so it'll be 0.2, so it'll be 2.5 times 1 times 0.2 cubic meters, times 9.8 meters per second squared. And when you multiply all these out you'll end up with 4,900 Newtons. Now of course 30 Newtons of that is the weight of the raft, so that means that 4,870 Newtons, which is a huge amount of weight, can be supported by this raft. Alright so that's the way that problem goes.
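And the maximum load, reached when the whole 20 centimeters is pushed under the surface (same assumptions as above):

```python
g = 9.8                          # m/s^2
rho_water = 1000.0               # kg/m^3
raft_volume = 2.5 * 1.0 * 0.20   # m^3, the entire raft submerged
raft_weight = 30.0               # N

max_total_weight = rho_water * raft_volume * g   # buoyant force with the raft fully under water
max_load = max_total_weight - raft_weight        # what is left over for the person or cargo

print(round(max_total_weight), round(max_load))  # 4900 4870 (Newtons)
```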
Now I wanted to do one other problem that's just kind of a qualitative idea of the way that these things work. I really like these problems because they make you think about it but they don't have any numbers in them at all. You're just kind of looking at the way that it works. So let's say that we've got this raft right here and we're going to do something a little bit different with it. We're going to take this raft and we're going to put a lead ball on top of it and we're going to ask what happens to the level of water, well if we put that lead ball on top of the raft. That lead ball's weight is adding to the weight of the raft, but we said that the buoyant force has to equal the weight right it's still floating. So what that means is that this thing is going to displace more water it's going to go down a little bit further. And that's going to cause the water level to go up a little bit and that's because the raft is now displacing its own weight plus the weight of the lead ball.
Alright now what happens if we then take this lead ball and we drop it to the bottom of the pool? And we want to know now does the water level go up, go down or remain the same? Now you could go through and try to put in numbers and do all that that would be a lot of work. So let's just think about it in a really straight forward way. When the lead was on top of the raft, it was part of a floating object and that means that it was displacing an amount of water equal to its own weight. But now when I take that ball and I drop it into the pool so that it goes all the way down to the bottom now it's only displacing its own volume. Since the density of lead is greater than the density of water, when it's displacing its own volume it's not displacing as much water as when it's displacing its own weight. And that means that the water level will go down when I take the lead and I drop it into the water, because it's not displacing as much. Alright that's floatation. | http://www.brightstorm.com/science/physics/solids-liquids-and-gases/flotation/ | 13 |
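The lead-ball argument can also be made numerical. This is only an illustration: the 1 kg mass and the density of lead used here are assumed values, not part of the original question.

```python
rho_water = 1000.0     # kg/m^3
rho_lead = 11300.0     # kg/m^3, approximate density of lead (assumed for illustration)
mass_ball = 1.0        # kg, assumed mass of the ball

# Ball riding on the raft: the floating system must displace the ball's weight worth of water,
# so the extra displaced volume is m*g / (rho_water*g) = m / rho_water.
v_displaced_on_raft = mass_ball / rho_water    # 0.001 m^3

# Ball resting on the bottom: it displaces only its own volume.
v_displaced_on_bottom = mass_ball / rho_lead   # about 0.0000885 m^3

# Less water is displaced once the ball sinks, so the water level drops.
print(v_displaced_on_raft > v_displaced_on_bottom)   # True
```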
56 | Educational Research On-Line
Variables and Scales of Measurement
See supplemental readings below.
A variable is simply anything that varies, anything that assumes different values or categories. For example, sex varies because there is more than one category or classification: female and male. Race is a variable because there is more than one category: Asian, Black, Hispanic, etc. Age is a variable because there is more than one category: 1 year, 2 years, 3 years, etc.
Conversely, a constant is anything that does not vary or take different values or categories. For example, everyone participating in this course is a student, so that is not a variable: it does not vary because it has only one category. As another example, consider a group of white females. With this group, neither race nor sex varies, so race and sex are constants for these people.
Exercise: Identifying Variables
In the following statements, identify the variables.
Scales of Measurement
Measurement is the process of assigning labels to categories of variables. Categories of variables carry different properties, which are identified below. If one can only identify categories, then that variable is referred to as a nominal variable.
If the categories of a variable can be ranked, such as from highest to lowest or from most to least or from best to worst, then that variable is said to be ordinal.
If the categories can be ranked, and if they also represent equal intervals, then the variable is said to be interval. Equal interval means that the difference between two successive categories is the same. For example, temperature measured on the Fahrenheit scale has equal intervals; that is, the difference between temperatures of 30 and 31 degrees is 1 degree, and the difference between 100 and 101 degrees is 1 degree. No matter where on the scale that 1 degree is located, that 1 degree represents the same amount of heat. Similarly, when using a ruler to measure the length of something, the difference between 2 and 3 inches is 1 inch, and the difference between 10 and 11 inches is 1 inch -- no matter where on the ruler that 1 inch lies, it still represents the same amount of distance, so this indicates equal intervals. As another example, time in the abstract sense never ends or begins. Since time is measured precisely with equal intervals, such as one second, one minute, etc., it can be viewed as an interval measure in the abstract.
The last scale is ratio. This is just like interval, except that a variable on the ratio scale has a true zero point--a beginning or ending point. While time in the abstract (no ending or beginning) sense is interval, in practice time is a ratio scale of measurement since time is usually measured in lengths or spans which means time does have a starting or ending point. For example, when timing someone on a task, the length of time required to complete the task is a ratio measure since there was a starting (and ending) point in the measurement. One way to identify ratio variables is to determine whether one can appropriately make ratios from two measurements. For example, if I measure the time it takes me to read a passage, and I measure the length of time it takes you to read the same passage, we can construct a ratio of these two measures. If it took me 30 seconds and took you 60 seconds, it took you (60/30 = 2) twice as long to read it. One cannot form such mathematical comparisons with nominal, ordinal, or interval data. Note that the same can be done with counting variables. If I have 15 items in my pockets, and you have 5, I have three times as many items as you (15/5 = 3).
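The "can I form a meaningful ratio?" test is easy to demonstrate with the numbers used above (a small illustrative sketch; the readings are made up):

```python
# Ratio-scale data: a true zero point, so ratios are meaningful.
my_time, your_time = 30, 60          # seconds to read the same passage
print(your_time / my_time)           # 2.0 -> it took you twice as long

my_items, your_items = 15, 5         # items counted in two pockets
print(my_items / your_items)         # 3.0 -> three times as many items

# Interval-scale data: equal intervals but no true zero, so ratios are not meaningful.
temp_a, temp_b = 80, 40              # degrees Fahrenheit
print(temp_a / temp_b)               # 2.0, but 80 degrees is not "twice as hot" as 40 degrees
```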
For most purposes, especially in education, the distinction between interval and ratio is not important. In fact, it is difficult to find examples of interval or ratio variables in education.
Below is a table that specifies the criteria that distinguishes the four scales of measurement, and the following table provides examples for each scale.
|Nominal||categories|
|Ordinal||categories, rank|
|Interval||categories, rank, equal interval|
|Ratio||categories, rank, equal interval, true zero point|
|Nominal||types of flowers, sex, dropout/stay-in, vote/abstain|
|Ordinal||socioeconomic status (S.E.S.), Likert scales responses, class rank|
|Interval||time in abstract (see discussion above), temperature|
|Ratio||age, weight, height, time to complete a task|
Classification of Variables
In research it is often important to distinguish variables by the supposed or theoretical function they play. For example, if one states that a child's intelligence level influences the child's academic achievement in school, then the variable intelligence is thought to have some impact, some effect, on academic performance in school. In this example, intelligence is called the independent variable and academic achievement is the dependent variable. The logic here holds that achievement depends, to some degree, upon intelligence, hence it is called a dependent variable. Since intelligence does not depend upon achievement, intelligence in this example is referred to as the independent variable.
Here are two methods for identifying independent variables (IV) and dependent variables (DV). First, think in terms of chronological sequence--in terms of the time order. Which variable comes first, one's sex or one's achievement in school? Most would answer that one is born with a given sex (female or male), so it naturally precedes achievement in school. The variable that comes first in the time order is the IV and the variable that comes afterwards is the DV.
A second method for identifying the IVs and DVs is to ask yourself about the notion of causality. That is, if one does this with variable A, then what happens to variable B? For example, if one could increase intelligence, then an increase in achievement in school may result. But, if one increased achievement in school, would this have any logical impact on one's intelligence? In this example, intelligence is the IV because it can affect achievement in school, and achievement is the DV because it is unlikely to affect intelligence.
Alternative labels for IV are cause and predictor, and other labels for the DV are effect and criterion.
Often it can be difficult to properly identify whether a variable is nominal, ordinal, interval, or ratio. A simpler approach is to identify variables as either qualitative (or categorical) or quantitative (or continuous). A qualitative/categorical variable is one that has categories that are not ranked--i.e., a nominal variable. All other variables have categories that can be ranked, therefore the categories differ by degree. These variables are quantitative or continuous, and are represented by the ordinal, interval, and ratio scales.
For simplicity, variables that have only two categories, even if they can be ranked, will be referred to as qualitative variables since this will be important later when determining which statistical tests may be used for analysis.
Here is a practice exercise to help you distinguish between IV and DVs. Using the same practice exercise for IVs and DVs, also determine whether each IV and DV is qualitative or quantitative. To make these determinations, sometimes there will not be enough information about the measurement process--how the variables were actually measured. In these cases, it is important to consider the variable carefully to determine if the variable logically has ranked categories or not. If it appears to have ranked categories, then classify the variable as quantitative. See illustrated examples in the practice exercise for further clarification of this issue.
Dr. Jacqueline McLaughlin, Assistant Professor of Biology, and Jane S. Noel, Instructional Development Specialist, of Penn State University provide useful information on variables.
Wikipedia entry on scales of measurement (note IQ is identified as interval here; this entry is questionable).
Ronald Mayer of San Francisco State University also discusses measurement.
Copyright 2000, Bryan W. Griffin
Last revised on | http://www.bwgriffin.com/gsu/courses/edur7130/content/variables_and_scales_of_measurement.htm | 13 |
246 | |Prof. Lucifer Gorganzola Butts|
mixes a drink with his
Rube Goldberg (1920)
® and © by Rube Goldberg, Inc.
Used by permission of RubeGoldberg.com.
Physics 101 For Perpetual Motion
By Donald E. Simanek
Here's a quick review of some of the elementary
physics principles most often misapplied.
This compilation is obviously not complete. Parts of this may be redundant
and even repetitious, but repetition often helps impress important points
on the mind. This is a checklist/reminder of things you need to know
thoroughly before embarking on analysis of any machine or mechanical system.
Consult a good elementary physics book for more details, and for examples.
Several readers have told me they used this with success as a review sheet for their final exam in first semester introductory physics.
This document began its life as a review of basic physics for perpetual motion
machine inventors, who often misapply elementary physics. For that reason it
emphasizes those mechanics principles applicable to machines. However, these
are basic to all of physics, and if not properly understood, can adversely
affect understanding of everything else in physics.
All of the principles reviewed here are based on the assumption that the analysis is being done in an inertial (non-accelerating) coordinate system.
More advanced physics courses extend or refine these principles using
calculus concepts, new formalisms, and more general coordinate systems.
Vector quantities are shown in boldface font.
Mathematics of scalars and vectors
- A scalar is a physical quantity whose definition does not in any
way depend on direction in space.
Scalars include time, mass, volume, temperature,
density and others. The size of a scalar quantity is represented as a number.
In physical equations, scalars obey the algebra of numbers.
- A vector is a physical quantity dependent on direction in space.
A vector has both size and direction, and both must be
specified to uniquely characterize the vector.
Vector quantities include displacement, velocity, acceleration, momentum,
angular momentum and others.
Since surface area has spatial orientation,
it can often be treated as a vector. Direction matters with vectors.
Two vectors are said to be equal only if their sizes
and directions are the same.
|Sum of three vectors.|
- Vectors in physical equations obey an algebra quite different from
that of ordinary scalar algebra. One useful and intuitive pictorial
representation of a vector
is an arrow. The length of the arrow represents the vector's size, and the
direction of the arrow represents the vector's direction in space.
With this representation we may illustrate relations between vectors.
- The size (or magnitude) of a vector quantity is a
scalar, i.e., a number.
The size of vector V is symbolized |V|.
(Note that the V is boldface, since it is a vector, but the
lightface "absolute value" brackets, | |,
indicate that the entire quantity is a scalar. I.e., the symbols tell us to take the
absolute value of a vector, which gives a scalar result.) Alternatively
we may simply write the vector's letter symbol lightface to indicate that it represents only the size of the vector.
- The sum of two vectors may be found by geometrically placing the
arrows representing the vectors head-to tail. The sum may then be
found by drawing a new vector from the free tail to the free head.
- The difference between two vectors A - B is found by adding
the vector A and the vector -B where -B
is a vector of the same size as B but opposite in direction.
- The projection of a vector onto a line is V cos θ,
where θ is the angle between
the vector and the line and V is the size of the vector.
This may be found geometrically by
constructing lines perpendicular to the reference line and from the head and
tail of the vector. The length along the reference line lying between the
construction lines is the "projection of the vector along that line".
|Component of a vector on a line.|
- Components of vectors. When dealing with vectors algebraically
it's useful to represent a vector by its components. The component of a vector
is the projection of that vector onto a chosen coordinate axis.
The component of a vector is usually treated as a scalar quantity.
While the coordinate axis can be
any line, we customarily use the axes of a Cartesian coordinate system,
specifying the x, y, and z components of the vector.
When this is done, any vector in the space is uniquely defined by specifying
the coordinate axes, and the vector's components along those axes, (x,y,z).
Sometimes it's even useful to have a set of coordinate axes
that are not orthogonal (perpendicular), so long as the axes "span the space". Coordinate axes span the space if we are able to uniquely represent any vector in the space by its components along those axes.
This requirement is met if the three axes do not all
lie in the same plane and no two are collinear.
- The physical effect of a vector quantity (for example, a force) nearly always depends on its line of action, that is, where it acts on a body. But for the process of summing vectors, the vectors may be treated as free vectors, that is, they may be freely moved parallel to their original position, maintaining their size and direction.
|Components on Cartesian axes.|
- Usually vectors' tails don't happen to lie on a coordinate axis. In that case
a vector's components are found by the following procedure.
Drop two perpendiculars to the axis, from the vector's head and
from its tail. Then the vector's component on that axis is the length between the feet of these perpendiculars. The illustration shows vectors lying
in an x,y plane, but the same principle is used in three dimensions.
- The components of vectors are signed scalar numbers (they may
have positive or negative sign). Subtract the tail projection
value from the head projection value to get that signed number.
- If the components of two vectors are (x1,y1,z1)
and (x2,y2,z2), then the components of the
sum of these vectors are
(x1+x2, y1+y2, z1+z2).
- The product of a vector and a scalar, Vs, is a vector of size
|V|s. It has the same direction as V.
- Two kinds of vector products are useful in physics, and these will
be defined later when we need them.
- In elementary courses there's no definition that allows dividing anything by a vector quantity.
- Vs = sV (Commutative law for multiplication by a scalar.)
- (A + B + C)s = As + Bs + Cs (Distributive law.)
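These component rules can be checked with a few lines of code. The sketch below is only an illustration; the particular vectors and the scalar are arbitrary made-up values.

```python
# Minimal sketch of the vector rules above, representing vectors as (x, y, z) tuples.
import math

def add(a, b):
    # component-wise sum: (x1 + x2, y1 + y2, z1 + z2)
    return tuple(ai + bi for ai, bi in zip(a, b))

def scale(v, s):
    # product of a vector and a scalar: a vector along v with size |v|*s (for s > 0)
    return tuple(s * vi for vi in v)

def magnitude(v):
    # the size |v| of a vector is a scalar
    return math.sqrt(sum(vi * vi for vi in v))

A, B, C = (1.0, 2.0, 2.0), (3.0, 0.0, 4.0), (-1.0, 1.0, 0.0)
s = 2.0

# distributive law: (A + B + C)s = As + Bs + Cs
left = scale(add(add(A, B), C), s)
right = add(add(scale(A, s), scale(B, s)), scale(C, s))
print(left == right)      # True
print(magnitude(A))       # 3.0
```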
Kinematics is the geometry of motion. Kinematics describes motion, without
using Newton's laws, and without using the concepts of force and mass.
- Physical quantities of different kinds are distinguished by
unit and dimension labels. These two terms are not synonyms.
See my glossary of physics terms.
Quantities with different dimensions are never added.
Quantities with different units are never added.
Vector and scalar quantities cannot be added together.
The dimensions of length, mass and time are designated L, M, and T.
The metric units of length, mass and time are the meter, kilogram and second in the MKS system, and the centimeter, gram and second in the cgs system. They are usually abbreviated m, kg, s, and cm, g, s.
- Fundamental measurables are those that are not defined by
equations, but are defined by specifying an operation (a measurement procedure) to determine
their size. These are scalar quantities. Length,
mass, and time are the three fundamental measurables of mechanics. In the
study of thermal physics, optics and electricity, additional fundamental
measurements are used.
- Displacement is the vector representing the relative position
of two points in space.
Its size is the distance between the points; its direction
is the direction of the line segment joining the points. When we speak
of displacement of a body during a time interval, the displacement is a vector drawn
from the earlier (initial) position to the later (final) position of that body.
- The velocity of a body is defined as v = Δx/Δt, where Δx
is the displacement vector during a very small time interval
Δt. The calculus definition of velocity is v = limΔt→0(Δx/Δt).
Remember Δx is a vector.
- The acceleration of a body is defined as
a = Δv/Δt, where Δv
is the change in velocity during a very small time interval
Δt. The calculus definition of acceleration is a = limΔt→0(Δv/Δt).
Remember, Δv is a vector.
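As a numerical illustration of these finite-difference definitions, velocity and acceleration can be estimated from sampled positions. The positions below are made-up values corresponding to x = 4.9t² (free fall from rest); this is only a sketch.

```python
# Minimal sketch (hypothetical numbers): estimate velocity and acceleration
# from sampled positions using the finite-difference definitions above.
dt = 0.1                                  # time step in seconds
x = [0.0, 0.049, 0.196, 0.441, 0.784]     # positions (m) of a body, here x = 4.9 t^2

# v ~ dx/dt between successive position samples
v = [(x[i + 1] - x[i]) / dt for i in range(len(x) - 1)]

# a ~ dv/dt between successive velocity estimates
a = [(v[i + 1] - v[i]) / dt for i in range(len(v) - 1)]

print("velocities:", v)     # 0.49, 1.47, 2.45, 3.43 -- growing roughly as 9.8 t
print("accelerations:", a)  # approximately constant 9.8 m/s^2
```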
Statics considers non-moving systems, where the net force
on each and every part of the system is zero and the net
torque on each and every part of the system is zero. Many of the
results of statics can also be applied to systems in which
every part moves at the same constant velocity. But systems
in which any part rotates do not qualify as static systems,
and we cannot apply the laws for static systems to them.
- An inertial reference frame is a coordinate system that is not accelerating.
- A static system is one in which all parts of a system are at
rest relative to an inertial frame of
reference. Statics is the physics of such systems.
|Finding the torque due to a force on a body.|
- Torque expresses the ability of force to cause rotation of
a body about a chosen particular axis of rotation.
While torque is a vector quantity, it may be treated as a signed scalar
when all the forces of concern lie in a common plane, as is the case with
rotating wheels. The figure shows such a body, of arbitrary shape.
We wish to know the effect of force F
to cause rotation about the axis marked
× labeled "Center of torques".
P is the point of application of force on the body.
Extend a "line of action" along the force vector far enough to allow
drawing a line perpendicular to it and passing through the center of
torques. The length of this last line (L) is called the "lever arm" of the
force. The size of the torque about this center of torques is then
FL. If the force is such that it alone would rotate the body clockwise
around the center of torques, then we assign it a negative sign. If it alone
would rotate the body counter-clockwise, we assign it a positive sign.
- The torque concept is useful in both static and dynamic systems, but
in static systems nothing rotates, so the net torque on each part of
the system is zero.
- When the forces on a body do not lie in a single plane,
we must treat torque as a vector. The definition is as above in the
plane that contains the vector and the chosen center of torques.
The vector representing the torque is along a line perpendicular to that
plane and passing through the center of torques. If, as you look at this
plane, the torque would produce clockwise rotation, then the torque vector
is pointing away from you. If the torque would produce counter-clockwise
rotation, the torque vector points toward you. The right hand rule
is a useful mnemonic for getting this right. Curl your fingers around the
line of the torque, with your fingers curling in the direction of the
rotation, then your thumb points in the direction of the torque.
- When a body is at rest, the net force on the body is zero and
the net torque on the body is also zero. Furthermore, the net torque
on the body is zero no matter what center of torques is chosen. That
simply means that the body does not rotate about any axis. It
isn't rotating at all.
- Putting this another way: If the net torque on a body is zero about
an axis (and the net force is also zero, as it is here), it will also be zero about any other chosen axis,
even an axis that does not pass through the body.
- The center of mass of an extended body is an imaginary
(but useful) point: the mass-weighted average position of all the material in the body.
Any plane drawn through this point divides the body into two parts whose
mass moments about that plane are equal, so the body balances about any axis
through that point.
- The center of gravity of an extended body is the corresponding point for weight:
any plane drawn through it divides the body into two parts whose weight
moments about that plane are equal in a gravitational field.
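To tie the statics statements together, here is a small numerical sketch of a rigid beam in equilibrium. The loads are made-up numbers, and the sign convention is the one given above: counter-clockwise torques positive, clockwise negative.

```python
# Minimal sketch (hypothetical numbers): a rigid, massless beam resting on a
# pivot at x = 0, carrying two downward loads.  Check that the net force and
# the net torque about the pivot are both zero.

# (x position in m, vertical force in N; negative = downward)
forces = [(-2.0, -30.0),   # 30 N load 2 m left of the pivot
          (+3.0, -20.0),   # 20 N load 3 m right of the pivot
          ( 0.0, +50.0)]   # upward support force at the pivot

net_force  = sum(f for _, f in forces)
net_torque = sum(x * f for x, f in forces)   # signed lever arm times force
print(net_force, net_torque)                 # 0.0 0.0 -> static equilibrium

# Because both sums are zero, the net torque is zero about any other center too,
# e.g. about the point x = 1 m:
print(sum((x - 1.0) * f for x, f in forces)) # 0.0
```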
Dynamics deals with Newton's laws and their consequences.
- Mass is an intrinsic property of a body. Gravitational mass is
measured by determining the gravitational force on a body relative to a
mass standard. Inertial mass is determined and defined by Newton's
second law (see below). So far as we know, the gravitational and inertial mass
of a body are the same size.
- An inertial frame of reference (coordinate system) is one that
isn't accelerating. That means the reference frame is either at rest
or moving with constant velocity. More specifically, an inertial frame is a system in which Newton's law F = ma applies, where
F is the sum of all real forces acting
on the body of mass m (see below).
- Dynamics is the study of systems in which some parts of the
system accelerate. Newton's F = ma applies
to each part of such systems.
- For our present purposes, we define real forces to be of two
kinds: (a) forces at points where bodies contact, the force being due to elastic deformation at the point or surface of contact,
(b) gravitational, electric, magnetic and nuclear forces that can
act on bodies that aren't in contact (acting "at a distance"). The term "real force"
does not include fictitious forces, which are a mathematical construct
that is useful when doing problems in accelerated coordinate systems. Fictitious forces include centrifugal and Coriolis forces.
- The net force on a body is the vector sum
of all real forces acting on that body.
This statement assumes the problem is being done in an inertial frame
of reference. A common error is to include forces acting on other bodies when taking the sum.
- When the net force on a body is zero: (1) A body at
rest will remain at rest. (2) A moving body continues to move with constant
speed in a straight line. This is Newton's first law.
- If F is the net real force on a body and
a is the acceleration of the center of mass of that body,
then F = ma. (This equation is equivalent to Newton's first and second laws in the special case where the mass doesn't change.)
- Newton's first two laws are embodied in this more general statement:
The rate of change of momentum of a body is proportional to the net force
acting on it. In equation form, F = d(mv)/dt. This is
a vector equation, and applies even in those cases where the mass may change.
- Newton's third law: If body A exerts a force on body B,
then B exerts an equal size and oppositely directed force on A. It can be written: FAB = -FBA.
Remember that forces are vectors. The two forces of Newton's third law
act on different bodies.
- To avoid errors in application of Newton's laws (above) it helps to
mentally "isolate each part of the system" then tally all real forces
acting on that part of the system (and only that part)
before applying Newton's F = ma to it.
Many forces acting on one part of the system are due to contact with other
parts of the system (or to "action at a distance" forces). Newton's third
law applies to them, but one must be careful not to add forces acting on other
parts when summing the forces on one part. Newton's laws apply to each part
of the system, any grouping of parts, or the entire system.
- When several parts of a system are grouped together as a single 'system'
for the purpose of applying Newton's second law,
the net force on that system
need not include any of the internal forces within the system,
since they sum to zero according to Newton's third law. Some books
write Newton's second law for a system of mass m
as Fnet, external = ma for emphasis.
- Friction acts in a direction to oppose slipping or sliding motion of
the bodies at their contact surface.
Forces due to friction are tangent to the contact surface where
the bodies touch.
Note: The short phrase "friction opposes motion" should not be
used for it can be misleading.
The force due to friction exerted by the floor on your feet acts in
the direction of your walking motion.
This force due to friction on your feet helps prevent
your feet slipping backward, as they might on ice.
The force due to friction of the pavement acting on your automobile's
wheels is forward, in the direction of the auto's motion.
Forces due to friction and forces due to rolling resistance
(see next item) are the only forces that
sustain your car's forward motion. Friction relates
to two surfaces either at rest or sliding. Rolling resistance
is quite another matter. See next item.
- Rolling resistance results from elastic or non-elastic deformations of
bodies in contact, even if they are not slipping or sliding.
To illustrate the difference
between rolling resistance and friction, consider this example. A ball or
cylinder rolls on an infinite horizontal plane. Will it roll forever? No. Why not?
Not simply because of friction (see the points below). Elastic forces associated with deformations at the contact surface
are not parallel to the plane, and they may not be
quite perpendicular to the plane. They can provide the torque in the correct
sense to slow the ball's rotation, provide the vertical force to balance
the weight, and provide a horizontal force to slow the forward motion of the center of mass.
- Not simply because of friction. Friction is, by definition, a force acting
tangent to the surface. Friction would act parallel to the plane in this situation.
- If such a tangential force acted opposite to the motion of the ball,
this would decrease the ball's forward velocity, but
it would have a torque that would be in a direction to increase
the angular velocity of the ball. This is contradictory.
- If a tangential force were in a direction to decrease the angular
velocity it would have a torque that would increase
the forward linear velocity. Likewise contradictory.
- Both friction and rolling resistance are usually present where two bodies
are in contact. We sometimes
carelessly combine their effects and call it "friction". Sometimes we can
get by with that without introducing errors, but we should be more careful.
See next item.
- Two bodies that are in frictionless contact and do not deform at the
points of contact exert forces on each
other that are normal (perpendicular) to the surfaces at the point of contact.
This also applies to frictionless rollers and ball bearings.
(However, if the surfaces deform,
there can be tangential force components of "rolling resistance"
even in the frictionless case.)
- All materials deform somewhat under load. Perfectly rigid bodies are
impossible in nature, for if they collided, reaction forces at the point of
contact would be infinite in size and of infinitesimal duration.
- When the net torque on a body is zero: (1) A body that is not rotating
will continue without rotation. (2) A body that is rotating will continue
rotating with the same angular speed.
- If the net (total) force on a body has zero component in one direction,
the body will not accelerate in that direction (and, if initially at rest, will not begin to move in that direction).
- If the vector sum of all forces and the sum of all torques acting
on a body is zero then that body is not accelerating.
- The work done by a force acting on a body is the product of the
force's component in the direction of motion
multiplied by the distance the body moves in that direction.
If a force is perpendicular to the direction
of motion, that force does no work on the body.
If a force acts on a body, and
there's no motion of the body, that force does no work.
Work is a scalar quantity.
- A closed system is one on which no outside forces act.
Since no system is perfectly closed we relax this a bit. If none of
the external forces acting on the system affect it significantly,
we may treat it as closed. If we can determine (measure or predict)
the effect of an
external force on the system we can separate that effect from the
others we are interested in or "correct" for it.
- It follows that a closed system does not take in or eject any mass. It has no input of
energy, no input work and does not do any net work on anything external to it.
Also, its net momentum and angular momentum remain constant. It is very difficult to prevent a machine from outputting energy in the form of thermal energy (colloquially called "heat"), so if it does emit thermal energy, it is not fully "closed".
- Usually when we use the term "component of a vector" we are talking
about a scalar, the projection of the vector onto a line. But sometimes
it's useful to speak of "vector components of a vector". The vector components
of a vector V are any set of vectors that sum to that
vector V. The vector components of a vector
V acting together are physically equivalent to the
single vector V. In motion problems with complicated
paths, it's useful to speak of a vector's radial and tangential components.
- The momentum of a body is the product of its mass and velocity,
and is therefore a vector quantity. P = mv.
- Conservation of momentum.
The net momentum of the parts of a closed system remains constant over time,
no matter what's going on within the system.
This follows from Newton's third law.
- The centripetal force acting
on a rotating body is simply the radial component of the net force acting
on that body. The radial component of a force is the vector component in the
direction of a line drawn from the body to the center of rotation.
The centripetal force
is not a "new" force to be added to the real forces when finding the net force.
Nor is it a fictitious force. It is a real vector component of real forces.
We are here assuming the analysis is being carried out in an inertial coordinate
system. In advanced courses one sometimes does such problems in a rotating
coordinate system. Such systems are non-inertial, so Newton's second law for
real forces does not apply. However, if one introduces "fictitious" forces such
as the centrifugal force, Coriolis force, etc. into the sum of forces, one
can use F = ma, with F including the
real and fictitious forces. This neat trick is
not recommended for beginners.
- All material bodies are at least somewhat deformable.
- No physical influence can propagate through space between two different places instantaneously.
(Nothing can be in different places at the same time.)
- Infinite forces are not allowed. In fact, infinite values are not allowed for any physical quantity.
- Information cannot travel instantaneously between two displaced points.
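As one illustration of the dynamics principles above (Newton's laws and the closed-system idea), the sketch below checks conservation of linear momentum for a closed two-body system in a perfectly inelastic collision. The masses and velocities are made-up numbers.

```python
# Minimal sketch (hypothetical numbers): conservation of linear momentum for a
# closed system of two bodies that collide and stick together.
m1, v1 = 2.0, +3.0    # kg, m/s
m2, v2 = 1.0, -1.0

p_before = m1 * v1 + m2 * v2          # net momentum of the system before the collision

# Perfectly inelastic collision: the bodies move off together.
v_after = p_before / (m1 + m2)        # momentum is conserved: p_after = p_before
p_after = (m1 + m2) * v_after

print(p_before, p_after, v_after)     # 5.0 5.0 1.666...
```

(Kinetic energy is not conserved here; the difference appears as thermal energy and deformation, which is why a sticking collision is called inelastic.)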
Some useful mathematics.
- The scalar product (sometimes called the "dot" product) of two vectors is defined by A•B = |A||B| cosθ where θ is the angle between A and B. As its name indicates, the scalar product is a scalar quantity. The dot product appears in the definition of work, W = F•x.
- (A + B + C)•V = A•V + B•V + C•V (Distributive law.)
|The geometry of the cross product.|
- The cross product of two vectors a and b is defined by |a×b| = |a||b| sinθ where θ is the angle between a and b. This gives the size of the cross product. The cross product is a vector quantity. Its direction is perpendicular to the plane of a and b in a direction given by the right hand rule: Curl the fingers of the right hand from a to b, then the cross product is in the direction your thumb points. The cross product appears in the definition of torque, τ = R×F.
In the diagram, a and b lie in a plane (of course), and a×b is a vector perpendicular to that plane. The size of the cross product, |a×b|, is the same as the size of the area of the shaded rectangle in the plane of a and b.
- A×B = - B×A (Anticommutative law!)
- (A + B + C)×V = A×V + B×V + C×V (Distributive law.) But the ordering of factors is important, for the cross product is anticommutative.
- Infinity is not a number, and should not appear as if it were a number in any valid physical equations. Never write ∞×0, for example. It is ok to label limiting processes in the form x→∞, but special care must be taken to interpret that properly.
- Newton's law of gravitation. Two spherical bodies of masses
m1 and m2 whose centers are
a distance R apart exert forces of attraction on each other of size
F = Gm1m2 / R2. G
is the universal gravitational constant.
- Gravitational potential energy is a convenient concept when dealing
with bodies or systems in a gravitational field. It allows one to treat the body
as "possessing" energy due to its position in the field, thereby avoiding the
need for including the gravitational source as part of the system.
The gravitational potential energy of a body at a certain position is just the
work that must be done on the body to move it to that position from some
fixed reference position. Gravitational potential is the potential energy of a body
divided by the mass of the body.
- The work done in moving a body from point A to B in a gravitational field
is independent of the path along which it moves between A and B.
The potential energy difference between those points is independent of the path.
It follows that the potential energy change around any closed loop path is zero.
These characterize the field as energy conservative.
- Forms of Energy. Energy can be present in many forms:
kinetic, thermal, nuclear, various kinds of potential energy and others.
But there are only two classes these fall into: kinetic and potential.
Kinetic energy is due to motion of mass. Potential energy is due to the
geometric arrangements of masses relative to the forces they exert upon each other.
- Conservation of Energy. When all
forms of energy are accounted for and measured,
the total energy of a closed system remains constant over time.
- Conservation of Linear Momentum. There is one form of linear momentum, mv. The vector sum of all momenta in the system is a conserved quantity.
- Conservation of Angular Momentum. There is one form of angular momentum, mr×v, where r is the vector distance of the mass m from a fixed point. The symbol "×" represents the vector cross product of the vector quantities it stands between. The vector sum of all angular momenta in the system is a conserved quantity.
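For concreteness, the scalar and vector products defined earlier in this section can be computed component by component. The vectors below are arbitrary illustrative values.

```python
# Minimal sketch of the scalar (dot) and vector (cross) products defined above.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)

# |a x b| = |a||b| sin(theta); the result is perpendicular to both a and b
print(cross(a, b))      # (0.0, 0.0, 1.0)
print(cross(b, a))      # (0.0, 0.0, -1.0) -- anticommutative: a x b = -(b x a)

# A . B = |A||B| cos(theta); e.g. work done by a force F over displacement x
F, x = (3.0, 4.0, 0.0), (2.0, 0.0, 0.0)
print(dot(F, x))        # 6.0 (joules, if F is in newtons and x in meters)
```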
In the following principles we use the phrase "connectable points" to mean
two points in the liquid that can be connected by an unbroken line
drawn between them that passes only through the liquid.
These principles apply only to static or quasi-static systems,
where the system has achieved static equilibrium.
Some of these principles (those that mention "level" or "height")
implicitly assume an experiment done on earth, or in
a gravitational field where these words have specific meaning with
respect to the direction of the field.
- Solids undisturbed preserve their shape. Liquids flow to an equilibrium condition that conforms to the
shape of their container. Solids have rigidity and elasticity, but can
be deformed by compression, stretching and shear. Most liquids
strongly resist being compressed, and to a first approximation may be
considered to maintain constant volume over a wide range of pressures.
- Liquids "seek their own level". This means that when they reach equilibrium their boundary is their container and their upper surface. Their upper surface is level. More precisely, any two connectable points on the air/liquid surface of a liquid at rest are at the same height.
- Liquid pressure is a scalar quantity. Liquid contacting a solid surface
is responsible for a force on that surface. The force is perpendicular to the
surface and of size F = PA where P
is the liquid pressure and A is the area of
the surface. The area must be small enough that the pressure is essentially
constant over the surface. This is, in fact, the definition of pressure,
which is, in calculus form, P = dF/dA.
- Pressure at a given point in a liquid exerts force on an infinitesimal
area at that point, which is the same size no matter what the orientation
of that area. This is often abbreviated in a potentially misleading slogan
"Pressure acts equally in all directions." And "all directions" includes upwards.
|A continuous line may be drawn through the liquid,|
connecting points A and B. Therefore the pressure
difference between these points is ρgH,
where ρ is the liquid density, g is the acceleration due to gravity, and H is the height difference between A and B.
- Any two connectable points in a liquid are at the same pressure
if they are at the same height.
- The pressure difference between two connectable points in a liquid is ρgH, where
ρ is the liquid density,
g is the acceleration due to gravity
and H is the height difference between the two points.
- Pascal's principle. If the pressure changes at one point in a
liquid, it changes by the same amount at all other points
connectable to that point. The change is from one quasi-static situation to a different quasi-static situation.
- Archimedes' principle. If a body is floating on or immersed in
a liquid, the liquid exerts a net upward "buoyant" force due to the
pressure differences on the body's surfaces. Horizontal forces due to pressure
add to zero, but vertical components don't because of height differences.
Using the principles above, one can derive a simple result: The buoyant
force on the object is equal to the weight of the liquid the object has
displaced. Warning: There must be liquid underneath the body for there to
be an upward buoyant force on it. The buoyant force is due to pressure
differences; it isn't the result of any mythical "desire" of the water displaced
(pushed aside) to return to its original place.
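A short numerical sketch of the liquid-statics relations above follows. The density, height difference, and displaced volume are made-up round numbers.

```python
# Minimal sketch (hypothetical numbers) of the liquid-statics relations above.
rho_water = 1000.0   # liquid density, kg/m^3
g = 9.8              # acceleration due to gravity, m/s^2

# Pressure difference between two connectable points separated in height by H:
H = 2.5              # m
delta_P = rho_water * g * H
print(delta_P)       # 24500 Pa -- the lower point is at the higher pressure

# Archimedes' principle: buoyant force = weight of the displaced liquid.
V_displaced = 0.004  # m^3 of water pushed aside by a submerged object
buoyant_force = rho_water * g * V_displaced
print(buoyant_force) # 39.2 N, directed upward
```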
Those are some of the elementary principles of mechanics.
Stick to these, and you won't go wrong.
Most mistakes students make are due to forgetting the
precise statement and qualifications of some of these principles.
If I have misrepresented standard classical physics in any of the above, I would appreciate being informed of it so I can correct it.
Latest revision November, 2012.
Return to front page.
Return to the top of this document.
Return to The Museum's Main Gallery. | http://www.lhup.edu/~DSIMANEK/museum/phys101.htm | 13 |
108 | From Math Images
|Real Life Parabolas|
Basic Description
Two methods for finding parabolic area exist. One is exact and the other is an approximation method. The approximation procedure is known as the Rectangle Method and works by finding the areas of rectangles that fit in the parabola. The areas of these rectangles are added together, giving you the approximate area under or above the parabola.
For a detailed overview of parabolas, see the page, Parabola. However, we will provide a brief summary and description of parabolas below before explaining how to find the area beneath or above one.
You may recall first learning about parabolas and your teacher telling you that it is a curve in the shape of a "u" and can be oriented to open upwards, downwards, sideways, or diagonally. To be a little more mathematical, a parabola is a conic section formed by the intersection of a cone and a plane. Below is an image illustrating this.
When you were first introduced to parabolas, you learned that the quadratic equation in vertex form, y = a(x - h)² + k, is its algebraic representation (where h and k are the coordinates of the vertex and x and y are the coordinates of an arbitrary point on the parabola).
As you progressed in mathematics, you learned how to find the area of the space enclosed by the parabola. This can be accomplished in two different ways:
- Using Definite Integration
- Using the Rectangle method (Also referred to as finding the Riemann Sum)
We will explain both of these approaches by posing a problem and then solving it step by step. But first we are going to familiarize you with some parabolic architecture and occurrences found in the real world.
A More Mathematical Explanation
Integral Approach to Determining Area
Typically, when attempting to find the area underneath a parabola, we take its integral. Below is a proposed problem with a numbered procedure of individual steps for completion:
Find the area under the curve between and and the .
1.Graphing the function first will help you to visualize the curve. Below I have graphed the function using the mathematical program Derive, but you can easily graph it either using your calculator or by hand.
2.Now we will algebraically evaluate the expression by taking its integral; doing so will give us an EXACT area. Integration is shown and calculated by:
3. So for this particular problem, our bounds ( and ) will be and respectively, and our integral will look like:
4. Now we will integrate the function and then substitute the bounds for as follows
5. So, the area under the curve between and is .
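The specific curve and bounds in the worked example above are not shown here, so the sketch below uses a stand-in parabola, y = x² on [0, 2], to run through the same definite-integration procedure. It uses the sympy library, though hand integration or any computer algebra system works equally well.

```python
# Minimal sketch: exact area under a (stand-in) parabola by definite integration.
import sympy as sp

x = sp.symbols('x')
f = x**2                                   # hypothetical curve, not the one above
area = sp.integrate(f, (x, 0, 2))          # definite integral from x = 0 to x = 2

print(area)   # 8/3 -- the exact area between the curve and the x-axis
```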
Determining Area Using the Riemann Sum
Another procedure exists for finding the area under a curve, but it gives an approximation, not an exact value like the definite integral. This method is known as the Rectangle Method, more widely known as the Riemann Sum. To compute the Riemann sum, you divide the space enclosed by the parabola into rectangles and add together their areas. This method was used before integration was developed, mostly by the ancient Greeks. As we did above with definite integration, we will propose a problem accompanied by the procedure used for solving it.
Find the area under the curve between and for using the rectangle method.
1. To find the width of each of the 10 rectangles ( is the number of rectangles used to find the area under the curve) we will use the formula,
where and are the boundaries and is the change in values
2.So for our particular curve, the above formula will look like this:
3.Therefore, the width of each of the 10 rectangles will be 0.2 units
4.Now that we have determined the width of each of the rectangles, we need to find each rectangle's individual height. To do this, we will use the equation of the curve:
5. Using the values above which represent the heights of the 10 rectangles under the parabola, we next multiply each height by the width determined earlier, . By doing this we are finding the area of each individual rectangle,
6. To determine the total area, add together the areas of the ten rectangles. When this is done, we have an area of . (This is the approximate area under the parabola.)
The area using this method is slightly greater than the area determined using definite integration (only by approximately ). We will explain why this is so in the following section.
Integral Approach v. Rectangle Method
We have now used two different methods to determine the area enclosed by a parabola. The first, Integration, resulted in an EXACT area, whereas the second, The Rectangle Method, provided only an approximation, meaning that the value could be larger or smaller than the actual area found using definite integration.
With this specific problem, we got two numerically very close values for the area of the parabola. Using definite integration, we found an area of , but with the rectangle method, an area of . If you look back to the image of the parabola with the ten green rectangles, you will notice that the tops of the rectangles extend over the top and sides of the parabola. This accounts for the larger area value and is known as the left Riemann Sum. If the rectangles had instead fallen short of the curve, we would call this taking the right Riemann Sum.
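The comparison can also be made numerically. The sketch below again uses the stand-in parabola y = x² on [0, 2] with 10 rectangles; note that for an increasing curve, heights taken at the left edges undershoot the exact area, while heights taken at the right edges overshoot it.

```python
# Minimal sketch: a left Riemann sum with 10 rectangles vs. the exact integral,
# using the hypothetical parabola y = x^2 on [0, 2].
def f(x):
    return x ** 2

a, b, n = 0.0, 2.0, 10
dx = (b - a) / n                                  # width of each rectangle

# height of each rectangle taken at the left edge of its sub-interval
left_sum = sum(f(a + i * dx) * dx for i in range(n))

exact = (b ** 3 - a ** 3) / 3                     # antiderivative x^3/3 at the bounds
print(left_sum, exact)   # 2.28 vs 2.666... ; the gap shrinks as n grows
```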
Real World Application
Within this section we are going to briefly introduce you to the magnificent Golden Gate Bridge (see the main image of this page) and then derive an equation that will represent the main parabolic suspension cable. Lastly, we will find the area under this suspension cable using integration.
The Golden Gate Bridge, located in San Francisco, California is a very well known parabolic suspension bridge, being one of the longest suspension bridges in America as well as a modern wonder of the world. (The longest suspension bridge in America as well as in the entire Western hemisphere is the Mackinac Bridge located in Michigan. It spans 26,372 ft, almost 5 miles!) The Golden Gate spans the San Francisco Bay into the Pacific Ocean and was completed in 1937. Below we have included dimensions of the bridge that will be useful in determining the parabolic equation of its main suspension cables:
- The length of the main span is 4,200 ft (1,280 m)
- The height of the towers from the roadway to the cable is 500 ft (152 m)
Using these measurements, we are able to find that the equation for the parabolic main suspension cable is:
y = x²/8820 ≈ 0.000113x²
This may seem a little overwhelming in the sense that I haven't provided a step by step procedure for this equation, but we will do so in order to ensure you understand!
Okay, there are two possible forms for writing a parabolic equation. The first is standard form and the second is vertex form. We are going to choose to use the vertex form and then later convert it into standard form. For this form we are going to need two sets of points, a vertex and another point on the parabola. So, we know the vertex form of a parabola is represented by y = a(x - h)² + k, where h and k are the coordinates of the vertex and x and y are the coordinates of the other point.
So now you are probably wondering what our vertex here could be, right? Well, we are making the vertex (0, 0) and the other known point (2100, 500). We want to first direct your attention to the image pictured below: it shows the Golden Gate Bridge and the measurements we are using in our vertex form equation. As you can see, the length of the main suspension span is 4,200 ft and the height of the tower is 500 ft.
For easier calculations it is best to divide the parabola in half symmetrically so that the bounds for taking the integral will be 0 and 2,100. This way we can simply multiply the result by 2 in order to account for the other portion. Doing this, we can make a vertex at (0, 0) and choose our point of intersection to be the top of the tower, represented by (2100, 500). Now we have the two coordinate sets needed to plug into our vertex form equation. Shown below is the equation with the vertex substituted for h and k:
y = a(x - 0)² + 0 = ax²
Now we are going to substitute the other point (2100, 500) for x and y in the equation above, giving us:
500 = a(2100)², so a = 500/4,410,000 = 1/8820
Now that we have found the value of a we can substitute it into the equation y = ax². Doing this will get us our desired parabolic equation:
y = x²/8820 ≈ 0.000113x²
The next step is to find the area under this curve and then multiply it by 2 to account for the other side of the parabola. To do this we are going to take the definite integral with a lower bound of 0 and an upper bound of 2,100. As you will notice, we have changed the equation into standard form, y ≈ 0.000113x². This explains why we have used the ≈ sign for approximation.
Next, rewrite using the upper and lower bounds after taking the integral:
Area = [x³/(3 × 8820)] evaluated from x = 0 to x = 2,100
Substitute the bounds into the already taken integral:
Area = (2,100)³/26,460 - 0 = 350,000
Multiply by 2 to account for the other side of the parabola:
2 × 350,000 = 700,000 square feet
Now we know that the area under the main parabolic suspension cable of the Golden Gate bridge is approximately 700,000 square feet. Remember, this is the area from the road, not from the water.
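The figures above can be re-derived from the stated dimensions with a few lines of code. This sketch assumes, as in the derivation, that the cable's low point sits at the origin midway between the towers and that the cable is exactly parabolic.

```python
# Minimal sketch: recompute the Golden Gate figures from the stated dimensions
# (4,200 ft main span, towers 500 ft above the roadway).
half_span = 4200 / 2          # 2,100 ft from the cable's low point to a tower
tower_height = 500            # ft above the roadway

a = tower_height / half_span ** 2          # 500 = a * 2100^2  ->  a = 1/8820
area_half = a * half_span ** 3 / 3         # integral of a*x^2 from 0 to 2,100
area_total = 2 * area_half                 # both sides of the parabola

print(a)            # ~0.0001134 (exactly 1/8820)
print(area_half)    # 350,000 square feet
print(area_total)   # 700,000 square feet
```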
About the Bridge. Retrieved from http://www.mackinacbridge.org/about-the-bridge-8/
Books, P. "What Is the Difference Between a Parabola and a Catenary? (2011, April 27). Retrieved from http://journaleducation.net/classroom/mathematics/difference-parabola-catenary-3065.html
Bridge Design and Construction Statistics. Retrieved from http://www.goldengatebridge.org/research/factsGGBDesign.php
Bourne, M. The Area under a Curve. Retrieved from http://www.intmath.com/integration/3-area-under-curve.php
Calvert, J.B. Parabola. (2002, May 3). Retrieved from http://mysite.du.edu/~jcalvert/math/parabola.htm
Glydon, N. Suspension Bridges. Retrieved from http://mathcentral.uregina.ca/beyond/articles/Architecture/Bridges.html
Golden Gate Bridge. Retrieved from http://en.wikipedia.org/wiki/Golden_Gate_Bridge
Hague, C.H. "What shapes do roller coaster hills/dips follow? (parabolas? circles? etc.). (2003, July 23). Retrieved from http://www.madsci.org/posts/archives/2003-07/1059246836.Eg.r.html
Ian. Applications of Parabolas. (2000, October 24). Retrieved from http://mathforum.org/library/drmath/view/53224.html
Kukel, P. "Hanging With Galileo. Retrieved from http://whistleralley.com/hanging/hanging.htm
Parabolic Reflectors and the Ideal Flashlight. Retrieved from http://www.maplesoft.com/applications/view.aspx?SID=5523
Sandborg, D. Physics of Roller Coasters. Retrieved from http://cec.chebucto.org/Co-Phys.html
Suspension Bridges. Retrieved from http://carondelet.net/Family/Math/03210/page4.htm
Williams, S. Types of Parabolic Bridges. (2010, December 31) Retrieved from http://www.ehow.com/list_7712164_types-parabolic-bridges.html
Future Directions for this Page
The possibilities for further exploration and addition to this page are bountiful. Developing a section explaining the difference between a parabola and a catenary would be fantastic, as so often people confuse the two curves. Also, finding the area between the suspension cable and the road by use of the Riemann Sum would be very beneficial!
57 | The t-test and analysis of variance (ANOVA) are widely used statistical methods to compare group means. For example, the independent sample t-test enables you to compare annual personal income between rural and urban areas and examine the difference in the grade point average (GPA) between male and female students. Using the paired t-test, you can also compare the change in outcomes before and after a treatment is applied.
In a t-test, the mean of a variable to be compared should be substantively interpretable. Technically, the left-hand side (LHS) variable to be tested should be interval or ratio scaled (continuous), whereas the right-hand side (RHS) variable should be binary (categorical). The t-test can also compare the proportions of binary variables. The mean of a binary variable is the proportion or percentage of success of the variable. When a sample size is large, the t-test and z-test for comparing proportions produce almost the same answer.
T-tests assume random sampling and population normality. When two samples have the same population variance, the independent samples t-test uses the pooled variance in the denominator. Otherwise, the individual sample variances are used in the denominator and the degrees of freedom must be approximated (e.g., with Satterthwaite's approximation).
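To see the practical difference between the pooled-variance and approximate-degrees-of-freedom versions, here is a small sketch using Python's scipy.stats. The two samples are hypothetical numbers, not data analyzed elsewhere in this document.

```python
# Minimal sketch (hypothetical data): pooled-variance vs. unequal-variance
# (Satterthwaite/Welch) independent sample t-tests.
from scipy import stats

rural = [21.0, 23.5, 19.8, 25.1, 22.4, 20.9]   # e.g. income in $1000s
urban = [26.2, 31.0, 28.7, 35.4, 27.9, 30.3]

t_pooled, p_pooled = stats.ttest_ind(rural, urban, equal_var=True)
t_welch,  p_welch  = stats.ttest_ind(rural, urban, equal_var=False)

print(t_pooled, p_pooled)   # assumes equal population variances (pooled variance)
print(t_welch,  p_welch)    # uses individual variances and approximate df
```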
T-tests assume that samples are randomly drawn from normally distributed populations with unknown population variances. The variables of interest should be random variables, whose values change randomly. A constant such as the number of parents of a person is not a random variable. In addition, the occurrence of one measurement in a variable should be independent of the occurrence of others. In other words, the occurrence of one event does not change the probability that other events occur. This property is called statistical independence. Time series data are likely to be statistically dependent because they are often autocorrelated.
T-tests assume random sampling without any selection bias. If a researcher intentionally selects some samples with properties that he prefers and then compares them with other samples, his inferences based on this non-random sampling are neither reliable nor generalizable. In an experiment, a subject should be randomly assigned to either the control or treated group so that the two groups do not have any systematic difference except for the treatment applied. When subjects can decide whether to participate or not (non-random assignment), however, the independent sample t-test may under- or over-estimate the difference between the control and treated groups. In this case of self-selection, propensity score matching and the treatment effect model may produce robust and reliable estimates of the mean differences.
Another key assumption is population normality. If this assumption is violated, a sample mean is no longer the best measure (unbiased estimator) of central tendency and t-tests will not be valid. Figure 1 illustrates the standard normal probability distribution on the left and a bimodal distribution on the right. Even if the two distributions have the same mean and variance, we cannot say much about their mean difference.
Figure 1. Comparing the Standard Normal and a Bimodal Probability Distributions
The violation of normality becomes more problematic in the one-tailed test than the two-tailed one (Hildebrand et al. 2005: 329). Figure 2 shows how the violation influences statistical inferences. The red curve indicates the standard normal probability distribution with its 1 percent one-tailed rejection area on the left. The blue one is for a non-normal distribution with the blue 1 percent rejection area. The test statistic indicated by a vertical green line falls in the rejection area of the skewed non-normal distribution but not in the red shaded area of the standard normal distribution. If the populations actually follow such a non-normal distribution, a one-tailed t-test based on the normality assumption will mistakenly fail to reject a null hypothesis that should be rejected.
Figure 2. Inferential Fallacy When the Normality Assumption Is Violated
Due to the Central Limit Theorem, the normality assumption is not as problematic as imagined in the real world. The Theorem says that the distribution of a sample mean (e.g., the mean of sample 1 or of sample 2) is approximately normal when its sample size is sufficiently large. When n1 + n2 >= 30, in practice, you do not need to worry too much about the normality assumption.
When a sample size is small and normality is questionable, you might draw a histogram, P-P plot, and Q-Q plots or conduct the Shapiro-Wilk W (N<=2000), Shapiro-Francia W (N<=5000), Kolmogorov-Smirnov D (N>2000), and Jarque-Bera tests. If the normality assumption is violated, you might try such nonparametric methods as the Kolmogorov-Smirnov test, Kruscal-Wallis test, or Wilcoxon Rank-Sum Test, depending on the circumstances.
T-tests can be conducted on a one sample, paired samples, and independent samples. The one sample t-test checks if the population mean is different from a hypothesized value (oftentimes zero). If two samples are taken from different populations and their elements are not paired, the independent sample t-test compares the means of two samples (e.g., GPA of male and female students). In paired samples, individual differences of matched pairs (e.g., pre and post measurements) are examined.
While the independent sample t-test is limited to comparing the means of two groups, the one-way ANOVA (Analysis of Variance) can compare more than two groups. Therefore, the t-test is considered a special case of the one-way ANOVA. These analyses do not, however, necessarily imply any causal relationship between the left-hand and right-hand side variables. The F statistic of the one-way ANOVA equals t squared (t²) when the numerator degrees of freedom is one, that is, when only two groups are compared (see the short example after Table 1). Whether data are balanced or not does not matter in the t-test and one-way ANOVA. Table 1 compares the independent sample t-test and one-way ANOVA.
Table 1. Comparison between the Independent Sample T-test and One-way ANOVA
|Independent Sample T-test||One-way ANOVA|
|LHS (Dependent)||Interval or ratio variable||Interval or ratio variable|
|RHS (Independent)||Binary variable||Categorical variable|
|Null Hypothesis||mu1=mu2||mu1=mu2=mu3 ...|
|Probability Distribution||T Distribution||F distribution|
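The F = t² equivalence noted above is easy to verify numerically; the sketch below uses hypothetical GPA-like values for two groups.

```python
# Minimal sketch (hypothetical data): with two groups, the one-way ANOVA's F
# statistic equals the square of the independent sample t statistic.
from scipy import stats

male   = [2.8, 3.1, 3.4, 2.9, 3.6, 3.0]   # e.g. GPAs
female = [3.2, 3.5, 3.7, 3.1, 3.8, 3.3]

t, p_t = stats.ttest_ind(male, female, equal_var=True)
F, p_F = stats.f_oneway(male, female)

print(t ** 2, F)    # the two statistics agree
print(p_t, p_F)     # and so do the p-values
```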
Stata has the .ttest (or .ttesti) command to conduct t-tests. The .anova and .oneway commands perform the one-way ANOVA. Stata also has the .prtest (or .prtesti) command to compare the proportions of binary variables. The .ttesti and .prtesti commands are useful when only aggregated information (i.e., the number of observations, means or proportions, and standard deviations) is available.
In SAS, the TTEST procedure conducts various t-tests and the UNIVARIATE and MEANS procedures have options for the one sample t-test. SAS also has the ANOVA, GLM, and MIXED procedures for ANOVA. The ANOVA procedure can handle balanced data only, while the GLM and MIXED can analyze both balanced and unbalanced data. However, unbalanced data having different numbers of observations across groups does not cause any problem in t-tests and the one-way ANOVA.
SPSS has T-TEST, ONEWAY, GLM (or UNIANOVA), and MIXED commands for t-tests and one-way ANOVA. Table 2 summarizes related Stata commands, SAS procedures, and SPSS commands.
Table 2. Related Procedures and Commands in Stata, SAS, and SPSS
|STATA 10 SE||SAS 9.1||SPSS 15|
|Normality Test||.swilk;. sfrancia||UNIVARIATE||EXAMINE|
|Equal Variance Test||.oneway||TTEST||T-TEST|
|Nonparametric Method||.ksmirnov; .kwallis||NPAR1WAY||NPARTESTS|
|Comparing Means (T-test)||.ttest; .ttesti||TTEST; MEANS; ANOVA||T-TEST|
|GLM*||GLM; MIXED||GLM; MIXED|
|Comparing Proportions||.prtest; prtesti||(point-and-click)|
Figure 3 contrasts two types of data arrangement for t-tests. The first data arrangement has a variable to be tested and a grouping variable to classify groups (0 or 1). The second, appropriate especially for paired samples, has two variables to be tested. The two variables in this type are not, however, necessarily paired.
SAS and SPSS require the first data arrangement for the independent sample t-test, whereas Stata can handle both types flexibly. The second arrangement is required for the paired sample t-test in these software packages. Notice that the numbers of observations across groups are not necessarily equal (balanced).
Figure 3. Two Types of Data Arrangement
|Data Arrangement I||Data Arrangement II|
The data set used here is adopted from J. F. Fraumeniís study on cigarette smoking and cancer (Fraumeni 1968). The data are per capita numbers of cigarettes sold by 43 states and the District of Columbia in 1960 together with death rates per hundred thousand people from various forms of cancers. Two variables were added to categorize states into two groups. See the appendix for the details. | http://www.indiana.edu/~statmath/stat/all/ttest/ttest1.html | 13 |
104 | Question: How do you teach multiplication and division?
Answer: Questions about the way Investigations teaches multiplication and division are commonplace. Where are the flash cards? What are landmark numbers? I could use some help developing a better understanding of cluster problems. Why do children work on multiplication and division together? In fact, over the past several months we have received questions on many different aspects of teaching multiplication and division in Investigations. We decided to respond to as many of them as we could by laying out the way that Investigations approaches these topics through the grades.
Things that Come in Groups
In grades K through three students begin to work on multiplication and division by looking at contexts in which things come in groups, or in which a group of things needs to be shared. In the primary grades this involves solving problems like how many hands or fingers in our classroom, and counting things by groups such as 2's, 5's, and 10's. By third grade, students more systematically explore things that come in groups, from 2's to 12's, making lists of these things and then using them to compose multiplication and division story problems and equations. A student might be working on 4's multiplication facts, for instance, and choose something from the 4's list to make a multiplication story with. Such a story might look something like this:
This kind of work leads students toward representing these situations with notation. For instance, a student might represent this problem as either:
4 x 6 = 24 or 6 x 4 = 24
This is an opportunity for students to see that changing the order of the numbers does not change the answer.
Another focus in learning multiplication and division in Investigations is on skip counting. This is a practice that is familiar to all of us from elementary school. Indeed, we are familiar still with the chants of "five, ten, fifteen, twenty, twenty-five" that helped us to remember how to count by fives. However, skip counting in Investigations goes deeper than simply chanting numbers, and students work with numbers which have much more complex patterns than those found in counting by 2's or 5's or 10's.
Skip counting is done through a variety of activities in Investigations. In the Ten-Minute Math activity, Counting Around the Classroom each student says one number in the count while the teacher records the list. When complete, students use this list to identify patterns found when counting by a particular number and then use their knowledge of these patterns to become familiar with multiplication and division facts for that number. Students skip count on the hundreds chart (with which they've become familiar in first grade) and again identify patterns in multiples, but extend this further when they create equations to represent how skip counting by a number got them to a multiple. For example, when counting by 4's they will color in 4, 8, 12, 16, 20, 24, etc. and then identify patterns in those multiples. A student might see that the pattern in the ones place goes "4, 8, 2, 6, 0, 4" and then continues to repeat itself. Students will then make multiplication and division equations that represent this counting such as 2 x 4 = 8 or 8 ÷ 2 = 4. In addition to this, students use hundreds charts to look for patterns in multiples, and relationships between the multiples of one number and those of another (3 and 6, for instance) to help them as they learn facts.
Skip counting and the work that accompanies it (identifying patterns, creating equations to represent relationships between factors and multiples) is a significant part of learning multiplication and division in the curriculum. It is a strategy that students often use to find a multiple when they don't yet have it memorized and it is another sound method for practice.
An important model for multiplication and division in the Investigations curriculum is the array model. In working with this geometric, or area, model, students use their knowledge of number relationships, and of the operations of multiplication and division, to further their understanding and to develop efficient strategies for solving problems. In second grade students first encounter arrays as they use tiles to make and explore rectangles. In third grade, they begin by finding as many ways as they can to arrange 18 chairs in equal rows. For example:
These are all of the possible arrays that students can come up with, but they will often do things like create two of the same array, such as the 2 x 9 and the 9 x 2 as well, believing that these are two different ways to make 18. This is an opportunity to talk about congruence as well as the commutative property.
Students use graph paper to create a larger set of arrays for many other numbers. When they create arrays for numbers such as 11, 19, etc. they recognize that these arrays can only be made in one way -- and that the only factors are one and the number itself. This is an opportunity to explore prime numbers. When students recognize that there are some array sets (all of the arrays for one number) that have squares in them (16, 25), as well as an odd number of factors, this is an opportunity for them to become familiar with square numbers.
Students identify arrays in their world (i.e. cans in a six pack, tiles on a floor, panes in a window, etc.) and create number equations (similar to those in the 4's example above) to represent these. They make a set of array cards, identifying the dimensions of these rectangles and the total number of square space (area) in the rectangle. They consider the relationship between the dimensions and the total, and this is a place in which the relationship between multiplication and division becomes particularly explicit. Seeing that a total is created when two dimensions (factors) are in place can help children to understand that a total (multiple) can be broken apart into equal groups using those two dimensions again. For instance, 2 x 9 = 18 and 18 ÷ 2 = 9. The factor pairs are evident in both equations, but the action that is taking place is different. Using multiplication to solve division problems is a strategy that we want children to become familiar with.
Students in grade four play games that support their understanding of the properties of multiplication and division and the relationship between these operations (games such as Small Array/Big Array, Multiplication Pairs, Count and Compare) and that, ultimately, help them to learn facts.
You might think of the array cards as a set of "conceptual" flash cards. Indeed, the array cards are used to practice learning facts at the same time that students are seeing how the arrays (multiples) can be broken up into pieces (factors and factor pairs; focus on the distributive property) and put back together again.
The work with arrays is a central part of learning multiplication and division in Investigations. It is focused on several big ideas that are significant to understanding the operations, as well as how to use understanding of number relationships and operations when solving multiplication and division problems. These big ideas include:
- understanding the properties of multiplication (commutative, distributive)
- learning about the relationship between multiplication and division
- learning number facts, and
- developing strategies for solving multiplication and division problems with fluency (efficiency, accuracy, and flexibility)
A significant part of the grade four and five multiplication/division curriculum is work with Cluster Problems. These are sets of problems that are used to support understanding in two different ways. First, they help students think about how to start solving a problem that s/he might perceive as difficult. Clusters do this by including several smaller, but related, problems that allow students access to a solution for a more complex problem. Second, Clusters offer an opportunity for students to see, use, and make sense of the distributive property. In other words, they allow students to see the way in which a problem can be broken up (products distributed) into parts that are easier to work with [i.e. 23 x 5 = (20 x 5) + (3 x 5)]. Both of these objectives support the end goal which is to help students develop computational fluency --accuracy, efficiency, and flexibility in solving problems. (For more on this definition of fluency, see the article by Susan Jo Russell at http://investigations.terc.edu/library/families/comp_fluency.cfm.)
For example, a cluster for 23 x 5 might include the following:
3 x 5
20 x 5
10 x 5
23 x 5
Students can see how this problem can be broken into 20 x 5 and 3 x 5 and the products added together. Or a student might see that they could start solving the problem with any one of the problems in this set, thus "chunking" the problem into more manageable parts. Students are not expected to always solve every one of the problems in a cluster and/or to use every one to get to an answer. But this is an opportunity for them to consider how the problem can be broken into more manageable parts and to use what they know (10 x 5, for instance, equals 50, double that to get 20 x 5 = 100) to help them arrive at an accurate answer efficiently. Sometimes when working on clusters students find that there are other useful "starts" or problems for the cluster. For instance, in a problem like the one above, a student might find it easier to start with 25 x 5 or with 2 x 5. Students are encouraged to add those problems to the cluster.
Landmark numbers such as ten or multiples of ten, multiples of one hundred, and twenty-five are numbers that students are familiar with through their work in Investigations. As students become familiar with composing and decomposing these numbers they develop a foundation for operating with these numbers and are encouraged to do so when solving multiplication and division problems. Landmark numbers frequently appear in Cluster Problems (as is evident in the 20 x 5 and 10 x 5 problems in this cluster). (For more on the use of Landmark Numbers in Investigations, see the Ask an Author on this topic).
Story Problems/Problem Situations
All students using the Investigations curriculum in grades K-5 are working with story problems and problem situations that help them to better understand the four operations. These problems allow them to see the different situations that can be represented using the same operation and the same numbers.
For instance, consider the problems below that illustrate two different division situations:
How many groups? (partitioning)
There are 21 children at Lisa's party. If the children break into groups of three for a game, how many groups will there be?
This problem could be represented by an equation such as:
21 ÷ 3 = ?
- or -
3 x ? = 21
How many in each group? (sharing)
There are 21 children at Lisa's party. There are three small tables for the children to sit at. How many children will be at each table?
This problem could be represented by either of the above equations as well. However, each answer will represent a different outcome depending on the situation. In the first example, there will be 7 groups of 3 children; in the second, 3 tables of 7 children. (For more on these two different kinds of division, see the following Ask an Author.)
Students are also asked to create equations, similar to those above, from story problems that accurately reflect the situations and actions in the stories.
Throughout the curriculum at grades 3-5 there is a great deal of opportunity for students to practice as they develop an understanding and knowledge of multiplication and division facts. In addition to the games noted above, students write and solve Multiplication Riddles (in Things that Come in Groups) and solve problems in the Froggy Races activities. They play Cover 50, Multiplication BINGO and Division BINGO, games that support practice with multiplication as well as division facts. Through work with many of the Ten-Minute Math activities, students get regular practice with multiplication and division. Activities such as the Estimation Game, Nearest Answer, Counting Around the Class, and Broken Calculator give students practice with these operations and can be modified to meet the needs of individual students as they are learning facts. For instance, in a classroom where students are working on division of three-digit numbers, the Estimation Game might be played using the following equation:
178 ÷ 16 = _______
The students will see the problem for a few moments and then be asked to compute mentally to find an answer that is accurate, or close to accurate, using what they know about multiplication and division. This offers students the opportunity to develop efficient strategies for solving problems since they can't write an algorithm on paper. A student might use their knowledge of multiples of ten to deal with a big chunk of this problem. A strategy such as this one, when described, might sound like this:
"I know that ten times sixteen is 160. I have eighteen left-over. So one more sixteen and then two left-over. My answer to 178 ÷ 16 is eleven with 2 left."
Another student might have the same answer, but represent it differently. For instance, as 11 2/16 or 11 1/8.
Because the student was able to use knowledge of landmark numbers (multiples of ten) and had an understanding of the relationship between multiplication and division, they were able to deal with a big "chunk" of the problem quite rapidly, and to get to an answer accurately and efficiently.
It is important to note that a good deal of student practice with multiplication and division happens just as it does for adults -- when solving problems outside of the classroom or outside of the operations unit. When students in Investigations classrooms are working on data or geometry units, they have to use their knowledge and understanding of multiplication and division to solve problems in these strands as well.
Teaching Multiplication and Division Together
In Investigations, multiplication and division are often taught together. Below is an excerpt from the Teacher Note, "The Relationship Between Multiplication and Division" (p. 15 of Things that Come in Groups and p. 23 of Arrays and Shares) which describes this relationship and the benefits of teaching these two operations together.
Multiplication and division are related operations. Both involve two factors and the multiple created by multiplying those two factors. For example, here is a set of linked multiplication and division relationships:
8 x 3 = 24
24 ÷ 8 = 3
3 x 8 = 24
24 ÷ 3 = 8
Mathematics educators call all of these "multiplicative" situations because they all involve the relationship of factors and multiples. Many problem situations that your students will encounter can be described by either multiplication or division. For example:
"I bought a package of 24 treats for my dog. If I give her 3 treats every day, how many days will this package last?"
The elements of this problem are: 24 treats, 3 treats per day, and a number of days to be determined. This problem could be written in standard notation as either division or multiplication:
24 ÷ 3 = 8 or 3 x ____ = 24
As described above, students work towards fluency with multiplication and division facts and computation in a variety of ways. They develop a meaningful sense of operations and the actions they represent as they think about the context of things that come in groups, and solve story problems. They develop a visual image for multiplication, and for the "size" of various facts, through the arrays, which also help them see and understand properties of multiplication and division (such as distributivity). Cluster problems also help students make such connections (12 x 7 = (10 x 7) + (2 x 7)) and use what they know to solve more difficult problems. Practice also comes in many forms -- multiplication and division games, story and cluster problems, bare number problems, Ten Minute Math activities, and regular classroom math activities. The real benefit is that all of these activities support both learning and practice.
Elizabeth Van Cleef and Megan Murray, TERC
Russell, Susan Jo. Relearning to Teach Arithmetic: Multiplication and Division. Dale Seymour Publications, 1999.
Tierney, Cornelia, Berle-Carman, Mary, Akers, Joan. Things That Come in Groups from the third grade Investigations in Number, Data, and Space curriculum. Scott Foresman Publications, 1998.
Economopoulos, Karen, Tierney, Cornelia, Russell, Susan Jo. Arrays and Shares from the fourth grade Investigations in Number, Data, and Space curriculum. Scott Foresman Publications, 1998.
Copyright © 2006 jsd
In order to get some appreciation for the basic gas laws, let’s start by considering some simple experimental scenarios. We will measure the pressure, volume, temperature, etc. of gases in various situations. Different scenarios involve different choices of experimental conditions.
We start with a big tank, as shown in blue in figure 1. Within the tank are a number of gas particles (shown in black). In all scenarios considered here, we assume the gas can be satisfactorily approximated as an ideal gas. This is a reasonable approximation for gases such as air under everyday conditions.
Also, we choose to hold constant the number of gas particles (N). In most scenarios, we need not make any assumptions as to whether the particles are molecules and/or individual atoms.
We adopt the usual notation:
N ≡ number of particles
N′ ≡ number of moles of particles
R ≡ universal gas constant
Boltzmann’s constant (k) is a universal constant that defines what we mean by one “degree” of temperature. It is the conversion factor that converts from degrees to conventional units of energy per particle. It is intimately related to the universal gas constant (R) which converts from degrees to conventional units of energy per mole.
k = energy per particle (per degree)
R = energy per mole (per degree)
That means we can write the ideal gas law in two ways:
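In the notation defined above, the two forms are presumably the per-particle and per-mole versions:

P V = N kT
P V = N′ R T        (1)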
In our first scenario, as indicated in figure 2, we choose to make the tank sturdy and leak-proof. That means the number of particles does not change, and the volume does not change.
We investigate the effect of temperature. We observe that as the temperature goes up (as indicated by the little thermometer icons), the pressure goes up. In figure 3, the point describing the system moves from point to point along a contour of constant volume. (This is also a contour of constant number of gas molecules, N, but the N variable is not shown in the graphs.)
This scenario can be described mathematically as follows:
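Judging from the interpretation below, equation 2 presumably reads:

P V = N kT        V = const        N = const
P1/T1 = P2/T2        V1 = V2        N1 = N2        (2)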
Equation 2 is to be interpreted as follows: The ideal gas law (P V = N kT) is common to all ideal-gas scenarios. The other equations are much less general, and are valid only within this scenario.
The first line of equation 2 is a system of three equations that suffice to describe this scenario. The second line is a corollary of the first, and also suffices to describe this scenario. The first line has one general equation and two scenario-specific equations, while on the second line all three equations are scenario-specific.
For our next scenario, we choose a modified container, designed to keep the gas at constant pressure rather than constant volume. One part of the container is free to slide relative to the other part. There is a weight W on the top part, to supply the desired amount of pressure.
We observe that as the temperature increases, the volume increases. The point describing the system moves from point to point along a contour of constant pressure, as shown in figure 5.
This scenario can be described mathematically as follows:
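Equation 3, reconstructed along the same lines, presumably reads:

P V = N kT        P = const        N = const
V1/T1 = V2/T2        P1 = P2        N1 = N2        (3)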
Equation 3 is to be interpreted as follows: The ideal gas law (P V = N kT) is common to all ideal-gas scenarios. The other equations are much less general, and are valid only within this scenario.
The first line of equation 3 is a system of three equations that suffice to describe this scenario. The second line is a corollary of the first, and also suffices to describe this scenario.
For our next scenario, we arrange to have “no friction” (or very little), and “no heat leaks” (or very little).
To say the same thing more formally, we want to keep the entropy constant. If you don’t know what entropy is, don’t worry about it too much right now, and follow the scenario as described in the previous paragraph.
If the tank is large, this is relatively easy to arrange; we just need to make sure the experiment is done neither too quickly nor too slowly.
If the tank is not large and not thermally insulated, the window between “too quickly” and “too slowly” may be inconveniently small (or perhaps nonexistent). Conversely, if we make the tank large enough and provide reasonable thermal insulation, there will be plenty of time to change the volume with minimal dissipation, and plenty of time to make the desired measurements.
In this scenario, we observe that smaller volume is associated with higher pressure (and vice versa). Also, the gas cools as it expands (and heats up as it is compressed). The point describing the system moves from point to point along a contour of constant entropy, as shown in figure 9.
This scenario can be described mathematically as follows:
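Equation 4 presumably takes the form:

P V = N kT        P V^γ = f(S)        S = const        N = const
P1 V1^γ = P2 V2^γ        S1 = S2        N1 = N2        (4)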
where γ (the Greek letter gamma) is the so-called adiabatic exponent. It is also called the ratio of specific heats, for reasons that need not concern us at the moment. The value of γ is 5/3 for a perfect monatomic gas, and 7/5 for diatomic rigid rotors. At ordinary temperatures, the observed value for air is very close to 1.4, as expected.
Many gases can be described by saying P V^γ is some function of the entropy. This is usually a good approximation, provided the temperature doesn't get too high or too low.
The first line of equation 4 is a system of four equations that suffice to describe this scenario. Actually we only need three of them; the ideal gas law remains valid but is not needed for present purposes. The second line is a corollary of the first, and also suffices to describe this scenario.
The equations grouped on the right side of equation 4 are much less general, and are valid only within this scenario.
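To make the isentropic scenario concrete, here is a minimal numerical sketch (not part of the original text; the starting state and compression ratio are arbitrary choices) showing that compressing a parcel of air at constant entropy raises both its pressure and its temperature:

```python
# Minimal numerical sketch: an isentropic volume change of air,
# combining P*V**gamma = f(S) with the ideal gas law P*V = N*k*T.

def isentropic_change(P1, T1, volume_ratio, gamma=1.4):
    """Return (P2, T2) after a constant-entropy change with V2/V1 = volume_ratio."""
    P2 = P1 * volume_ratio ** (-gamma)        # from P1*V1**gamma = P2*V2**gamma
    T2 = T1 * volume_ratio ** (1.0 - gamma)   # from T*V**(gamma-1) = const
    return P2, T2

# Compress air (gamma close to 1.4) to half its volume, starting at 1 bar and 300 K:
P2, T2 = isentropic_change(1.0e5, 300.0, 0.5)
print(P2, T2)   # about 2.64e5 Pa and 396 K -- the gas heats up as it is compressed
```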
For our next scenario, we arrange to keep the temperature constant. If the tank is large, this may require waiting a very long time for the temperatures to equilibrate. If you get tired of waiting, you can speed things up by pumping the gas through a heat exchanger, or otherwise engineering faster heat transfer.
In this isothermal scenario, we observe that smaller volume is associated with higher pressure and vice versa. The point describing the system moves from point to point along a contour of constant temperature, as shown in figure 9.
This scenario can be described mathematically as follows:
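Equation 5 presumably takes the analogous form:

P V = N kT        T = const        N = const
P1 V1 = P2 V2        T1 = T2        N1 = N2        (5)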
Once again, the ideal gas law (P V = N kT) is common to all ideal-gas scenarios. The other equations are much less general, and are valid only within this scenario.
The first line of equation 5 is a system of three equations that suffice to describe this scenario. The second line is a corollary of the first, and also suffices to describe this scenario.
The term osmosis covers both osmotic pressure (section 2.1 and section 2.4) and osmotic flow (section 2.3). We start by considering osmotic pressure, since it is simpler, and since it is what we observe under equilibrium conditions.
Executive summary: The solute behaves as a gas. It pressurizes the solvent from within. This solute-as-gas model is simple, explanatory, and predictive.
The easiest way to explain osmotic pressure is to connect it to Dalton’s law of partial pressures, as shown in figure 10.
For simplicity, we start by considering gas-in-gas osmosis. (The case of solute-in-solvent is essentially the same anyway, as discussed in section 2.4.)
The figure shows the equilibrium situation and explains it in terms of A = B + C. The rule is:
total pressure = solvent pressure + solute pressure        (6)
Limitations: Part A of the diagram represents a measurement we can easily conduct in the lab, while the decomposition into B+C may be only a Gedankenexperiment. See section 2.7 for additional limitations on this equation.
The black particles (in both parts of the tank) have a pressure of 8 units, while the red particles (in the bottom half of the tank) have a pressure of 6 units, as shown by the pressure gauges attached to the side of the tank. Applying equation 6 to this example, we have:
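Presumably equation 7 is just the sum of the gauge readings in the lower compartment:

14 units (total) = 8 units (black) + 6 units (red)        (7)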
The membrane in the middle of the vessel is permeable to the black particles but impermeable to the red particles.
We assume both the black gas and the red gas are ideal gases. That means (among other things) that the particles are very small compared to the mean free path. The figure exaggerates the size of the particles, so you will have to use your mind’s eye to imagine that the particles are very much smaller than shown.
The osmotic pressure (i.e. the differential pressure across the membrane) is simply equal to the partial pressure of the red particles, i.e. 6 units in this example.
In contrast, the partial pressure of the black particles exerts no net force on the membrane, because this pressure, whatever its value, has the same value on both sides, whenever the system is in equilibrium, as in figure 10. (You can consider this an application of Pascal’s principle if you wish.) The pressure of the black particles could be 8 units or 800 units or whatever, and it would not affect the osmotic pressure.
In part B of the figure, the semipermeable membrane can be removed entirely, leaving a wide-open connection.
In part C of the figure, the semipermeable membrane can be replaced by a solid barrier.
If you consider a particle in interior of the top compartment, there is no net force on any of the black particles. There is pressure, but at equilibrium there is no pressure gradient, and therefore no net force.
The same applies to a particle anywhere in the interior of the bottom compartment. There is red partial pressure and black partial pressure, but at equilibrium, there is no pressure gradient, and therefore no net force anywhere in the interior.
The only place in the whole system where there is any pressure gradient is right at the membrane. You can see this gradient at a glance in figure 10, especially part C of the figure, because the gradient in the pressure is associated with a big gradient in the density of the red particles.
Now the key point is that this pressure gradient is 100% "used up" in exerting a force on the membrane. There is nothing "left over" to exert a force on the black gas particles.
To make the same point the other way around, whenever an upward-moving red particle hits the membrane, it gets turned around and sent back downward (otherwise the membrane would leak, and we can’t tolerate that). Therefore at every location where a black particle might be hit by a red particle, it is equally likely to be hit by an upward- or downward-moving red particle.
In contrast, if you look at other systems you can sometimes find situations where one gas exerts an unbalanced force on another gas, such as in a diffusion pump, where you use oil vapor to exert a force on the gas you’re trying to pump. However, this is necessarily associated with flow of the oil vapor, and is therefore dissimilar to figure 10, where the red gas is not flowing.
The key idea is just a simple force-balance argument. In part B of the figure, the forces are in balance, with no force on the membrane, because the black gas moves freely through the membrane. In part C the forces are in balance, with a force on the membrane, because the red gas pushes on the membrane (and the membrane pushes on the red gas). Putting those two parts together, we conclude that there must be a force on the membrane in part A. Otherwise the forces would not be in balance.
As long as the membrane exerts a force on the red gas and not on the black gas, there will be osmotic pressure.
Figure 11 shows what happens in a situation far from equilibrium.
We have an imbalance (between top and bottom) of the partial pressure of the black phase. This is particularly easy to see in part B of the diagram. Since the black phase is free to flow through the membrane, it will do so, under the influence of this imbalance in partial pressure. Flow is indicated by the ⇓ arrow.
If you think about it the wrong way, this flow seems paradoxical. If you focus on the total pressure, as in part A of the figure, the black particles are flowing the “wrong” way, from lower pressure to higher pressure. The smart way to think about it is to focus on the partial pressure of the black particles by themselves, as in part B of the figure. Then you can see that the flow goes the right way, from higher partial pressure to lower partial pressure.
Let’s switch from talking about flow to talking about pressure for a moment. The pressure difference across the membrane in part B of the diagram is not due to osmotic pressure; instead it is associated with the viscous pressure drop as the fluid flows through the membrane. The pressure across the membrane in part C of the diagram is what we properly associate with osmotic pressure. The observed total pressure across the membrane – as shown in part A of the diagram – is the sum of these two contributions A=B+C (with due regard for signs).
Figure 12 is similar to figure 11, but shows the special case where the non-osmotic contributions to the pressure drop are equal and opposite to the osmotic pressure, resulting in zero overall pressure differential across the membrane. This is an important special case, because it frequently arises in practice. In this case we cannot easily observe the osmotic pressure, but we do observe osmotic flow.
If you start with figure 11, it will spontaneously evolve to figure 12 and then asymptotically to figure 10.
You can see that osmotic flow is more complicated than osmotic pressure. The equilibrium situation has osmotic pressure, with no flow.
In figure 10 we can call the red particles the solute, and call the black particles the solvent. In section 2.1 we imagined that the solvent was a gas, i.e. we had one gas dissolved in another, but in this section we imagine that the solvent is a liquid. It turns out that it really doesn’t matter whether the solvent is a liquid or a gas, assuming we are dealing with an ideal solution. (In general, an ideal solute is profoundly analogous to an ideal gas.)
As before, the membrane is permeable to the solvent, so the solvent exerts zero differential pressure on the membrane.
As before, the osmotic pressure is simply equal to the partial pressure of the red particles.
There is a well-known formula for the osmotic pressure, namely
Π = i M R T  (for an ideal solution)        (8)
where Π is the osmotic pressure, M is the molarity of the solute, T is the absolute temperature, and R is the universal gas constant (i.e. Boltzmann’s constant in the appropriate units). We shall discuss the factor of i in a moment.
Equation 8 is often called the van ’t Hoff equation in honor of Jacobus H. van ’t Hoff. (Beware that there is another equation that is also called the van ’t Hoff equation.) The same equation is also called the Morse equation, in honor of Harmon Northrop Morse ... even though it was first derived by van ’t Hoff.
The astute reader will have noticed the correspondence between the ideal gas law (equation 1) and the osmotic pressure law (equation 8), provided we interpret iM as corresponding to N/V, the number of particles per unit volume in the solution. Of course this correspondence is not a mere coincidence; the osmotic pressure must be equal to the partial pressure of the solute, for good fundamental reasons.
The factor of i is necessary because of the conventional definition of the molarity M, which has to do with the number of formula units in solution, whereas the pressure depends on the number of particles.
Note that the idea behind this factor of i applies to gases as well as solutions. That is, the N that appears in the ideal gas law is the number of particles, not the number of formula-units in the gas. As a specific example, under some conditions we can have a gas of acetic acid monomers, CH3COOH, whereas under slightly different conditions we can have a gas of dimers, (CH3COOH)2. If the number of moles of acetic acid formula units is the same in both cases, the N that appears in the ideal gas law – i.e. the number of particles – will be half as great for dimers. That is, we can write N/V=iM, where i=½ for dimers.
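As a rough numerical sketch (not from the original text), one can estimate the osmotic pressure of seawater from equation 8, treating the solute as an ideal gas. The concentration used below (about 0.55 mol/L of NaCl formula units) and i = 2 are assumed round numbers, not values taken from the text:

```python
# Rough sketch: osmotic pressure of seawater from Pi = i*M*R*T.

R = 8.314        # J/(mol K), universal gas constant
T = 298.0        # K
M = 0.55e3       # mol/m^3  (0.55 mol/L of NaCl formula units -- an assumed value)
i = 2            # each NaCl formula unit contributes about 2 particles (Na+ and Cl-)

Pi = i * M * R * T          # pressure in pascals
print(Pi / 1e5)             # roughly 27 bar, consistent with the 22-27 bar quoted below
```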
Also note that the size of the particle is not relevant to the pressure. A large sugar molecule has the same effect as a tiny hydrogen ion. Naturally, this principle applies equally to gases and to solutes. (It can however be more dramatic in solutions, because you can make a solution of very large molecules more easily than an ideal gas of very large molecules.)
Terminology: Anything that depends on the number of particles, not on their size or chemical nature, is called a colligative property. (The word “colligative” comes from the same root as the word “collective”. The general idea is the same, but “colligative” is used as a very specific technical term.)
Pedagogical remark: I see no reason to remember the osmotic pressure formula. It suffices to remember the ideal gas law, and remember the idea that ideal solute = ideal gas.
To summarize, we have seen three cases, and the pressure is the same in each case:
If the solution is non-ideal, the osmotic pressure will not necessarily be equal to what ideal theory would predict.
Adding a solute lowers the freezing point of a liquid. (This is the principle behind the antifreeze that you add to the engine coolant and the windshield washer fluid in your car.)
At the other end, adding a solute raises the boiling point of a liquid.
Both of these effects are colligative.
Both of these effects can be understood starting from the fact that the solute behaves as an ideal gas in the liquid and not otherwise. Consider the system of NaCl plus liquid water in equilibrium with water vapor. In the liquid phase, the dissolved NaCl behaves as a gas. In the vapor phase, we have water vapor with a moderately low density. We don’t have any appreciable NaCl in the vapor phase, because NaCl is a solid under these conditions, with a very low vapor density.
We reach a similar conclusion by different means in the case of NaCl plus liquid water in equilibrium with frozen water. Again, the NaCl dissolved in the liquid behaves as a gas. You can have NaCl dispersed in solid ice, but it cannot behave as a gas. It’s localized.
In either case, the key point is that the solute acts like a gas that is confined inside the liquid phase. This pseudo-gas wants to expand. This makes it energetically favorable to expand the volume of liquid at the expense of the vapor and/or solid.
There is in fact a quantitative relation connecting the osmotic pressure to the freezing point depression and to the boiling point elevation.
This is not a new idea; see reference 1, reference 2, and reference 3.
This point must be emphasized because there is a widespread deep-seated misconception that osmotic pressure has something to do with the pressure of the solvent, i.e. the mobile phase, i.e. the water in typical situations.
It's hard to know where this misconception comes from. Perhaps it is related to the fact that in osmotic flow, it is primarily the solvent that flows. The fact remains, however, that it is the pressure of the solute that is driving this flow. The solute pushes on the solvent from the inside, trying to expand the volume of solvent.
Osmotic pressure is observed all the time in systems where solvation effects are negligible. We began this discussion with the gas-in-gas case, section 2.1, to provide a clear illustration of this point.
In systems where there are significant solvation effects, you need to account for them also, but they are separate from whatever osmosis is going on.
All this is related to the exceedingly common misconception about relative humidity and dewpoint. All-too-commonly one hears the “explanation” that at low temperature, the “air is unable” to hold as much moisture, as if the air were some kind of sponge. Alas, that’s just wrong.
Let’s get real:
As a warmup exercise, let’s consider a numerical example. The osmotic pressure of normal seawater (relative to pure water) is variously quoted as 22 bar to 27 bar. I don’t know why there is so much variability. Let’s take 25 bar as a middle-of-the-road value.
Suppose you are trying to purify seawater by reverse osmosis. The high-side pressure must be at least 26 bar (absolute pressure), to satisfy equation 6.
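Presumably equation 9 reads:

26 bar (high-side total) ≥ 1 bar (discharge-side solvent) + 25 bar (solute)        (9)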
since we need the discharge side to have at least 1 bar (absolute) i.e. 0 bar (gauge). (These numbers are plausible for small-scale reverse-osmosis machines for personal use, e.g. for backpacking or emergency survival. In contrast, industrial RO machines use vastly higher pressures, so as to speed up the flow.)
Things get more interesting if we apply equation 6 to seawater just sitting around in an open beaker, subject to 1 bar (absolute) ambient pressure, as shown in figure 13. The blue region represents water, and the red circles represent the solute. The region labeled “abs vac” is absolute vacuum; no solute, no solvent, no nothing.
Mindlessly plugging in the numbers we get:
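Presumably equation 10 works out to:

solvent pressure = total pressure − solute pressure = 1 bar − 25 bar = −24 bar        (10)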
Obviously there is something not quite right about this; we have left something out of the analysis.
On the seawater side of the membrane, the pressure gauge reads 1 bar (absolute) as it should. On the other side of the membrane, there is no solute and no solvent either. If you put a drop of pure water in there, it would promptly get sucked over to the other side. The gauge on the low-pressure side reads zero pressure (absolute), whereas equation 10 seems to predict -24 bar, so we have some explaining to do.
The first half of the explanation deals with the low-side gauge. The gauge is just an ordinary mechanical gauge. The piston or Bourdon tube (or whatever it uses) can push on the water, but it cannot pull on the water. If the water pressure tried to go negative, the water would just disconnect from the gauge. You can call this cavitation if you wish. It’s hard to make a gauge that reads negative absolute pressure on the scale we’re talking about (although there are always small effects due to surface tension et cetera).
The second half of the explanation deals with the seawater. If the seawater is subjected to a huge negative pressure, why doesn’t it cavitate? Well, for starters, the solute pressure is not really a negative pressure applied to the outside of the liquid; it is a positive pressure applied to the inside of the liquid. This internal pressure i.e. the osmotic pressure i.e. the solute pressure is restrained by the cohesion of the liquid.
And the solute cannot detach from the solvent the way the gauge detached from the solvent. If you have a liquid in a cylinder, and pull out the piston, cavitation allows the piston to move, so that the cylinder volume expands, without stretching the liquid. In contrast, cavitation does not allow the solute to expand, because the solute cannot enter the cavity. The solute doesn’t care about the volume of the cylinder, only about the volume of the solvent.
To repeat: The true solvent pressure should not be considered a negative pressure applied to the outside of the solution. The true solvent pressure does not arise from anything pulling from outside the solution; it arises from the solute pushing outward from the inside.
In contrast, the so-called solvent pressure in equation 6 and equation 10 is what a gauge (on the outside of the system) reads if/when it is in contact with the solvent. When there is good contact, the gauge force (acting from outside) just balances the solvent pressure (acting from inside) so the equations make sense. When there is not good contact, the solvent pressure is balanced by the cohesion of the liquid, and the equation doesn’t tell us anything about the gauge reading.
The idea of “negative absolute pressure” is not my favorite way of describing the situation, but it does give a correct description of some aspects. This includes the energy budget. You can use equation 6 (and equation 10) without worrying about whether the pressure is negative or not. The P V energy term does not care whether the pressure is due to a pull from outside or a push from inside. This also includes a broad range of colligative effects, including vapor pressure (reference 4), freezing point (reference 5), et cetera.
Via the P V term, the solute pressure makes it energetically favorable to expand the volume of the solution – even if the expansion has to suck solvent uphill against a modest gradient in total pressure, as in figure 12.
In a certain unconventional sense, the fact that the solute is trapped in the solvent can be considered a solvation effect. That is, if you measure the energy of the solute relative to its vapor you find that it is energetically favorable for the solute to remain trapped in the solvent. (This stands in contrast to the conventional notion of energy of solvation, which would measure dissolved salt or sugar relative to the corresponding solid, not vapor.)
Cohesion is necessary to balance the force budget, as indicated in figure 14. Let’s consider a parcel of fluid (the large, light-blue rectangle) and analyze the forces on one edge. The solute is represented by the red circles. Solute pressure creates an outward force acting on the surface, as shown by the red arrows. Meanwhile, cohesion in the liquid is represented, metaphorically, by springs in tension. This cohesion produces an inward force acting on the surface.
The idea of “negative” absolute pressure in connection with osmosis is not new. A simple, elegant, and incisive argument can be found in reference 4. For a more modern and more detailed review, see reference 6.
In the case where the solvent is a gas (not a liquid), we never observe a situation where the solution is under tension. There are two reasons for this: there is no cohesion in the solvent, and the solute is not trapped in the solvent.
Here’s an interesting exercise: Put a check-mark in the box next to the best answer. Choose the best way of completing this statement: Other things being equal, for an ideal gas, ...
□ (A) ... when the temperature goes up, the volume goes up, and when the temperature goes down, the volume goes down.
□ (B) ... when the temperature goes up, the volume goes down, and vice versa.
Do you know the answer to that question? I sure don’t. It could go either way ... as you can see by comparing section 1.2 to section 1.3.
We can gain additional insight into this situation by looking at the following figures.
In figure 15, the gas cools as it expands. You can see that if we move left-to-right along a contour of constant entropy, we cross contours of constant T, moving toward lower T values. The contours of constant entropy are steeper than the contours of constant temperature.
In figure 16, if you want the gas to expand at constant pressure, you have to heat it up. You can see that if we move left-to-right along a contour of constant pressure, we cross contours of constant T, moving toward higher T values. The contours of constant pressure are less steep than the contours of constant temperature.
There is not any law of nature that prefers the isentropic scenario (section 1.3) to the isobaric scenario (section 1.2) or vice versa. Both scenarios can be found in nature, to a good approximation, if you know where to look. Either scenario can be realized in the laboratory, to a good approximation, if you engineer the experiment appropriately.
Some people labor under the impression that since the ideal gas law nicely describes the isobaric scenario (section 1.2), the isentropic scenario (section 1.3) must involve an exception or a breakdown of the ideal gas law. This is completely wrong; the isentropic behavior described in section 1.3 is perfectly consistent with the ideal gas law.
Some people labor under the impression that the isothermal scenario (section 1.4) is “normal”. That’s the result of working slowly, studying only small systems, and not looking very closely: they will never notice much departure from isothermal behavior. In fact, many other behaviors commonly occur, as illustrated by the examples below.
Large-enough systems can be expected to be isentropic, to a good approximation, if the timescale is slow enough to prevent undue frictional dissipation, yet fast enough to prevent undue thermal conduction. Examples abound, including:
Of course, not everything is isentropic:
There is one more technique you need to know to apply the gas laws to things like the atmosphere.
In all the scenarios considered in section 1, the volume V was just the volume of the container. So the question is, what is the relevant V when we are talking about the atmosphere, or any other system where there is no relevant container?
The answer is that we talk about a parcel of gas. That is, take some region of space, and imagine marking all the gas that is initially within that region. We then follow the parcel of marked gas as time goes on. The parcel will move. The parcel will change shape, but that doesn’t matter, except insofar as the volume changes (compressing or expanding the parcel). The boundaries of the parcel might become a bit blurry, as molecules diffuse across the boundary to/from adjacent parcels, but this won’t greatly affect the properties of our parcel, especially if neighboring parcels have approximately the same properties anyway.
The gas laws apply to each parcel of gas separately.
This technique of dividing the overall system into a number of parcels is completely routine in the fluid dynamics business.
Let’s take another look at the exercise mentioned at the beginning of section 3.1. The first few words of the stem should have been a dead giveaway. The words Other Things Being Equal should always make alarm bells ring in your head. I call this the OTBE fallacy. Almost any statement that involves other things being equal is underspecified, because you rarely know which other things are being held equal.
It is easy to explain this to young students. Here’s an analogous exercise: Put a check-mark in the box next to the best answer. Choose the best way of completing this statement: Other things being equal, for the lever in figure 17, ...
□ (A) ... when point W goes up, point Y goes up, and when point W goes down, point Y goes down.
□ (B) ... when point W goes up, point Y goes down, and vice versa.
Do you know the answer to that question? I sure don't. The answer very much depends on whether point X is the fulcrum or point Z is the fulcrum.
You can make an unforgettable classroom demonstration using a meter stick or suchlike. Attach a bit of tape to point Y to focus attention there. Then choose a pivot-point (X or Z), wiggle point W, and observe what happens to point Y.
This is an example of what we call an ill-posed problem. For a more general discussion of ill-posed problems and how to deal with them, see reference 7.
Returning to the gas laws, we see the crucial importance of keeping track of which variables are being held constant. In particular, P1 V1 = P2 V2 is not by itself a law of nature, and by the same token P1 V1^γ = P2 V2^γ is not by itself a law of nature. The former is valid within the isothermal scenario (section 1.4), while the latter is valid within the isentropic scenario (section 1.3). It would be the most foolish of fallacies to take something that is true only within a particular scenario and pretend it was true more generally.
To say the same thing mathematically, P1 V1 = P2 V2 is merely part of a system of three equations, while P1 V1^γ = P2 V2^γ is part of a different system of three equations.
Let’s collect the three-equation systems that appeared in the scenarios discussed in section 1 ... plus a couple of related systems. This is inelegant, but let’s do it anyway:
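Presumably the collection looks something like this, one system per scenario, each with its corollary:

constant volume:        P V = N kT,  V = const,  N = const    ⟹    P1/T1 = P2/T2
constant pressure:      P V = N kT,  P = const,  N = const    ⟹    V1/T1 = V2/T2
constant entropy:       P V^γ = f(S),  S = const,  N = const    ⟹    P1 V1^γ = P2 V2^γ
constant temperature:   P V = N kT,  T = const,  N = const    ⟹    P1 V1 = P2 V2
constant N only:        P V = N kT,  N = const    ⟹    P1 V1/T1 = P2 V2/T2        (11)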
If you include the possibility that N could be a variable, the number of systems gets even larger.
Practicing scientists generally don’t remember the gas laws in the form suggested by list 11. That’s because it is far easier to remember just two equations:
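Namely (as restated below):

P V = N kT
P V^γ = f(S)        (12)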
This is a win/win situation. These two laws are simpler ... yet also more powerful, more general, more sophisticated, and more elegant than the hodgepodge in list 11. Why be inelegant when being elegant is easier?
Each three-equation system listed in list 11 can be seen as a narrow corollary of equation 12, narrowed by scenario-specific information about what is being held constant. You don’t need to memorize these corollaries, because whenever they are needed, you can re-derive them in less time than it takes to tell about it.
For those who are seriously math-impaired, I will spell out the derivation in lurid detail just once. Let’s take the example of equation 2. How do we get from line 1 to line 2 of that equation?
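A sketch of the algebra, assuming the constant-V, constant-N scenario of equation 2:

1. Apply the ideal gas law to each state separately: P1 V1 = N1 k T1 and P2 V2 = N2 k T2.
2. Divide each equation by its temperature: P1 V1 / T1 = N1 k and P2 V2 / T2 = N2 k.
3. The scenario specifies N1 = N2, so the two right-hand sides are equal, hence P1 V1 / T1 = P2 V2 / T2.
4. The scenario also specifies V1 = V2, so the volumes cancel, leaving P1 / T1 = P2 / T2, which is the second line of equation 2.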
Of course, someone with a little skill could do the same calculation in far fewer steps; I spelled out the details just to emphasize the point that each step is well-founded in elementary algebra. This is the same sort of algebra that is required for a gajillion other tasks, so you might as well learn it.
The root iso- comes from Greek and means “same”. It is widely used to coin scientific terms, including:
isobaric ≡ at constant pressure
isothermal ≡ at constant temperature
isochoric ≡ at constant volume
isentropic ≡ at constant entropy
isoenergetic ≡ at constant energy
The term adiabatic is troublesome. Thoughtful experts use the word in two inequivalent ways: some use it to mean thermally insulated (no heat flows across the boundary), while others use it to mean isentropic (no change in entropy).
I recommend avoiding the term “adiabatic” whenever possible. If you mean isentropic, say isentropic. If you mean thermally insulated, say thermally insulated.
Some textbooks like to give names to a few of the items in list 11. In particular, Boyle's Law is sometimes mentioned in this context. I have no idea which law goes with that name. Most practicing scientists have no idea, either. They simply remember P V = N kT and P V^γ = f(S), and that's all there is to it.
Charles’s Law and Gay-Lussac’s Law are also sometimes mentioned, but many practicing scientists have no idea what those refer to; if you told them Gay-Lussac’s law was a corollary of Murphy’s law they might well believe you.
Naming the corollaries to the gas laws is not a terribly big problem; on the scale of things it is trivial compared to the gross misconceptions discussed in section 6. The objections here are pedagogical rather than technical:
The notion of “ideal gas” has the advantage of simplicity, and has rather wide real-world applicability.
However, this simplicity is something of a two-edged sword. Ideal gases should not be over-emphasized, and should not be used as the only example of a thermodynamic system, because many things that are true of ideal gases are not true in general.
For example, in a monatomic perfect gas, all of the heat capacity can be explained in terms of kinetic energy (and changes thereof). This is true for the simple reason that all of the energy is purely kinetic.1 Obviously this isn’t true of all thermodynamic systems, but generation after generation, students somehow pick up the idea that there is such a thing as “thermal energy” distinct from other kinds of energy (which there isn’t), and that the heat capacity of ordinary materials can be explained in terms of microscopic kinetic energy (which it cannot). (For example, the energy associated with the heat capacity of an ordinary solid is very nearly half kinetic energy and half potential energy.)
It is usually a bad idea to discuss misconceptions. That’s because there are always an infinite number of misconceptions, and to discuss even a small fraction of them becomes a horrific waste of time. Usually it is better to describe the right way of doing things, and then move on.
However, an exception can be made for misconceptions that are particularly common and/or particularly destructive.
Sometimes people who ought to know better present the following list to their students
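(presumably a list of bare two-state corollaries along these lines)

P1/T1 = P2/T2
V1/T1 = V2/T2
P1 V1 = P2 V2
P1 V1/T1 = P2 V2/T2        (14)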
and tell them that in any given scenario, the applicable equation can be selected using the “principle” that the variables not mentioned are being held constant. For example, the first line in list 14 doesn’t mention N or V, so allegedly those variables “must” be considered constant. That is, alas, completely bogus. The first line doesn’t mention number, volume, entropy, enthalpy, free energy, free enthalpy, and/or innumerably many other variables that might be relevant in this-or-that scenario. If you know which variables are being held constant, and look down this list to find an equation that doesn’t mention them, you will almost certainly find an equation that is not applicable.
The just-mentioned procedure has absolutely no basis in physics, mathematics, or logic.
The only way such a procedure can even appear to work is in the fairy-land situation where the person asking the questions and the person answering the questions are both so clueless that they are unaware of scenarios where variables other than N, V, P, and/or T are held constant.
The fundamental problem is that none of the equations in list 14 is, by itself, a law of nature. There are only two ways to repair this situation.
To say the same thing more formally, each equation in list 14 is merely part of a system of three equations ... a different system in each case. You have to learn each system as a whole, which takes us back to a subset of list 11. It is permissible to choose an appropriate line from list 11 ... based not on which variables are missing, but rather on which variables are explicitly held constant by equations on the chosen line.
Copyright © 2006 jsd
One of the most important physiological constraints that has impacted the evolution of the human lineage is thermoregulation. Thermoregulation is any physiological or behavioral mechanism that acts to retain or generate body heat in environments colder than body temperature in order to prevent hypothermia, or that acts to remove body heat in environments warmer than body temperature in order to prevent hyperthermia.
There are two main categories used to describe species with different body temperature systems, poikilotherms and homeotherms. Poikilotherms are organisms whose body temperatures are relatively variable, and change with the changing environmental temperature. Homeotherms are organisms that maintain their body temperature within a very narrow range through a variety of thermoregulatory structural and physiological devices in widely varying environmental temperatures. Homo sapiens is a homeothermic species that must maintain a relatively stable body temperature no matter what environmental conditions individuals are exposed to. Humans differ in many important aspects from other primate species in the structure and function of their thermoregulatory devices, and these changes are most probably related to the evolutionary pressures our ancestors faced at some time in the past.
This chapter will describe the structure and function of the modern human thermoregulatory system, and place it in a comparative context with other closely related primate species. The possible selective pressures that likely faced our ancestors will be discussed in an explanatory framework under which certain features of the modern human condition could have evolved. The chapter will end with a discussion of various hypotheses that have been put forth to answer questions regarding the evolution of the human thermoregulatory system.
Heat Loss and Retention
The function of the thermoregulatory system of a homeothermic organism is to maintain a constant core body temperature under different environmental conditions. This involves mechanisms and adaptations that prevent excessive heat loss or that produce heat in environments colder than the core body temperature, and mechanisms and adaptations that prevent excessive heat gain or that remove heat in environments warmer than the core body temperature. These mechanisms and adaptations involved in thermoregulation act to balance thermal inputs and thermal losses. Thermal input comes from a variety of sources that include the air or water temperature of the environment when the temperature is greater than body temperature, radiant solar energy or radiant energy reflected from sources such as the ground or rocks, and the internal metabolic heat load from physiological functions. Thermal loss comes from a variety of sources as well, including convection to the environment when the temperature is lower than body temperature and evaporation across mucous membranes or through sweating.
The thermal input from convection between the environment and the skin of an individual is an important heat stress when the thermoregulatory system is working to remove heat, but it is not involved in the system to retain heat, as input will only occur if the ambient environmental temperature is greater than the core body temperature. So while not an important heat source when trying to maintain body temperature in the face of excessive heat loss, it is an important stressor upon the thermoregulatory system when faced with excessive heat gain. The second important environmental heat input is that from direct and indirect solar radiation. Solar radiation can be broken up into long-wave and short-wave radiation. Long-wave radiation is absorbed equally regardless of skin or hair color, while short-wave radiation absorption depends on the color of the skin or hair of the organism (the darker the color, the greater the absorption). The heat input from this source can account for a significant proportion of thermal stress in some environments where the organism is being bombarded by direct solar radiation (Wheeler 1991a).
A major source of thermal input is the metabolic heat load from various physiological functions. The metabolic heat load can actually be up to ten times that of the environmental heat load when under sustained aerobic metabolic stress in an intermediate to large bodied homeotherm (Taylor 1977). Sources of metabolic heat include metabolic thermogenesis, contractile thermogenesis, and lipolysis with the oxidation of fatty acids (Chatterjee 1979). Metabolic thermogenesis occurs as glucose or lipids are metabolized in the citric acid cycle or in other metabolic pathways, with waste heat produced as the chemical reactions occur. This source is generally the largest because it involves the greatest mass of metabolically active tissue. The brain also has a very high metabolic requirement and therefore produces a large amount of waste heat through action potential, synaptic, and membrane transport activity. Contractile thermogenesis is the production of frictional heat that occurs with the stretching of elastic muscles and tendons. When a muscle is contracted, only about 22% of the energy produced by metabolic activity goes toward the work of the muscle; the rest appears as heat energy (Sanyal & Maji 2001). Lipolysis is the breakdown of fatty tissue with a resulting production of metabolic heat. This is extremely important in colder environments, and is an important process when considering the evolution of human heat stress adaptations and the resulting weakness to cold stress.
Just as convective heat gain from the environment is less of a thermoregulatory input than it is a stress on the system, convective heat loss to the environment is a stress on the thermoregulatory system when the organism needs to retain heat in colder environments. The heat transfer across the skin to the environment can be great, and is an important factor in human thermoregulation. Besides convection with the environment, evaporation is the only heat loss mechanism available physiologically to humans. Evaporation takes place in the lungs and nasal chambers across mucous membranes in many homeotherms, and some behavioral adaptations like saliva spreading can also increase the evaporative cooling efficiency of an organism (Robertshaw 1985). In humans, the greater proportion of evaporative cooling occurs through sweating, and very little evaporative heat loss occurs through respiration due to the small nasal chamber and the associated reduction in the area of mucous membrane available for evaporation to occur.
Thermoregulatory Adaptations and Behaviors
A variety of adaptations, physiological devices, and behaviors have been noted to address the thermoregulatory priorities of mammals. All are important in the understanding of human thermoregulation and its evolution history. The important mechanisms that must be discussed when addressing the issue of human thermoregulation are body proportions, metabolic regulation, vascular compensation, body fat, behavioral modification or cultural practices, body hair, and sweating.
The adaptation of body proportions to environmental conditions (ecogeographical patterning) is a well-established principle of evolution. This premise is based on early work by Bergmann (1847), who noted that populations of the same homeothermic species will differ in average weight, with those in colder climates heavier than those in warmer climates, and Allen (1877), who noted that populations of the same homeothermic species will differ in average limb length, with those in colder climates having shorter limbs than those in warmer climates. These rules are special cases of the general relationship between surface area, body mass, and ambient environmental temperature. Fourier's Law of Heat Conduction indicates that the rate of heat loss from the core of an animal is directly proportional to the surface area available for heat conduction to the environment and inversely proportional to the distance that must be traveled from the core to the surface. If an object increases in size while keeping the same shape, the volume will increase at a much greater rate than the surface area, which means that the surface-area-to-volume ratio will decrease.
Since heat loss is directly proportional to the surface area and inversely proportional to the volume an increase in body size will decrease the rate of heat loss (as a proportion of body heat) through convection across the skin/environment gradient (Ruff 1991). This relationship has been studied in many living animals (including humans) and is particularly robust (Riesenfeld 1981; Holliday 1995). This relationship also means that with increasing body size, the removal of body heat becomes more important than the retention of body heat, because less and less heat will be lost with increasing size. This is an important implication for human evolution because humans and their hominid ancestors are relatively large mammals.
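A minimal sketch (not from the original text) of this surface-area-to-volume relationship, using a sphere as a crude stand-in for body shape; the radii chosen are arbitrary:

```python
# The surface-to-volume ratio of a sphere equals 3/r, so it falls as body size
# increases -- the geometric basis of Bergmann's rule described above.

import math

def surface_to_volume(radius_m):
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / volume

for r in (0.05, 0.10, 0.25, 0.50):   # arbitrary "body" radii in meters
    print(f"radius {r:.2f} m -> surface/volume = {surface_to_volume(r):.1f} per meter")
```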
Metabolic regulation of body heat is an important aspect of the human physiological thermoregulatory response. In cold environments the metabolism will speed up, causing an increase in metabolic thermogenesis, and conversely, in warmer environments the metabolism will slow down, causing a decrease in thermogenesis. The level and efficiency of the metabolism is regulated by several hormonal and neuro-regulatory mechanisms. The thyroid gland can both increase and decrease the metabolic thermogenesis by regulating the release of thyroxine into the body, producing both short-term and long-term changes in metabolic thermogenesis. In addition, release of adrenaline and nor-adrenaline from the adrenal medulla can also cause increased thermogenesis. The neuro-regulatory system also can initiate reflex shivering in muscle tissue, which is the activation of muscle tissue in continuous transient contraction-relaxation cycles that produces waste heat (Hanna & Brown 1983; Senay et al. 1976; Bass et al. 1955; Lind & Bass 1963).
The human vascular system has developed reactions to both heat stress and cold stress. The skin has a system of thermal receptors that perceive temperature of the skin and send signals carrying this information to the autonomous nervous system. When the body perceives increased heat loss through the skin, vasoconstriction of the peripheral vascular system occurs to decrease the blood flow (which carries heat from the core to the surface). In humans this vasoconstriction can reduce heat loss by 1/6 to 1/3 depending on the individual and the acclimation of the individual to cold stress. When the body perceives a need for increased heat loss, vasodilation occurs, with increased blood flow to the peripheral vascular system. This vasodilation increases the rate of heat transfer from the core to the surface, and is also an important feature involved with sweating. This vascular compensation has associated changes in the internal vascular system, and in vasoconstriction, increased arterial pressure occurs due to the reduced volume of vessels with the maintained blood volume. In vasodilation, constriction of the internal vessels occurs that can increase an individual’s susceptibility to postural hypotension, and can lead to transient cerebral ischemia (Senay et al. 1976; Bass et al. 1955; Lind & Bass 1963).
Body fat is located under the dermis in a subcutaneous layer. This fat can have several functions in mammals that may or may not be related to thermoregulation. Adipose tissue can act as a source of stored energy, thermal insulation, and a source of chemically produced heat. In some water mammals an insulating layer of blubber has developed that prevents excessive heat loss from the body to the environment, but the functional significance of the fat layer in humans as an insulating device is negligible. In humans the fat layer acts as a source of stored energy, and with regard to thermoregulation, acts as a source of thermal heat through lipolysis. When the fatty acids of this tissue are oxidized to produce energy, a large quantity of heat is produced. The development of a thick layer of subcutaneous fat may be an adaptation to the loss of body hair and the potential cold stress that the human ancestor would have experienced, or may simply be a feature of the behavioral capacity of humans to get access to food.
Behavioral modifications can include any behavioral feature that can mitigate heat loss or produce heat in a cold environment or mitigate heat gain or release heat in a warmer environment. These practices can include simply staying out of the sun during the hottest part of the day, seeking shade when moving through a hot environment during the day, or building nests or seeking sheltered areas in cold environments (Wheeler 1994; Napier & Napier 1985). Cultural practices might include things such as the use of fire, the production of clothing, the building of shelters, or other human cultural practices (Kushlan 1985). These cultural and behavioral features are important ways in which humans and their ancestors dealt with – or could have dealt with – heat or cold stress, but these features can be very variable and their effectiveness and presence are hard to identify among fossil species.
Body hair is one of the most important features of the mammalian thermoregulatory system. Hair can act as an important heat retention and heat exclusion device in mammals. By trapping a layer of dead air against the skin, a layer of hair acts as an extremely efficient insulator, reducing the rate of convective heat loss to the environment. The same system also prevents heat gain from the environment by the same principle, using the layer of dead air to reduce the rate of convective heat gain from the environment to the skin. Besides insulation, the layer of hair on mammals is important in reflecting direct and indirect solar radiation, and can thus act to reduce heat gain from the environment in two ways.
Sweating is one of the thermoregulatory traits that set humans apart from most other primates. Sweating acts as a heat loss mechanism through evaporation. The sweat on the surface of the skin is mostly water and has a high specific heat. Heat is removed from the skin by conduction to the sweat until the sweat is either evaporated or sloughed off. Sweating is most effective when the sweat actually evaporates, since evaporation removes far more heat per unit of sweat produced. Sweating is highly adaptive in modern humans and is associated with the development of eccrine glands throughout the skin, the reduction of body hair, and vascular compensation (Montagna 1985). Eccrine glands are specialized sweat glands that produce large quantities of sweat that is mostly water, and they have coevolved with the loss of body hair to create a more effective evaporative heat loss mechanism. Vasodilation increases the blood flow from the core to the periphery, increasing the rate of heat transfer to the skin, where the heat is then removed through the evaporation of sweat (Ebling 1985). While all of the mechanisms for controlling body temperature mentioned here are important when considering the evolutionary perspective of human thermoregulation, the loss of body hair and the coincident evolution of eccrine glands throughout the body deserve more detailed description.
The Development and Structure of Hair
A major difference between humans and other primates is the great reduction of body hair in humans relative to other primates. The question is often framed as the loss of human hair, but this framing is not strictly accurate. Modern humans are not hairless in the sense of being glabrous; rather, they have undergone a significant reduction in body hair length and thickness relative to other primates. The only hair structure that has been truly lost in the human condition is the vibrissae, the sensory whiskers common to most mammals (Van Horn 1970). The other categories of hair are seen commonly in humans and are developmentally no different from those of the other African apes. Human hair can be divided into three main categories: lanugo, vellus, and terminal hair. Lanugo hair is the first hair produced by the developing hair follicles during prenatal development. This hair forms subdermally by the third or fourth month of fetal development, and by the fifth month the hair structure has appeared on the external surface of the skin of the fetus. This hair tends to be long, unpigmented, and very fine, and is generally shed by the eighth month of fetal development, though it may persist until a month or more after birth (Gray 1974).
The first hair that is produced during the post-natal life of an individual is vellus hair. The vellus hair is generally short, unpigmented, very fine, and unmedullated. This is the hair that primarily covers the human body in most individuals, the hair that covers the so-called “hairless” parts of the skin, such as the forehead or the nose. This hair may also replace terminal hair in individuals with androgenic alopecia, the typical form of baldness that is seen in many human beings. Terminal hair tends to be long, coarse, pigmented, and medullated. Among humans, terminal hair covers the scalp, axillae, pubic area, and the face and chest of some males. Hair follicles are capable of producing either vellus or terminal hair, and under hormonal stimulation can switch from producing one to the other (e.g. during puberty, when vellus hair in the pubic region begins to be replaced by terminal hair, or in baldness when vellus hair replaces terminal hair) (Montagna 1976).
The embryogenesis of hair follicles and hair is an important consideration in the discussion of the loss of the human pelage, as it is during this development that the apocrine gland system develops, the implications of which will be discussed in detail later. The development of the hair follicle begins with the establishment of the dermal papilla (DP). The DP is a group of specialized dermal fibroblast cells that are derived from the mesoderm and begin to aggregate below the epidermal layer. In the epidermal layer, an epidermal plug forms above the DP and grows into the dermis toward the DP as the cells proliferate. The plug begins to differentiate into three distinct cell buds as the proliferation of the cells progresses. The cell bud closest to the epidermal layer may develop into an apocrine gland, or will gradually regress as the hair follicle matures.
The cell bud in the middle gradually develops into the sebaceous gland, while the final cell bud below forms what is called the “bulge”. This bulge on the hair follicle is the site of attachment for the arrector pili muscle (the muscle that pulls the hair perpendicular to the skin when one gets “goose bumps”), which develops separately from the hair follicle and eventually attaches to this site. As the epidermal plug penetrates further into the dermis, mesodermal cells begin to surround the plug and develop into the fibrous follicular sheath that surrounds the epidermal cells. As the epidermal plug differentiates and migrates into the dermis, the DP develops into a structure of rounded cells containing organelles vital for product synthesis, and its cells become non-proliferative. The DP cells then communicate with the epidermal plug, effecting differentiation into concentric layers that will eventually become the hair fiber and the inner and outer root sheaths encasing the hair. These cells keratinize and die higher up in the layers while the deeper cells continue to proliferate and force the keratinized hair fiber to extrude from the surface of the skin (Holbrook 1991).
The growth of the hair itself within the follicle follows a cyclical pattern. There are three main stages of human hair growth: anagen, catagen, and telogen. The anagen phase is further subdivided into proanagen, mesanagen, and metanagen. Anagen is the active growth phase of the hair, and the state in which any individual hair spends most of its life cycle. In the proanagen phase, growth is initiated with RNA and DNA synthesis in a follicle, which then quickly progresses through mesanagen to metanagen, at which point maximum hair length and thickness are achieved. In this mature state of proliferation and differentiation the hair follicle is made up of eight concentric layers, and melanogenesis occurs within pigmented hair follicles. The anagen phase is followed by catagen, a period of controlled regression of the hair follicle in which the dermal papilla migrates upward and the growth of hair is reduced. This results in a weaker fit of the hair in the root sheath and a thinner hair. Finally, the hair follicle enters the telogen phase, a resting state in which little or no growth occurs and the hair will fall out or can easily be pulled out. At this stage a new dermal papilla begins forming in the dermis and a new hair begins to grow, forcing the old hair out of the skin if it has not already been removed (Moretti et al. 1976). In many mammals, adjacent hairs go through these cycles in waves (the loss or gain of the “coat” in particular climatic seasons), but in humans each hair proceeds through these stages independently of any other follicle, and at any one moment approximately 90% of scalp hairs are in the anagen phase and 10% are in the telogen phase (Kligman 1988).
The Sweating Mechanism in Humans
The sweating mechanism of modern humans is the single most important thermoregulatory device available to reduce the heat load on the body, and it likely coevolved with the loss of body hair in the human lineage. Sweating is a thermoregulatory mechanism of modern humans that effectively removes body heat through evaporation. It becomes extremely effective in the absence of heavy body hair, and can actually be maladaptive in the presence of heavy hair cover. The structure and function of the human sweat glands share some common aspects with the African apes (though much less with other closely related primates such as the orangutan), and also have some relatively unique features. As in most primates, the human system of surface excretory glands includes three structural forms: sebaceous, apocrine, and eccrine glands. Sebaceous glands are tied to every hair follicle and produce small amounts of oils and fluids that maintain the suppleness of skin and hair fibers. These glands are important in the production of odors at the axillae and pubic regions (most likely tied to sexual selection), but have little to no significance in terms of sweat volume or efficiency. Apocrine glands also develop in association with the hair follicle and secrete by the rupture of cell apices at their luminal margins, releasing fluids containing cellular components. These cellular components are broken down by the bacteria of the microenvironment of the axillary areas of the body, producing the body odors of humans. Eccrine glands develop independently of the hair follicles and produce large quantities of sweat composed primarily of water and mineral salts. Sweat from these glands does not produce odor, and it is the primary cooling mechanism of modern humans through evaporative cooling (Ebling 1985; Montagna 1985).
The distribution of these glands along the body follows a very regular pattern. Sebaceous glands are found in association with all or nearly all hair follicles and have no known functional significance in thermoregulation. Apocrine glands are found in the axillary regions (the pubis, the perianal region, and the axillae) in humans, whereas in most non-human primates (excluding gorillas and chimpanzees) they are found throughout the entire body. It is important to note that in the human fetus apocrine glands begin to form all over the body in association with hair follicles, but are mostly resorbed into the body during development. Eccrine glands are found over the entire surface of the body, in both hairy and non-hairy areas, and have no developmental tie to individual hair follicles. In most non-human primates (again excluding gorillas and chimpanzees) eccrine glands are found only on surfaces used in locomotion (the volar surfaces of the hands and feet, and among the dermatoglyphics found on places of high friction, such as the tails of prehensile species or the knuckle pads of knuckle-walkers). There are two phylogenetically distinguishable types of eccrine glands: those found in the friction areas of most species of primates and many other mammals, and those found in humans, chimpanzees, and gorillas over the entire body. In humans, the eccrine glands of the volar surfaces of the hands and feet begin to develop around three-and-a-half months of fetal development, and the remaining eccrine glands begin forming separately around five-and-a-half months (Montagna 1985; Robertshaw 1985).
The sweat glands themselves also exude their secretions at different rates and under different pressures. The sebaceous glands continuously produce secretions to maintain the hair and skin surface, the apocrine glands produce continuously in the axillary regions, and the eccrine glands of the friction surfaces also produce relatively constantly at a low background rate, whereas the eccrine glands spread throughout the body can either produce no secretions over extended periods of time or produce vast quantities of sweat (more than twice as much as any other mammal). These secretions are controlled in two ways: adrenergic and cholinergic stimulation. The apocrine glands and the eccrine glands of the friction surfaces are controlled by adrenergic sympathetic nerves and are stimulated by emotional stress (associated with higher production of hormones like adrenaline), while the eccrine glands of the rest of the body are controlled by cholinergic nerves and can be stimulated by heat stress. This system is relatively constant in non-human primates (Robertshaw 1985).
The functional aspects of the various glands also differ. The sebaceous glands – as mentioned previously – act as lubricants for the skin and hair, maintaining moisture content and preventing the skin and hair from drying out and cracking. The apocrine glands in humans (and gorillas and chimpanzees) seem to have a sexual function, producing odors. This occurs through the combination of the sebaceous, apocrine, and eccrine glands. In the axillary areas the sebaceous and apocrine glands produce a constant stream of secretions of high cellular content. When the eccrine glands are stimulated to produce sweat, the sweat mixes with the secretions of the sebaceous and apocrine glands and spreads throughout the axillary area. In the moist microenvironment of these areas, bacteria are abundant on the skin and the hair, and when given access to the organically saturated sweat they begin to break down the cellular components, with body odor as the result of this process. The sweat secretions themselves are odorless. This mechanism occurs only in humans and the African apes, and does not occur in orangutans or other non-human primates (which usually have a scent-producing organ in a sternal pit above the manubrium). Since this mechanism only begins working after puberty (apocrine glands are primarily non-functional in prepubescent humans), it seems likely that it is a product of sexual selection and has a function in mate selection rather than any sort of friction-reducing function (Montagna 1985; Hanson & Montagna 1962; Perkins & Machida 1967). The eccrine glands that cover most of the skin of humans seem to have a single function, thermoregulation through evaporative heat loss, and this function is apparent only in humans. In chimpanzees and gorillas the eccrine glands on the body are non-functional and do not react to thermal stress.
Evaporative heat loss through the eccrine gland system in humans is extremely effective at removing unwanted body heat. It occurs through the secretion of sweat primarily made of water, the transfer of heat from the skin (which is kept artificially warm by the highly developed vascular system that moves heated blood from the body core to the surface at high rates) into the sweat, and the evaporation of that sweat into the air; sweat that is sloughed off before evaporating carries away far less heat. Effective sweating requires as little hair cover as possible, since it needs air contact (particularly moving air) over the skin to remove the heated sweat. In an individual with a prodigious sweating mechanism and dense hair cover, the heated sweat will generally be retained by the hair cover and actually begin to act as insulation preventing heat loss, leading to hyperthermia. Non-evaporative conduction becomes less and less effective with increased body size (due to the decreasing surface-area-to-volume ratio) and with increased ambient temperature (due to a lower temperature difference between the environment and the organism), making evaporative heat loss through sweating an extremely important adaptation in the relatively large-bodied ancestors of humans in the warm climate of Africa (Robertshaw 1985; Schwartz and Rosenblum 1981).
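As a rough illustration of why evaporation (rather than dripping) matters, the following back-of-the-envelope calculation uses the standard latent heat of vaporization of water at skin temperature (about 2.4 kJ per gram); the 500 W heat load is an assumed figure chosen only for illustration and is not taken from the sources cited above.

```python
# Illustrative calculation: sweat that must evaporate to shed an assumed heat load.
LATENT_HEAT_KJ_PER_G = 2.4      # approximate latent heat of vaporization at ~35 C
heat_load_watts = 500.0         # assumed sustained heat production plus heat gain

grams_per_second = (heat_load_watts / 1000.0) / LATENT_HEAT_KJ_PER_G
liters_per_hour = grams_per_second * 3600.0 / 1000.0
print(f"~{liters_per_hour:.2f} L of sweat must evaporate per hour")  # about 0.75 L/h

# Sweat that drips off before evaporating removes only the small amount of heat it
# absorbed by conduction, which is why hair cover that traps sweat against the skin
# undermines the whole mechanism.
```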
Although hair can impede heat loss through sweating, it is important in thermoregulation for maintaining body heat in colder environments or at night and for reflecting solar radiation away from the body. Dense hair cover is very efficient for both of these purposes, and it is an important question why selection for increased sweating efficiency would outweigh selection for the heat retention and radiation-reflecting functions of hair cover. Dense hair cover is an effective heat retention device, but only in smaller animals. As animals increase in size, the effectiveness of hair cover decreases. This is due to the reduced ratio of skin area to volume as mass increases. As body size increases, the amount of metabolically derived heat increases dramatically, but the ability of the organism to lose this heat is retarded by the decreased surface-area-to-volume ratio. Thus, the percentage of heat lost to the environment by conduction decreases, simply because the organism loses the ability to shed heat as size increases. This means that hair cover becomes less and less evolutionarily meaningful for the retention of body heat. The corollary is that as body size increases and the metabolic heat load increases, there is an increased need for mechanisms to remove heat in hotter environments or in periods of high metabolic heat production. So as body size increases, dense hair becomes less and less effective at retaining body heat, and more and more maladaptive for removing body heat (Schwartz & Rosenblum 1981; Robertshaw 1985).
The obvious solution to this situation is decreased body hair with increasing body size, which is exactly what is seen in anthropoids. When the number of hair follicles per unit of area is compared with body size, all primates (including humans) fit along a regular log-linear regression line, along which the density of hair per unit of area decreases as body size increases. Species like chimpanzees and gorillas have relatively fewer hair follicles per unit area of skin compared to the smaller monkeys. Humans fall along this line, and have a relative hair density almost the same as that seen in chimpanzees, gorillas, and orangutans. The difference between the thick pelage of the great apes and humans is not in the density of hair, but in its length and thickness and in the production of vellus hair in most humans to the exclusion of terminal hair on the body. Humans are not “hairless”, but are merely covered by thinner, shorter, and unpigmented hair (Schwartz & Rosenblum 1981; Schultz 1931).
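The kind of allometric comparison described here can be sketched as a log-linear (power-law) fit. The numbers below are hypothetical placeholders, not the primate values published by Schwartz & Rosenblum (1981); the sketch only shows the form of the analysis.

```python
# Minimal sketch of a log-linear allometric fit of hair-follicle density to body mass.
# All data values are hypothetical and for illustration only.
import numpy as np

body_mass_kg = np.array([0.5, 3.0, 10.0, 35.0, 60.0, 120.0])          # hypothetical
follicles_per_cm2 = np.array([500.0, 250.0, 150.0, 90.0, 70.0, 55.0])  # hypothetical

slope, intercept = np.polyfit(np.log10(body_mass_kg),
                              np.log10(follicles_per_cm2), 1)
print(f"density ~ mass^{slope:.2f}")  # a negative exponent: density falls with size

# A species falling on this regression line (as humans do) has the follicle density
# "expected" for its body size; human distinctiveness lies in hair length and
# thickness, not in follicle number.
```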
The effect of solar radiation reflectance as a selection pressure to maintain body hair has been considered many times, but a satisfactory answer has not been provided. It is clear that thick body hair protects most mammals by reflecting solar radiation, and some theories have tried to account for this (Wheeler 1991a), but it seems likely that the reduction of hair occurred because competing pressures were stronger, even in the face of this pressure to maintain body hair for reflective purposes. Attempts to quantify the relative importance of reflective hair in the question of human hair loss may address a false problem: we know that humans have little hair, and it is generally agreed that our ancestors at some point had a heavier coat of hair. It may be less important to ask why hair loss developed in the face of this particular pressure to maintain hair than to ask what other pressures could have caused the hair to be selected against.
Expectations of a Putative Human Ancestral State
The preceding description of the modern human thermoregulatory system and the structure and function of its individual parts allows the reconstruction of an expected ancestral state when one compares the differences between humans and their closely related primate species. It is generally assumed that shared descent is more reasonable than parallel development, and from this assumption of parsimony (keeping in mind the chance of parallelism inherent in closely related species inhabiting similar environments or facing similar selection pressures) several points clearly present themselves.
The presence of phylogenetically distinct and functionally and structurally differing sweat glands within humans and several closely related species indicates that the distribution of eccrine glands (and the associated reduction of apocrine glands throughout the body) can be traced back to the common ancestor of chimpanzees, gorillas, and humans; that it can be traced back to the common ancestor of chimpanzees and humans with a parallel development in Gorilla (or common ancestry for humans and gorillas with a separate origin in Pan, depending on the phylogenetic tree used); or that it has a separate origin in all three lineages. Under parsimony, the least likely scenario would be completely independent formation of the eccrine distribution, but it should be noted that in a species of tree shrew an eccrine system has developed independently, and therefore independent origin is not at all impossible. The intermediate scenario – a shared derived state in two lineages and an independent origin in the third – is more parsimonious than three-way independent origin, but it is unclear whether it is in reality more likely: if the selective pressures under which this trait developed were common to all lineages and the trait arose independently in at least two lineages within a relatively short evolutionary timeframe, two origins may be no more probable than three. The most parsimonious explanation is a single origin of the trait and a shared derived status of the eccrine distribution in the three African lineages. Parsimony leads to the most logical choice in terms of probabilities within a sterile mathematical phylogeny, but does not always provide the most likely scenario, which must be determined from the whole of the available evidence.
The reduction of body hair thickness and length in humans relative to chimpanzees and gorillas indicates that it is most parsimonious that hair reduction occurred in the human lineage after the Homo-Pan split. The relative density of hair follicles per unit of skin is approximately the same in humans, chimpanzees, and gorillas. This is not necessarily surprising, since with increasing body size, and therefore increasing skin surface area, one would expect a reduction in density scaled to body size. When a species gains size it does not grow extra limbs, so why would it grow extra hair follicles? When put in the perspective of a human ancestral state, it is therefore likely that any ancestor of the three lineages back through their common ancestor had a similar relative hair density, and any difference in perceived pelage density would be a result of individual hair thickness and length. From the description of the growth and development of hair in humans, it is clear that differences in thickness and length are the result of differing growth rates, differing lengths of the growth cycle, and possibly different proportions of time spent in each phase of the hair life cycle. Since wide variation exists in the perceived body hair of modern humans (as well as between individuals in chimpanzee and gorilla populations), it seems likely that the relative “hairlessness” or “hairiness” of populations is extremely malleable under evolutionary pressures. Reduction in human body hair did not require a complicated, dangerous, or unlikely change to the genetic controls for hair growth, but simply normal natural selection on variation within the traits that control hair growth. Therefore, while the most parsimonious explanation of human body hair reduction is occurrence after the Homo-Pan split, it is not inconceivable that it could have happened earlier with an associated “reversal” in one or both of the Pan or Gorilla lineages.
The placement and function of apocrine glands in the axillae and pubis occur in humans, chimpanzees, and gorillas. However, this arrangement does not occur in other closely related anthropoids such as the orangutan. Putting aside the question of the functional significance of this arrangement, there is a clear distinction between the pattern of apocrine gland distribution in the Homo-Pan-Gorilla clade and all other closely related anthropoids. This indicates that the ancestral state of the human lineage was likely the same, and that whatever adaptation or neutrally selected change occurred in the past did so prior to the split of these three lineages. While of very little thermoregulatory importance, this common functional and structural distribution is important in relation to the selection against a general apocrine gland distribution throughout the body and the associated radiation of eccrine glands throughout these areas. Any theory that invokes a thermoregulatory explanation needs to address this issue along with the points pertinent to the particular theory.
The thick subcutaneous fat layer of modern humans is another “unique” feature relative to closely related non-human primates, and should be considered in any theory that invokes a thermoregulatory explanation. However, it is hard to say whether this is a true problem or a false one, as humans do not have any more points of fat production than other primates, but rather have more fat developed from these individual centers. This can be seen as simply an effect of the ability of humans to procure food sources much greater than their needs with relative ease. In this manner, the question of subcutaneous fat may be a red herring with regard to thermoregulation. Nevertheless, the physiological impact of this fat layer would still influence any selective pressures for changes in the features related to thermoregulation, and as such, the fat layer and its features should at least be accounted for by any particular theory.
Theories of Human Evolution and Thermoregulation
A large variety of hypotheses has been proposed that deal with thermoregulation either as a first principle or as a change caused by some other development. These arguments range from just-so stories to parsimonious explications to non-falsifiable single-cause arguments used to explain a wide range of features. Each theory will be examined in terms of how well it deals with the points above about the expectations of an ancestral state, the validity of its explanations for the various thermoregulatory features present in the human condition, and a general assessment of how reasonable the hypothesis is. Examination of each theory is limited to the points relevant to thermoregulation.
Aquatic Ape Hypothesis: The Aquatic Ape Hypothesis belongs to the category of theories that Langdon calls “umbrella hypotheses”, meaning theories that attempt to explain everything about the modern human condition and human evolution through a primary cause (Langdon 1997). This theory has its roots in an article published in New Scientist in 1960 by marine biologist Alister Hardy. Elaine Morgan developed the theory through the publication of several popular feminist books and essays in the 1970s, 1980s and 1990s (Morgan 1972, 1982, 1990). The basic premise of this theory is that a human ancestor at some stage went through a semi-aquatic phase during which the reduction of body hair, the development of subcutaneous fat, the development of eccrine glands, the reduction in apocrine glands, and virtually every other difference between modern humans and extant primates developed. The critique of this hypothesis here will focus on the specific adaptations related to thermoregulation.
In general, the lines of logic proceed in the following fashion. The thermoregulatory problems of a water environment are much different from those of a terrestrial environment, leading to a series of unique adaptive solutions in the human lineage with respect to closely related extant primate species. The reduction of body hair in humans is said to have occurred as a consequence of its uselessness for insulation in the water environment. This is a problematic statement, especially when placed in the context of the aquatic ape as “semi-aquatic”, since hair loss in water mammals occurs as a result of a full-time aquatic existence. Hair would be an effective insulator as long as air was trapped in the hair; only after extended periods in water would the hair become completely soaked and useless for insulation. In fact, most semi-aquatic mammals (i.e., those that spend time on land as well as in the water) retain their body hair and have developed extremely dense fur cover. Full-time aquatic mammals such as cetaceans and extremely large-bodied semi-aquatic mammals have lost or greatly reduced their body hair, but all these animals are large bodied, and similarly sized terrestrial mammals often lose their body hair as well.
The presence of a subcutaneous fat layer in humans is explained as both an adaptation for insulation and for buoyancy. While this is functionally correct to some extent in humans, the idea lacks explanatory power in the face of equally likely or even more likely explanations that have been put forth for the same phenomenon. It is highly unlikely that in a fit individual the fat layer contributes enough buoyancy to be selectively advantageous. As well, the thickness of the fat layer in a fit individual makes its utility in insulating against the high rate of convective heat loss to the water environment questionable. At best, the subcutaneous fat layer fits the aquatic model, but it does not provide independent evidence for it, as there are equally likely explanations available.
The development of eccrine glands is explained by the aquatic model as a mechanism for removing excess salt from the body in an environment where salt would be plentiful and over-consumed (a marine environment). This explanation is especially problematic. There are very few environments where both fresh water and salt are independently available in large quantities. Freshwater animals must conserve salt, while marine animals must excrete excess salt from their bodies while retaining water. Dehydration in a marine mammal is as dangerous as in a terrestrial mammal. Morgan speculates that humans once secreted a sweat hypertonic relative to blood plasma in order to remove salt while retaining water (Morgan 1990). However, the aquatic model neglects to explain why human sweat is now hypotonic, with active recovery of salt in the eccrine gland before sweat is secreted – in essence removing large amounts of water with relatively little salt content. The same quandary for the aquatic hypothesis is seen in human urine, which is much more dilute than that of other mammals, meaning even more water is lost through the urinary system. Some adherents of the aquatic model have claimed that humans must have inhabited both marine and freshwater habitats simultaneously or successively (Verhaegen 1985), but this would effectively involve movement between three environments, making the model even less parsimonious when terrestrially based theories can explain the same phenomena with only one major environmental milieu.
The function and distribution of eccrine glands are completely contradictory to the aquatic model. Attempts to fit the data to the model by appealing to unexplained and unproven evolutionary changes in the human line since the putative aquatic phase are neither reasonable nor robust by any objective way of looking at the problem. The development of eccrine glands in humans for the removal of excess salt is also problematic in light of the eccrine distribution in chimpanzees and gorillas. There is no argument for an aquatic phase in these lineages, and yet they show an adaptation that supposedly developed in the human lineage for the express purpose of dealing with a problem unique to a marine existence. In addition, Morgan claimed that humans do not have an innate “salt hunger”, which would be expected in a species that developed in an environment where salt is in oversupply. However, this is clearly not the case, and humans show an extremely high sensitivity to declining salt intake (Denton 1982). The aquatic model does not fit the available data more parsimoniously or reasonably than any theory explaining these features on the basis of the thermoregulatory advantage of sweating.
The aquatic model explains the reduction in apocrine glands over the body as a function of reduced sexual selection. The logic of this argument is that apocrine glands secrete pheromones that would have been washed off in the water, and thus would have lost their significance in an evolutionary perspective. This explanation does not fit the actual distribution and function of apocrine glands in humans and other primates. It is true that humans have a reduced distribution and number of apocrine glands over the body; however, this trait is held in common with chimpanzees and gorillas. Moreover, it is in precisely these species that apocrine glands may function as sexual signals. Most other primates – including other closely related anthropoids such as orangutans – have specialized structures to release pheromones, such as sternal pits above the manubrium, rather than generalized apocrine glands distributed over the body acting as pheromone producers. In yet another instance the aquatic model does not realistically address the physiological realities related to structures involved in thermoregulation. Regardless of the parsimonious nature – or lack thereof – of other theories, the aquatic ape hypothesis does not in any way adequately explain these features of the human thermoregulatory suite of mechanisms.
Bipedality: In the 1980s, a hypothesis was developed through a series of articles by Peter Wheeler that attempted to explain the human sweating mechanism and hairlessness through the first cause of bipedality (Wheeler 1984, 1985, 1991a, 1991b, 1992, 1994). Wheeler argued that a bipedal hominid would have a significant thermoregulatory advantage over a quadrupedal hominid (Wheeler applied the term “hominid” improperly to pre-hominid quadrupeds as well as hominids sensu stricto) due to decreased exposure to direct and indirect solar radiation and increased exposure of skin surface area to wind, which would increase the selective value of the sweating mechanism. This thermoregulatory advantage was argued to be great enough for natural selection to drive hominid evolution toward bipedalism. The loss of body hair and the development of an effective evaporative sweating mechanism were seen as adaptations that became advantageous only after the change from quadrupedalism to bipedalism. The primary problem with discussing this model is that Wheeler documented the thermoregulatory benefits of bipedalism to an already bipedal hominid, and then those benefits were used as the cause of bipedality itself, with the associated thermoregulatory mechanisms contingent upon bipedalism. This logic, like all circular reasoning, is not clearly falsifiable as an explanatory method, but only in terms of the truth of the premise. In this case the premise (the thermoregulatory advantage of bipedalism through the reduction in direct solar radiation) is generally accepted, and therefore the hypothesis cannot be falsified in terms of the logic of the relationship. However, the argument for the cause of bipedality is not the purpose of this chapter, and this analysis will focus on the claim that bipedalism is a prerequisite for hair loss in the human lineage.
Wheeler proposes that bipedalism is an adaptation to a savanna environment, where its thermoregulatory advantage would lead to its evolution; after this change, the loss of body hair would become advantageous. Wheeler further claimed that a naked biped would have reduced water requirements during normal metabolic activity on the African savanna. Since quadrupedalism offers no such advantage to an individual with reduced hair cover, bipedalism was held to be a prerequisite for hair loss, providing a thermoregulatory advantage that would have made hair loss selectively adaptive. However, there are several problems with the analysis and interpretation of the data presented by Wheeler.
Wheeler’s calculations of the heat load on a bipedal versus a quadrupedal animal are obfuscated by the use of the metabolic load of an inert animal rather than that of an animal in motion (Chaplin et al. 1994). This way of determining the metabolic component of heat stress effectively inflated the hypothetical advantage of bipedalism in a hot environment by making environmental heat load the main determinant of heat stress. On that assumption, the animal’s own activity would make little difference to its heat stress, leaving any hypothesis built on the thermoregulatory advantage of a bipedal posture looking strongly supported by default. However, maximal exercise heat loads created by high metabolic output in medium-sized mammals (>10 kg) can reach up to ten times the level of the environmental heat load (Taylor 1977). This means that the relative advantage in foraging time available to an animal through bipedal rather than quadrupedal locomotion is greatly reduced compared to calculations based on an inert metabolism. While this greatly reduces the thermoregulatory advantage of bipedalism compared to what was claimed, there is still an advantage in the reduction of environmental thermal stress.
Bipedalism was claimed as a necessary pre-adaptation for the loss of body hair in the human lineage. However, the question was approached from the perspective of the relative advantage of bipedalism versus quadrupedalism for an animal with reduced hair cover. Since the calculations showed that a naked biped was more efficient than a naked quadruped, bipedalism was claimed as a necessary step prior to denudation. However, Wheeler neglected to address in his calculations whether a naked biped would have an advantage over a haired biped. This question is implicitly answered by comparing the thermoregulatory efficiency of a haired biped versus a haired quadruped and of a naked biped versus a haired quadruped. The comparison is faulty, because Wheeler assumes a maximum heat dissipation of 100 W/m² for a haired primate versus a maximum of 500 W/m² for a modern human. The human level of heat dissipation is achieved through evaporative sweating, which would have developed in concert with the loss of body hair, and thus cannot be used as the level of heat dissipation that would make developing bipedality advantageous. Also, the 100 W/m² figure is too low, since it is a measure from forest-dwelling primates, and there is a savanna-adapted primate with a heat dissipation of over 200 W/m² (Mahoney 1980). A scenario that makes bipedalism a necessary pre-adaptation for the loss of body hair means that bipedalism would have arisen in the absence of the sweating mechanism, and thus this proto-biped would not have had the heat-dissipative capabilities of a modern human.
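The force of this objection can be made concrete with simple arithmetic. The W/m² figures are the ones discussed above; the body surface area (about 1.8 m²) and the 700 W sustained exercise heat load are assumed values chosen only for illustration.

```python
# Illustrative comparison of whole-body heat dissipation capacity versus an assumed
# sustained exercise heat load. Surface area and exercise load are assumptions.
BODY_SURFACE_M2 = 1.8
exercise_heat_load_w = 700.0   # assumed sustained heat production while active

for label, w_per_m2 in [("forest primate figure used by Wheeler", 100),
                        ("savanna patas monkey (Mahoney 1980)", 200),
                        ("modern human with full sweating", 500)]:
    capacity_w = w_per_m2 * BODY_SURFACE_M2
    verdict = "covers" if capacity_w >= exercise_heat_load_w else "falls short of"
    print(f"{label}: {capacity_w:.0f} W, {verdict} a {exercise_heat_load_w:.0f} W load")
```

Only the fully modern sweating capacity covers a heavy activity load in this toy comparison, which is the point of the objection: a proto-biped without the sweating mechanism cannot be credited with modern human heat dissipation.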
When this factor is accounted for, the putatively greater heat load capacity of the naked skin of a newly developed biped over denser body hair is negated. Amaral (1996) showed that the thermal stress on naked skin is up to three times greater at higher temperatures than on hair-covered skin. In fact, this is exactly why other savanna primates have a dense coat of fur that is even more developed than that of forest-dwelling primates. It is only the fully developed modern human sweating capacity that effectively compensates for this, making the development of an efficient sweating mechanism a necessary pre-adaptation for hair loss. Amaral also pointed out that the advantage of bipedal stance in placing the body in the path of wind for evaporative heat loss holds only if the temperature of the wind is lower than the body temperature of the animal. If the ambient air temperature is higher than body temperature, exposure to wind will actually increase the heat load, even more so in a hairless individual (Cabot Briggs 1975). The same problem is inherent in Wheeler’s assertion that bipedalism is more water-efficient in the hot savanna environment if one has lost body hair cover. What Wheeler’s calculations show is that this is true only if the air temperature is lower than body temperature. In the hot savanna environment, where ambient air temperature will commonly exceed the roughly 35°C of the skin surface, the loss of hair provides no advantage over dense body hair. These points do not rule out possible thermoregulatory advantages of bipedalism (although claiming that bipedalism would develop because it was more thermoregulatorily efficient is deterministic, and similar thinking would mean that all other mammals in hot environments should be bipedal as well), but they do make the assertion that bipedalism is necessary for the loss of body hair unreasonable. If bipedalism develops as a consequence of its thermoregulatory advantage in hot climates with steep environmental heat stress, the loss of body hair subsequent to this should not happen, due to the relative disadvantage of hair loss under steep environmental heat stress.
Sexual Selection: The idea that the loss of body hair in the human lineage was a result of sexual selection is a very old one. Darwin first presented the idea in 1871 (though a footnote in the text attributes the idea – or the inspiration – to Reverend T.R. Stebbing) in a text on sexual selection, which Darwin believed worked in concert with natural selection in the evolution of species. Darwin’s original reasoning can be considered faulty in retrospect (“no one supposes that the nakedness of the skin is of any direct advantage to man; his body therefore cannot have been divested of hair through natural selection”), but some of his observations are still considered to hold true. For example, the facial hair of men has no functional or adaptive significance that can be explicated. Sexual selection is a plausible explanation for the differences between male and female facial hair, and possibly for the difference in body hair between males and females. However, invoking sexual selection as the cause of the loss of body hair in humans encounters several hurdles.
The presence of an effective sweating mechanism for evaporative heat loss is greatly dependent on the loss of body hair in humans. If sexual selection is ultimately responsible for the loss of body hair in humans, the sweating mechanism would have to have been selected for by a separate mechanism, even though the two features would have had to develop in concert for sweating to remain effective. This is not in any way impossible, as it might be argued that the sexual selection leading to hair loss would have allowed the selection for an efficient sweating mechanism. In this scenario, sexual selection would not even have to be the main driving force of the loss of body hair; it would at the very least only be required to initiate the reduction in body hair, at which point natural selection for the thermoregulatory efficiency of the sweating mechanism might drive the denudation of the human lineage. This is not unreasonable and therefore is an open option even when discussing the selective benefits of sweating as the driving force in the loss of body hair in humans. However, while it is an option that cannot be negated, the invoking of two selective pressures to explain a single phenomenon when one can explain it just as well makes the explanation less parsimonious, though not necessarily unlikely or unreasonable.
Vestiary Hypothesis: The vestiary hypothesis of human hair reduction maintains that human culture is responsible for the loss of human hair. The theory states that with the advent of increased brain size and increased intelligence, humans began to use fire and make clothing, making the selective maintenance of hair for the retention of heat redundant. Since there is no fossil evidence for hair, supporters say there is no reason to assume that the reduction in human hair happened long in the past. However, this idea ignores the basic properties of thermoregulation, and does not explain why, even if hair was redundant, it would be selected against in humans. This idea also does not address the issue of the eccrine and apocrine distribution of the Great Apes, which mirrors that of modern humans (Hamilton 1973; Kushlan 1985). Many researchers do not seriously accept this theory, as it has not been well developed and ignores many of the physiological features involved in the loss of body hair and the need to explain those features.
Hunting Hypothesis: In the 1960s most models of human evolution focused on the importance and evolutionary significance of hunting. Ardrey (1976) put a name to the idea of hunting as a causal factor in the loss of human hair, and laid out arguments that had been circulating since at least the early 1960s. The line of reasoning for the loss of body hair and the development of an evaporative sweating mechanism generally went as follows. A change in habitats through climatic shifts led to a shrinking ecological niche for the ancestors of humans. The human line left the forest and took to the savanna to exploit new environmental opportunities. This change in ecology led to a change in diet, adding insects, eggs, and small animals to the nuts and fruit that had supported them in the forest environment. This dietary change progressed until they were hunting mammals. Hunting required a bipedal stance in order to free the hands for tools and weapons, and the thermoregulatory stresses of the hunt required the development of the sweating mechanism (which in turn required the loss of body hair to be effective) to prevent overheating in the hot savanna environment. Eventually this sequence of events led to every adaptation of the human lineage, making hunting the defining variable in human evolution (Ardrey 1976; Morris 1967; Brace and Montagu 1965; Montagu 1964).
There are some obvious problems with this argument that make the development of hunting exceedingly unlikely as a primal cause for the development of the modern human thermoregulatory system. First, no matter what the physical evidence for the development of these features, this is a just-so story that was based on speculation first, with evidence forced to fit it second. The idea was developed at a time when the nebulous concept of “the savanna” was not clearly defined. Indeed, the available evidence seems to indicate that the general environment of hominids was marginal forested areas rather than open dry plains (Sikes 1994; WoldeGabriel et al. 1994; Kingston et al. 1994). In addition, the question of hunting versus scavenging was not much of a consideration at that time; it became a hotter issue in the 1980s and has since died down, with most of the available experimental data supporting scavenging as the mode of hominid animal protein procurement until much later than earlier thought (Blumenschine 1995; Dominguez-Rodrigo 1997).
As another “umbrella hypothesis” the hunting hypothesis cannot be completely falsified (making its inclusion as an evolutionary hypothesis particularly unhelpful in a variety of ways), but the preconditions that would favor the loss of body hair and the development of a sweating mechanism (high environmental or metabolic thermal stress relative to an earlier state) can be shown to occur before hunting can reasonably be inferred as a likely possibility. A radiation of hominids circa 2 mya indicates a shift into new environments (and the capacity to survive in these environments), and an even earlier increase in body size would indicate increased metabolic stress at the expense of less efficient heat loss through convection to the air. These events would likely have required thermoregulatory changes much earlier than any reasonable evidence of hunting.
Thermoregulation is an important consideration in human evolution. Humans have evolved an extremely efficient mechanism for evaporative cooling that is not seen in other closely related primate species. The development of this sweating mechanism has associated changes in the glandular system of the human skin and in the length and thickness of body hair. These changes make sense as a coevolved system of thermoregulation that reduces heat stress on a large-bodied mammal in a hot environment. Any evolutionary hypothesis that attempts to make sense of the thermoregulatory mechanisms discussed above must deal with each of these traits and their distribution in other anthropoids besides humans. A theory that explains the development of a trait present in both humans and the African apes only in the context of unique human evolution and adaptation lacks explanatory power.
Allen, J.A. 1877. “The influence of physical conditions on the genesis of the species.” In Rad. Rev., vol. 1, pp. 108-140.
Amaral, L.Q. 1996. “Loss of body hair, bipedality and thermoregulation. Comments on recent papers in the Journal of Human Evolution.” In Journal of Human Evolution, vol. 30, no. 4, pp. 357-366.
Ardrey, R. 1976. The Hunting Hypothesis. New York: Bantam Books.
Bass, D.E., C.R. Kleeman, and M. Quinn. 1955. “Mechanisms of acclimatization to heat in man.” In Medicine (Baltimore), vol. 34, pp. 323-80.
Bergmann, C. 1847. “Über die Verhältnisse der Wärmeökonomie der Thiere zu ihrer Grösse.” In Göttinger Studien, vol. 1, pp. 393-708.
Blumenschine, R. 1995. “Percussion marks, tooth marks, and experimental determinations of the timing of hominid and carnivore access to long bones at FLK Zinjanthropus, Olduvai Gorge, Tanzania.” In Journal of Human Evolution, vol. 29, pp. 21-51.
Brace, C.L. and A. Montagu. 1965. Human Evolution. New York: Macmillan.
Cabot Briggs, L. 1975. “Environment and human adaptation in the Sahara.” In Physiological Anthropology, ed. by A. Damon, pp. 93-129. New York: Oxford University Press.
Chatterjee, C.C. 1979. “Human physiology.” In Med. Allied Agency, vol. 2, pp. 2-3.
Darwin, C. 1871. The Descent of Man and Selection in Relation to Sex. New York: Modern Library.
Denton, D. 1982. The Hunger for Salt: An Anthropological, Physiological, and Medical Analysis. Berlin: Springer-Verlag.
Dominguez-Rodrigo, M. 1997. “Meat-eating by early hominids at the FLK 22 Zinjanthropus site, Olduvai Gorge (Tanzania): An experimental approach using cut-mark data.” In Journal of Human Evolution, vol. 33, no. 6, pp. 669-690.
Ebling, J. 1985. “The mythological evolution of nudity.” In Journal of Human Evolution, vol. 14, pp. 33-41.
Gray, H. 1974. Gray’s Anatomy: Anatomy, Descriptive and Surgical. Philadelphia: Running Press.
Hamilton, W.J., III. 1973. Life’s Color Code. New York: McGraw-Hill Book Company.
Hanna, J., and D. Brown. 1983. “Human heat tolerance: An anthropological perspective.” In Annual Review of Anthropology, vol. 12, pp. 259-284.
Hanson, G., and W. Montagna. 1962. “The skin of primates. XII. The skin of the owl monkey (Aotus trivirgatus).” In American Journal of Physical Anthropology, vol. 20, pp. 421-430.
Hardy, A. 1960. “Was man more aquatic in the past?” In New Scientist, March 17, pp. 642-645.
Holbrook, K.A., and S.I. Minami. 1991. “Hair follicle embryogenesis in the human: Characterization of events in vivo and in vitro.” In Annals of the New York Academy of Sciences, vol. 642, pp. 167-196.
Holliday, T.W. 1995. “Ecogeographical patterning in body form: Neontological perspectives.” In Body size and proportion in the Late Pleistocene Western Old World and the origins of modern humans, Ph.D. dissertation chapter, pp. 44-66.
Kingston, J.D., B.D. Marino, and A. Hill. 1994. “Isotopic evidence for Neogene hominid paleoenvironments in the Kenya Rift Valley.” In Science, vol. 264, pp. 955-959.
Kligman, A.M. 1988. “The comparative histopathology of male-pattern baldness and senescent balding.” In Clinical Dermatology, vol. 6, no. 4, pp. 108-118.
Kushlan, J.A. 1985. “The vestiary hypothesis of human hair reduction.” In Journal of Human Evolution, vol. 14, pp. 29-32.
Langdon, J.H. 1997. “Umbrella hypotheses and parsimony in human evolution: a critique of the Aquatic Ape Hypothesis.” In Journal of Human Evolution, vol. 33, no. 4, pp. 479-494.
Lind, A.R., and D.E. Bass. 1963. “Optimal exposure time for development of acclimatization to heat.” In Fed. Proc., vol. 22, pp. 7.
Mahoney, S.A. 1980. “Cost of locomotion and heat balance during rest and running from 0 to 55°C in a patas monkey.” In Journal of Applied Physiology, vol. 49, pp. 789-800.
Montagna, W. 1976. “General review of the anatomy, growth, and development of hair in man.” In Biology and Disease of the Hair, ed. by. K. Toda, pp. xxi-xxxi. Baltimore: University Park Press.
Montagna, W. 1985. “The Evolution of Human Skin (?)” In Journal of Human Evolution, vol. 14, pp. 3-22.
Montagu, A. 1964. “Natural selection and man’s relative hairlessness.” In Journal of the American Medical Association, vol. 187, pp. 356-357.
Moretti, G., E. Rampini, and A. Rebora. 1976. “The hair cycle re-evaluated.” In International Journal of Dermatology, vol. 15, no. 4, pp. 277-285.
Morgan, E. 1972. The Descent of Woman. New York: Bantam.
Morgan, E. 1982. The Aquatic Ape. New York: Stein and Day.
Morgan, E. 1990. The Scars of Evolution. New York: Oxford University Press.
Morris D. 1967. The Naked Ape. New York: Dell Publishing Co.
Napier, J.R., and P.H. Napier. 1985. The Natural History of Primates. Cambridge: The MIT Press.
Perkins, E.M., and H. Machida. 1967. “The skin of primates. XXXIV. The skin of the golden spider monkey (Ateles geoffroyi).” In American Journal of Physical Anthropology, vol. 26, pp. 35-43.
Riesenfeld, A. 1981. “The role of body mass in thermoregulation.” In American Journal of Physical Anthropology, vol. 55, pp. 95-99.
Robertshaw, D. 1985. “Sweat and heat exchange in man and other mammals.” In Journal of Human Evolution, vol. 14, pp. 63-73.
Ruff, C.B. 1991. “Climate and body shape in hominid evolution.” In Journal of Human Evolution, vol. 21, pp. 81-105.
Sanyal, D.C., and N.K. Maji. 2001. “Thermoregulation through skin under variable atmospheric and physiological conditions.” In Journal of Theoretical Biology, vol. 208, pp. 451-456.
Schultz, A.H. 1931. “The density of hair in primates.” In Human Biology, vol. 3, pp. 303-321.
Schwartz, G.G. and L.A. Rosenblum. 1981. “Allometry of primate hair density and the evolution of human hairlessness.” In American Journal of Physical Anthropology, vol. 55, pp. 9-12.
Senay, L.C., D. Mitchell, and C.H. Wyndham. 1976. “Acclimatization in a hot, humid environment: body fluid adjustments.” In Journal of Applied Physiology, vol. 40, no. 5, pp. 786-96.
Sikes, N.E. 1994. “Early hominid habitat preferences in East Africa: Paleosol carbon isotopic evidence.” In Journal of Human Evolution, vol. 27, pp. 25-45.
Taylor, C.R. 1977. “Exercise and environmental heat loads: different mechanisms for solving different problems?” In (D. Robertshaw, Ed.) Environmental Physiology II. (Int. Rev. Physiol. vol. 15), pp. 119-146.
Van Horn, R. 1970. “Vibrissae structure in the Rhesus monkey.” In Folia Primatologica, vol. 13, pp. 241-285.
Verhaegen, M.J.B. 1985. “The aquatic ape theory: evidence and a possible scenario.” In Med. Hypotheses, vol. 16, pp. 17-32.
Wheeler, P.E. 1984. “The evolution of bipedality and loss of functional body hair in hominids.” In Journal of Human Evolution, vol. 13, pp. 91-98.
Wheeler, P.E. 1985. “The loss of functional body hair in man: the influence of thermal environment, body form and bipedality.” In Journal of Human Evolution, vol. 14, pp. 23-28.
Wheeler, P.E. 1991. “The thermoregulatory advantages of hominid bipedalism in open equatorial environments: the contribution of increased convective heat loss and cutaneous evaporative cooling.” In Journal of Human Evolution, vol. 21, pp. 107-115.
Wheeler, P.E. 1991b. “The influence of bipedalism on the energy and water budgets of early hominids.” In Journal of Human Evolution, vol. 21, pp. 117-136.
Wheeler, P.E. 1992. “The influence of the loss of functional body hair on the water budgets of early hominids.” In Journal of Human Evolution, vol. 23, pp. 379-388.
Wheeler, P.E. 1994. “The thermoregulatory advantages of heat storage and shade seeking behavior to hominids foraging in equatorial savanna environments.” In Journal of Human Evolution, vol. 24, pp. 13-28.
WoldeGabriel, G., T.D. White, G. Suwa, P. Renne, J. de Heinzelin, W.K. Hart, and G. Helken. 1994. “Ecological and temporal placement of early Pliocene hominids at Aramis, Ethiopia.” In Nature, vol. 371, pp. 330-333. | http://www.modernhumanorigins.com/anth501.html | 13 |
The domesticated cow is the latest farm animal to have its genome sequenced and deciphered. The members of the Bovine Genome Consortium have published a series of papers on the assembly and what the sequence reveals so far about the biology of this ruminant and the consequences of its domestication.
Cattle belong to an ancient group of mammals, the Cetartiodactyla, that first appeared around 60 million years ago. Domesticated cattle (Bos taurus and Bos taurus indicus) diverged from a common ancestor 250,000 years ago, and have had a long and rich association with human civilization since Neolithic times 8,000–10,000 years ago. All modern cattle breeds originate from large populations of the ancestral aurochs (Bos taurus primigenius; Figure 1) through thousands of years of domestication. During this time, more than 800 cattle breeds have been established, representing an important resource for understanding the genetics of complex traits in ruminants. More than a billion cattle are raised annually worldwide for beef and dairy products, as well as for hides. Cattle therefore represent significant scientific opportunities, as well as an important economic resource.
Figure 1. A picture of the ancestral aurochs (Bos taurus primigenius) taken from Brehms Tierleben (picture from Wikipedia).
Sequencing of the cattle genome began in December 2003, led by Richard Gibbs and George Weinstock at the Baylor College of Medicine's genome sequencing center in Houston, Texas, USA. The first draft sequence of the bovine genome was based on DNA taken from L1 Dominette 01449 (Figure 2), a dam of the Hereford breed, which is used in beef production. In parallel, a large number of single-nucleotide polymorphisms (SNPs) have also been generated from the partial sequence of six breeds (Holstein, Angus, Jersey, Limousin, Norwegian Red and Brahman). Taken together with the sequence of L1 Dominette 01449 (the reference bovine genome), these represent a valuable resource for marker-assisted selection of genetic traits in commercial breeding programs.
Figure 2. This Hereford cow, known as L1 Dominette 01449, provided scientists with the first genome sequence for cattle.
The Bovine Genome Project represents a complex collaborative effort between multiple groups and funding from the United States, Canada, France, United Kingdom, New Zealand and Australia.
Undoubtedly the current bovine genome sequence will be improved in both its sequence coverage and its annotation, but this draft sequence will form the basis for cattle genetics and genomics for the next 20 years or more.
So what have we learned?
The genome assembly problem – still not solved?
The technology for generating raw sequence data has advanced rapidly over the past 35 years, starting with Sanger sequencing in the 1970s, automated fluorescent Sanger sequencing in the 1980s and, recently, ultra-high-throughput methods based on the parallel sequencing platforms produced by 454, Illumina, and ABI. However, the scale of these advances has not been matched by new algorithms and tools for sequence assembly, particularly for large genomes. Common problems associated with large genomes have been repetitive sequences (generally around 50% of a vertebrate genome), gene families and genetic polymorphisms, all of which can cause errors in assembly. Genome assembly is still a problem, requiring a combination of parallel computing and hard work from teams of manual annotators, and there is a need for a step change in the algorithms and approaches used to assemble a sequence. The bovine genome is the latest in a series of large-scale sequencing projects based on the conventional automated Sanger methods. It illustrates many of these problems and provides some solutions [2,3].
There are two bovine genome assemblies: BCM4 from Baylor College and UMD2 from the University of Maryland. Both assemblies are based on the sequence data generated by the Baylor genome sequencing center. How do they compare? Which is the more accurate?
BCM4 is the latest assembly from a series – BCM1 (2004), BCM2 (2005), and BCM3.1 (2006) – which claims to be more accurate, with greater coverage and fewer misassemblies than before. The earlier inaccuracies were due to the assemblies having been largely based on whole-genome shotgun (WGS) data alone: because of the sizes of the fragments generated in WGS sequencing, this is highly prone to errors caused by the repeated sequences that pose a significant problem in genome assemblies. BCM4 by contrast was assembled by combining WGS reads (about 30 million reads) with the reads and fingerprinted contig (FPC) maps of large genomic inserts cloned into bacterial artificial chromosomes (BACs). The large inserts allow the smaller fragments to be correctly assembled with fewer mistakes due to repetitive sequences. The WGS reads ensure coverage of the whole genome. In addition, through the development of a new assembler (Atlas), the Baylor team was able to integrate these sequences with other data, from FPC BAC maps, genetic maps and chromosome assignments. The sequence data themselves were based on a sire and daughter, mostly on the daughter's DNA. Therefore, the coverage of the sex chromosomes X and Y is not as good as that of the autosomes, especially in the case of the Y chromosome, of which only a small amount of DNA was available from the single Y chromosome of the sire, whereas the two animals together provided three X chromosomes (and of course four of each of the autosomes) [2,3].
For BCM4, more than 90% of sequences have been assigned to a specific chromosome and the total assembled sequence is 2.54 gigabase pairs (Gbp). On the basis of overlaps with 1.04 million expressed sequence tags (ESTs), the gene coverage is estimated at 95%. Comparisons with 73 fully sequenced BAC clones showed few misassemblies and more than 92% coverage. Finally, 99.2% of 17,482 SNPs have been mapped correctly onto the BCM4 assembly. The sequence of the bovine MHC (BoLA) provides a critical test of accuracy, as it contains many polymorphic gene families densely clustered on chromosome 23, and automated genome assembly software is prone to errors of deletion and duplication in such regions. The paper by Brinkmeyer-Langford et al. shows extremely good agreement between the radiation hybrid (RH) map derived by mapping DNA markers from this region on RH panels and the BCM4 sequence assembly.
The University of Maryland's assembly, UMD2, is based on the same raw data as BCM4 and integrates a wider range of external data to improve and validate the final sequence assembly . In particular, it uses comparison between the cattle and human genome sequences to orientate or place cattle contigs when the data from the cattle genome alone cannot. It has therefore been able to assemble more sequence (2.86 Gbp, with 91% of sequences assigned to a specific chromosome and some of the Y), with fewer gaps (for example UMD2 assigned 136 Mb to the bovine X chromosome and BCM4 only 83 Mb), fewer misassemblies and with SNP errors corrected (BCM4 may have threefold more errors than UMD2).
Accuracy was also improved in the UMD2 assembly by using paired-end reads to check regions containing segmental duplications, gene families and gene polymorphisms, where assembly is particularly error-prone. In a paired-end read, about 500 bp are sequenced at each end of a large BAC insert to place the insert on the genome map. If the length of the BAC insert fails to correspond to the distance between the sequences matching the two ends of the insert on the genome assembly, then a duplication or a deletion must have been introduced in the assembly. As a result of this analysis, the UMD2 group report only 662 segmental duplications compared with 3,098 for BCM4. Duplications can be due to copy-number variation, a focus of much current interest because of its association, in different cases, with genetic disease and with disease resistance. However, quantification of WGS reads in these regions did not suggest any over-representation that might indicate increased copy number. WGS reads should be over- or under-represented in the corresponding BCM4 sequences where the two assemblies disagree, and this should clearly be checked.
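The paired-end consistency check described above is simple enough to sketch in code. The following Python fragment is only an illustration of the idea; the function, the expected insert size and the 20% tolerance are assumptions made for the example, not part of either assembly pipeline.

```python
# Sketch of a clone-end (paired-end) consistency check.  The thresholds and
# names are illustrative assumptions, not code from the UMD2 or BCM4 pipelines.

def check_clone(left_pos, right_pos, insert_length, tolerance=0.2):
    """Compare the mapped distance between a clone's two end reads with its
    known insert length.  A span much longer than the insert implies extra
    sequence (an artificial duplication) in the assembly; a span much shorter
    implies missing sequence (an artificial deletion or collapse)."""
    span = abs(right_pos - left_pos)
    if span > insert_length * (1 + tolerance):
        return "possible duplication introduced in assembly"
    if span < insert_length * (1 - tolerance):
        return "possible deletion/collapse in assembly"
    return "consistent"

# A BAC with a 150 kb insert whose end reads map 310 kb apart is suspect:
print(check_clone(1_000_000, 1_310_000, insert_length=150_000))
```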
The use by the UMD2 assembly of comparative maps between cattle and human allowed more sequence to be assembled, but somewhat undermines conclusions based on human-bovine sequence comparisons. The data can, however, now be used to highlight potential problem areas or predict specific arrangements and guide more sequencing to generate bovine data to confirm these predictions. These studies will presumably go ahead in the coming months at Maryland, Baylor and elsewhere.
What these assemblies also illustrate is the benefit of and need for community support for the final success of a genome project. The cattle community provided DNA samples of breeds, chromosome assignments of specific contigs, genetic linkage maps, BAC and FPC BAC maps, EST libraries for gene prediction and genome annotations for gene and protein predictions. However, the integration of datasets from multiple sources posed a substantial challenge for the bioinformaticians at Baylor College and Maryland in the absence of the genome sequence as a reference point.
Finally, we should ask what we can expect in the future. The availability of ultra-high-throughput sequence technologies will provide more raw sequence data, which could be used to fill in gaps, for example in regions not cloned in the current assembly. The extra reads would also increase the quality and number of SNPs detected by comparing several breeds, and increase the accuracy of sequence divergence and diversity estimates by providing some assurance that apparent SNPs are really SNPs and not sequencing errors.
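One simple way in which extra read depth helps separate real SNPs from sequencing errors is to require that the alternative allele be seen in several independent reads and in a reasonable fraction of the total coverage. A minimal sketch follows; the thresholds are arbitrary illustrative choices, not the criteria actually used by the consortium.

```python
# Illustrative SNP-versus-sequencing-error filter.  The thresholds below are
# arbitrary choices for the sketch, not the project's published criteria.

def looks_like_real_snp(ref_count, alt_count, min_alt_reads=3, min_alt_fraction=0.2):
    """Return True if the alternative allele is supported by enough independent
    reads, and a large enough fraction of the coverage, to be unlikely to be a
    random sequencing error."""
    depth = ref_count + alt_count
    if depth == 0 or alt_count < min_alt_reads:
        return False
    return alt_count / depth >= min_alt_fraction

print(looks_like_real_snp(ref_count=18, alt_count=1))   # False: likely an error
print(looks_like_real_snp(ref_count=12, alt_count=9))   # True: plausible heterozygote
```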
The availability of a cattle genome sequence with more than 95% coverage is an excellent resource for comparative and evolutionary biologists. In addition, physiologists and biochemists will be interested in the unique biology of ruminants specialized for converting low-grade forage into energy-rich fat, milk and muscle.
Elsik and colleagues have led the way to annotate the genome, to give it meaning in terms of genomic structure, genes and proteins. This was achieved using a combination of automated pipelines and 4,000 manual annotations, which were made as part of a 'Bovine Annotation Jamboree' as well as by dedicated teams of annotators. Analysis predicted 26,835 genes, of which 82% were validated from external data sources. This suggests that the bovine genome encodes at least 22,000 genes, which is broadly in line with gene counts in all other mammals. In addition, 496 microRNAs were detected, including 135 novel sequences.
Multiple species comparisons between the cow and other mammals define a core set of 14,345 orthologous genes, 1,217 of which are specific to placental mammals and missing in marsupials and monotremes. Comparative mapping with other mammalian genomes defines 124 evolutionary breakpoints, mostly associated with repetitive sequences and segmental duplications. Interestingly, genes associated with lactation and immune responses are also associated with these breakpoints. Does this suggest a selective advantage or simply a mechanism for expanding these gene families?
Comparisons between human and bovine coding regions aimed at identifying genes under strong selection define 2,210 genes with elevated dN/dS ratios (a measure of selective constraint on proteins). Seventy-one genes have dN/dS >1, and among these, not surprisingly, genes with roles in reproduction, lactation and fat metabolism are over-represented [1,5,6]. More surprisingly, they include genes encoding proteins of the immune system. These are the genes that distinguish the ruminants from other mammals, and may reflect special needs of ruminants, which retain the low-grade food they ingest, along with any associated pathogens, for up to a day in the rumen before releasing it into the intestines from which infectious organisms are readily expelled.
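As a reminder of what an elevated dN/dS ratio means: dN is the number of nonsynonymous substitutions per nonsynonymous site and dS the number of synonymous substitutions per synonymous site; a ratio near 1 indicates relaxed constraint, and a ratio above 1 is the classical signature of positive selection. A toy calculation follows, with counts invented purely for illustration.

```python
# Toy dN/dS calculation.  The substitution and site counts are invented to
# illustrate the ratio; they are not data from the cattle analysis.

def dn_ds(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    dN = nonsyn_subs / nonsyn_sites   # amino-acid-changing substitutions per nonsynonymous site
    dS = syn_subs / syn_sites         # silent substitutions per synonymous site
    return dN / dS

# Proportionally more amino-acid-changing than silent changes gives a ratio
# above 1, suggesting positive selection on the protein.
print(round(dn_ds(nonsyn_subs=30, nonsyn_sites=900, syn_subs=8, syn_sites=300), 2))  # 1.25
```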
One of the novel features of the Bovine Genome Project has been to use the sequence to examine the evolution and process of domestication of cattle. The aims of these studies were to uncover more about phylogenetic relationships amongst the Bovinae and the importance of natural and artificial selection, and to identify genes or genomic regions that have been critical in the domestication process – the so called 'signatures of selection'.
The divergence of the Bovinae (antelope, buffalo and cattle; Figure 3) over a relatively short period makes it difficult to determine a robust phylogeny for this group. MacEachern et al. have exploited cattle genomic sequences to design primers to amplify across a wide range of species, 16 in total. Sequence comparison of 30,000 sites from all species identifies 1,800 variable sites. However, 111 sites are ambiguous in all trees because of apparent multiple substitutions whose ancestry cannot readily be traced. Fifty-three of these ambiguous, or aberrant, sites are segregating within the Bovina (cattle, bison and yak) and Bubalina (Asian and African buffaloes) lineages, which diverged from their common ancestor 5–8 million years ago (Mya). Further investigation has suggested that these are ancient polymorphisms, because they are associated with very small haplotypes. The other possible explanation for aberrant sites is hybridization between species, but this would be characterized by more extensive haplotypes, reflecting exchanges during meiotic recombination. This in turn would suggest that ancestral populations were very large, probably with effective breeding sizes of 90,000 or more, because large numbers of aberrant sites would not be expected to survive in a small population (this is consistent with the extremely abundant fossil record). The distribution of these ancient polymorphisms into species-specific lineages would then be a matter of chance. The other aberrant sites probably arose independently in the ancestors of the Bovina, 2–3 Mya, again from large breeding populations. These findings are novel and show that genetic polymorphisms present 2–8 Mya are still segregating in many present-day lineages.
Figure 3. Bovinae have diverged into (a) cattle (b) antelope and (c) buffalo over a relatively short time period. (a) shows a domesticated cow (Bos taurus) (photograph by Daniel Schwen, Wikipedia), (b) is the Common Eland (Taurotragus oryx) (Ablestock) and (c) is a Cape Buffalo (Syncerus caffer) (Ablestock).
The large number of aberrant sites in the Bovinae probably explains how the yak came to be reported, erroneously, as a close phylogenetic relative of cattle: many of these sites are shared by the two species. However, when only unambiguous sites are examined, the resulting phylogeny has three main groups: domestic cattle, bison/yak and banteng (Figure 4). The phylogeny is star-like, suggesting rapid evolution in a relatively short time of 1–3 million years, a period too short for reliable identification of points of divergence.
Figure 4. A phylogeny using unambiguous sites in the Bovinae results in three main groups: domestic cattle; bison (a) and its sister group, the yak (b); and banteng (c). (a) shows North American Bison (Bison bison), (b) Yak (Bos grunniens) and (c) Banteng (Bos javanicus). All photographs are from Ablestock.
Genome biology and domestication
From the analysis of ancestral mutations, it appears that domesticated cattle populations are able to maintain a high load of unfavorable mutations. This is probably a consequence of the domestication process itself. The selection of specific cattle breeds has been through many small populations, and thus bottlenecks, which may favor the chance survival of unfavorable alleles. Survival of potentially deleterious alleles will of course be further favored by strong artificial selection: for example, the double-muscling genes favored for beef production would almost certainly be lost in the wild through natural selection.
Like other genome projects, the cattle project also has a parallel SNP discovery pipeline. The reference Hereford genome has been compared with six other breeds, with the identification of 37,470 SNPs polymorphic in all breeds. An immediate practical outcome of this SNP project is the definition of a set of 50 SNPs that could be used for unique parentage assignment and proof of identity.
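To see why a panel of only 50 well-chosen SNPs is enough for proof of identity, consider the probability that two unrelated animals share the same genotype at every marker. The back-of-the-envelope calculation below assumes 50 independent biallelic SNPs with both alleles at frequency 0.5; both assumptions are idealizations, not properties of the actual published panel.

```python
# Probability-of-identity estimate for a 50-SNP panel.  Assumes independent
# biallelic SNPs with allele frequencies of 0.5 (an idealization).

p = q = 0.5
genotype_freqs = [p * p, 2 * p * q, q * q]               # AA, AB, BB under Hardy-Weinberg
match_prob_one_snp = sum(f * f for f in genotype_freqs)  # chance two animals match at one SNP
match_prob_panel = match_prob_one_snp ** 50              # chance they match at all 50

print(match_prob_one_snp)          # 0.375
print(f"{match_prob_panel:.1e}")   # roughly 5e-22: effectively unique identification
```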
Recently (in the last 10,000 years), population sizes have fallen sharply to small numbers, with many bottlenecks due to domestication and artificial selection for milk and beef. The decline in diversity seen in some breeds is a matter for concern. But even in these contracted populations, the pattern of linkage disequilibrium suggests that cattle started from a very large base 1–2 Mya with ancestral populations of 90,000 or more.
Various measures of genomic selection (iHS, FST and CLR) have been used to map regions of selective sweep on chromosomes 2, 6 and 14. A selective sweep is the loss of variation in the genes on either side of a selected gene, which are dragged along with it by virtue of their linkage to the selected gene. These regions in the bovine genome are, not surprisingly, associated with genes with a function in muscling (MSTN), milk yield and composition (ABCG2) and energy homeostasis (R3HDM1, LCT). The evidence of selection in these regions correlates with genes associated with efficiency of food utilization, immunity and behavior. It is possible that under domestication, mutations at these genes have been selected to produce animals more able to resist the infectious diseases prevalent in herds and showing the docile behavior suited to human husbandry.
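Of the statistics mentioned, FST is the easiest to illustrate: it compares allele-frequency differentiation between populations (here, breeds) with the total variation, so a variant driven to high frequency in one breed by selection stands out. A minimal two-population sketch, using Wright's simple formulation and invented allele frequencies:

```python
# Minimal two-population FST at a single biallelic locus, using the simple
# formulation FST = (HT - HS) / HT.  The allele frequencies are invented.

def fst_two_pops(p1, p2):
    p_bar = (p1 + p2) / 2
    ht = 2 * p_bar * (1 - p_bar)                      # expected heterozygosity, pooled
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-breed heterozygosity
    return (ht - hs) / ht

print(round(fst_two_pops(0.50, 0.55), 3))  # 0.003: essentially undifferentiated (neutral locus)
print(round(fst_two_pops(0.10, 0.90), 3))  # 0.64: strong differentiation, a sweep candidate
```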
DWB is supported by the Biotechnology and Biological Sciences Research Council and the University of Edinburgh.
Zimin AV, Delcher AL, Florea L, Kelley DA, Schatz MC, Puiu D, Hanrahan F, Pertea G, Van Tassell CP, Sonstegard TS, Marçais G, Roberts M, Subramanian P, Yorke JA, Salzberg SL: A whole-genome assembly of the domestic cow, Bos taurus. BMC Systems Biol 2009, 3:33.
MacEachern S, Hayes B, John McEwan J, Goddard M: An examination of positive selection and changing effective population size in Angus and Holstein cattle populations (Bos taurus) using a high density SNP genotyping platform and the contribution of ancient polymorphism to genomic diversity in domestic cattle.
MacEachern S, McEwan J, McCulloch A, Mather A, Savin K, Goddard M: Molecular evolution of the Bovini tribe (Bovidae, Bovinae): is there evidence of rapid evolution or reduced selective constraint in domestic cattle?
Computers are devices which automatically perform calculations. One type of computer is the electronic stored-program digital computer, and this is what people commonly think of when they think of a computer.
If a computer is not electronic, it might use electrical relays to accomplish the same switching operations as the logic gates which are built from transistors in an electronic computer. While this makes a large difference in terms of bulk, cost, and speed, conceptually it makes little difference to understanding the computer as an abstract device to process information.
In the early days of computers, there was no fully satisfactory way to store reasonably large quantities of information for access at speeds comparable to those of the computer. Thus, a possible configuration for a very early computer might include an internal memory of about thirty-two registers, each able to contain a number, and each available for random access with direct addressing, and built from the same types of component as used for the logic gates of the computer, along with some sort of sequential-access information source for programs and larger quantities of data. That information source might still be considered memory, such as a magnetic drum memory or a delay line, or it might be an input device, such as a paper tape reader or a punched-card reader. Or a computer might even have been programmed using patch cords. Depending on various design details, such an early computer might have had a greater or lesser departure from the way today's computers operate, but many of the same principles would still apply.
It is when one goes from digital computers to analog computers that one gets a completely different type of machine; one that can find approximate solutions to a number of mathematical problems, including ones that were very difficult for digital computers to deal with. But such computers could not handle multi-precision arithmetic, or work with text, or otherwise act as fully general-purpose computing devices.
These pages present a design for an electronic stored-program digital computer. It is a large and complex design, perhaps more so than any computer that has actually been implemented. This is true despite the fact that the design starts from a relatively simple architecture which serves as its basis.
This is because the simple architecture is then extended in a number of ways.
Supplementary instructions are added for the purpose of operating on arrays of numbers in a single operation, and for performing complex operations predefined by the user on character strings.
Various modes of operation are provided. Some of these are provided to permit the computer to perform more calculation steps in response to the fetching of a given number of words of program code from memory, others to broaden the available addressing modes and make the available set of instructions more uniform, and others to give access to some extended features of the machine.
Also, flexibility in data formats is provided, and one of the instruction modes in particular (general register mode) is designed to facilitate emulation of other architectures. Given that market forces have limited the number of available architectures, one of my goals in specifying this architecture is to show how a microprocessor could be designed, without sacrificing efficiency, to permit programmers with different architectural preferences to choose a mode of operation accommodating those preferences from those available on a single microprocessor chip.
It is from these features that the complexity of the architecture is derived.
But the basic part of the computer's architecture is simple. Although it has more registers available to the programmer than illustrated below, this diagram shows the most important registers of the computer:
Eight registers are Arithmetic/Index Registers. These are the registers used when the computer performs calculations on binary integers. They are 32 bits long, since that is usually as large as a binary integer used in a calculation needs to be. They are also used optionally in address calculation: an indexed instruction, where the contents of an index register are added to the address, lets the computer refer to elements of an array. Integers are treated as signed two's complement quantities.
Eight registers are Floating-Point Registers. These are used when the computer performs calculations on numbers that might be very small or very large. These registers are 128 bits long. Usually, a 64-bit floating-point number is more than large enough, but while it is possible to make two integer registers work together on a larger number, this is not as easy for floating-point numbers because they are divided into an exponent field and a mantissa field. The term mantissa refers to the fractional part of the logarithm of a number; while its use with floating-point numbers is criticized by some as a misnomer, the mantissa of a floating-point number represents the same information about that number as would be contained in the mantissa of its logarithm: the part that needs to be multiplied by an integer power of the base to produce the actual number.
Eight registers are Base Registers. These registers may be 64 bits long if 64-bit addressing is provided. They are used to help the computer use a large amount of available memory while still having reasonably short instructions. A base register contains the address of the beginning of an area in memory currently used by the program running on the computer. Instructions, instead of containing addresses that are 32 bits or even 64 bits long, contain 16-bit addresses, accompanied by the three-bit specification of a base register.
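A behavioural sketch of the base-plus-displacement calculation just described may make it concrete. The field widths (3-bit register selectors, 16-bit displacement) follow the text; the register contents, and the convention that index register 0 means "no indexing", are assumptions made for the example.

```python
# Base/displacement/index effective-address calculation, as a behavioural
# sketch only.  Register contents are invented; treating index register 0 as
# "no indexing" is an assumed convention, not something specified in the text.

base_regs  = [0x0000_0000, 0x0001_0000, 0x0040_0000, 0, 0, 0, 0, 0]
index_regs = [0, 0x0000_0010, 0, 0, 0, 0, 0, 0]

def effective_address(base, displacement, index=0):
    """base and index are 3-bit register numbers; displacement is a 16-bit field."""
    assert 0 <= base < 8 and 0 <= index < 8 and 0 <= displacement < 2 ** 16
    addr = base_regs[base] + displacement
    if index:
        addr += index_regs[index]
    return addr

# An indexed reference: base register 2, a 16-bit displacement, index register 1.
print(hex(effective_address(base=2, displacement=0x0200, index=1)))  # 0x400210
```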
Since a 32-bit address pointing to a particular byte allows reference to four gigabytes of memory, and many personal computers today have memory capacities on the order of 512 megabytes, it is entirely reasonable to expand the possible address space beyond 32 bits. But expanding it to 64 bits allows a memory four billion times as large as four gigabytes. Is that far larger than could ever be useful?
Avogadro's number is 6.022 times ten to the twenty-third power; it is (approximately) the number of hydrogen atoms that weigh a gram.
Thus, ten to the twenty-fourth power carbon atoms weigh 20 grams, and ten to the twenty-fourth power iron atoms weigh 94 grams. And, since two to the tenth power is 1,024, that means two to the eightieth power carbon atoms weigh about 25 grams.
A virtual address is often intended to refer to a location in a memory much larger than the actual physical random-access memory of a computer, because the additional memory locations can refer to pages swapped out to a magnetic storage device, such as a hard disk drive.
About 120 grams of iron would allow storage of two to the sixty-fourth power bits of information, even if it required 64,000 iron atoms to store a single bit. Thus, two to the sixty-fourth power bytes, or about 16 exabytes, require less than a kilogram of iron to be stored at that efficiency. Thus, storage of that much information doesn't flatly contradict the laws of physics. 128-bit addressing, on the other hand, is likely to be safe from any demand for improvement for a very long time.
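The arithmetic behind these estimates is easy to reproduce. The short calculation below uses 55.85 g/mol for iron and the 64,000-atoms-per-bit figure from the text; the exact gram counts differ slightly from the rounded numbers quoted above.

```python
# Reproducing the back-of-the-envelope estimate: how much iron is needed to
# store 2**64 bits at 64,000 atoms per bit?

AVOGADRO = 6.022e23          # atoms per mole
IRON_MOLAR_MASS = 55.85      # grams per mole
ATOMS_PER_BIT = 64_000       # the deliberately generous efficiency assumed in the text

bits = 2 ** 64
grams = bits * ATOMS_PER_BIT / AVOGADRO * IRON_MOLAR_MASS
print(round(grams))          # 109: roughly the "about 120 grams" quoted above

# 2**64 bytes is eight times as many bits, hence a bit under a kilogram:
print(round(grams * 8))      # 876
print(2 ** 64 / 1e18)        # about 18.4 exabytes (16 exbibytes)
```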
The instruction formats of this architecture owe a great deal of their inspiration to the IBM System/360 computer, although some other details were influenced by the Motorola 68000 microprocessor. The resulting instruction format, dividing a 16-bit halfword into a 7-bit opcode followed by three 3-bit fields indicating registers, is similar to that of the Cray-1 supercomputer; this had no direct role in inspiring the original basic design of this architecture, but the additional vector register capabilities of this architecture are very much inspired by that computer and its successors.
Although the arithmetic/index registers and the base registers may be thought of as corresponding, respectively, to the data and address registers of the Motorola 68000 architecture, there is one important difference: 68000 addressing modes allow one to choose whether to use a data register or an address register for indexing; here, only the arithmetic/index registers may serve as index registers.
The architecture described here can be thought of an attempt to construct the most powerful processing unit possible, by incorporating every feature ever used in computer design in attempts to make larger and more powerful computers.
Since the value of a computer is largely determined by the software it can run, the number of different computer architectures for which an abundant selection of software is available is likely to be limited; thus, offering the full range of computer capabilities within a single architecture allows choice and diversity to be retained. While some models of the IBM System/360 series had some vector arithmetic capabilities (these capabilities, in fact, inspired the external vector coprocessing abilities of this architecture), the VAX 6500 and some other models of VAX are almost unique in offering both decimal arithmetic and Cray-style vector capabilities.
The complexity of the full design described here allows a wide range of aspects of computer architecture to be illustrated. As well, because not all problems can be parallelized fully enough to be split up between different processors, taking advantage of the continuing improvement in computer technology first by building the fastest possible single processor and only subsequently by using as many processors as one can afford does make sense. Thus, the design illustrated here, although it includes some features which involve parallelism, is not an attempt at resolving the problem that has been termed the von Neumann bottleneck. The classic von Neumann stored-program computer architecture involves one central processor that does arithmetic, making use of information stored in a large memory. Thus, most of the information in the memory lies idle, while computations take place involving only a small number of items of data at a time.
Neural nets, cellular automata, and associative memories represent types of computing machines where the entire machine is engaged in computation. The trouble with these extreme solutions to the problem is that they are helpful only in certain specialized problem domains. On the other hand, a von Neumann computer with a magnetic drum memory or a magnetostrictive delay line memory is also using all its transistors (or vacuum tubes!), the potential building blocks of logic gates, to perform computation, so perhaps the wastefulness is merely the result of memory cells being comparable to logic gates in current integrated circuit technology.
A less extreme solution that has been suggested is building memories which include a small conventional microprocessor with every 4K words or every 32K words of RAM. One obstacle to this is standardization. Another is whether this would be worth the effort. It is also not clear that simply because one can put sixteen megabytes of RAM on a chip, one can also put 2,048 microprocessors (one for each 4,096 16-bit words), or even 128 microprocessors (one for each 32,768 32-bit words), on that chip as well. Would these small microprocessors have to include hardware floating-point capability to be useful, or would smaller ones that can only do integer arithmetic on single bytes be appropriate?
This page discusses one possible approach to this problem.
The following diagram illustrates more of the registers belonging to the architecture described here:
It does not illustrate all of them; there are banks of registers used for long vector operations which involve eight and sixty-four sets of registers each of which is equal in size to the entire set of supplementary registers shown in the diagram, and a process may also have additional register space allocated to it for use with the bit matrix multiply instruction, to be described later, involving multiple sets of registers similar in size to the short vector registers.
Thus, in addition to the basic three sets of eight registers, there are three other groups of eight registers like the base registers used to allow shorter instructions in some formats, there is a group of sixteen 256-bit registers used to allow vector operations in most operating modes, and there are also banks of sixty four registers for longer vector operations. The availability of these larger banks of registers might be restricted to some processes in a time-sharing environment, or they might be implemented using only cache memory, slower than actual registers, in some less expensive implementations of the architecture. The short vector registers serve an additional purpose as a bit matrix multiply register in some modes.
The architecture described on this site has been modified from time to time. Recently, the ability was added to use the cache memory as a means of accessing memory, which is externally divided into units of 256 bits, in multiples of other word sizes, such as 24, 36, and 40 bits, to allow memory-efficient emulation of older architectures.
A feature to assist in emulation, by providing a method of decoding an instruction in an instruction format described to the computer and submitting the converted instructions for execution, has now been defined, but this definition is in its earliest stages. As well, this feature includes a method of retaining the results of conversion in a tabular format, so that each location in the table corresponds to a location in the memory of the computer being emulated (as opposed to true just-in-time compilation), and of executing instructions from the table in preference to repeated translation, where possible. Although this architecture provides a significant degree of flexibility in its data formats, the attempt has not been made to allow all possible data formats, so a feature for conversion of arithmetic operands is also included, and this feature can be used outside of emulation to assist in converting data for use by a program operating in emulation.
Although this architecture has been designed, in some respects, according to my architectural preferences (i.e., it is big-endian by default), because I have included nearly every possible feature a computer could have, in order to allow this architecture to serve as a springboard for discussing these features, it would seem that, like the DEC Alpha, but even more so, it could not compete economically with simpler architectures for any given level of performance, except, of course, in those occasional applications where one of this architecture's more exotic features, not found elsewhere, proves to be useful. I cannot, therefore, unreservedly advocate it as fit for implementation, although I certainly would like to see some sort of CISC big-endian architecture with pipeline vector instructions become available at prices, and with a complement of software, reflecting the benefits of a mass market. Even that seems unlikely to happen in the real world at this time; this architecture, whose genre is such as to challenge the makers of acronyms (Grotesquely Baroque Instruction Set Computing?) would seem far less likely to see the light of day.
But, if pressed, I might indeed step forwards to claim that it is not truly all that bad; think of it as a computer like the Cray-1, with the addition of 64 processors that can only reference the cache, and which also has the ability to have multiple concurrent processes running that, by using only a 360-sized subset of the architecture, stay out of the way of the real number-crunching, even as they, scurrying about underfoot, make their contribution to overall throughput.
Eliminate the multiplicity of data storage widths, the plethora of floating-point formats, the Extended Translate instruction (even if one real machine, the IBM 7950, better known as HARVEST, offered somewhat similar capabilities to its one customer, the NSA), the ability to select or deselect, as an option, explicit indication of parallelism and the various postfix supplementary bits, and, of course, the second program counter (even if one real machine, the Honeywell 800, and its close relatives, had this feature), and the Simple Floating data type (even if at least one real computer, the Autonetics RECOMP II, implemented its floating-point capabilities in such a manner, although I know of no real computer that offered floating-point arithmetic of this kind in addition to conventional floating-point capabilities), and the result is an architecture quite comparable in complexity with many real architectures; vector register mode looks a lot like a Cray-1, symmetric three-address mode a bit like a Digital Equipment Corporation VAX (the Flexible CISC Mode, of course, resembles that computer rather more closely), and short shift mode a bit more like an IBM System/360 than the native mode does already (General Register Mode has an even closer resemblance to the System/360).
While one might protest that it is meaningless to note that this architecture would cease to be extravagant after one removes its extravagances, what is meaningful is to, by listing them, show that they are not as many in number, and as expensive in cost, as they might be feared to be on the basis of the initial impression they can create.
One extravagance has finally been successfully eliminated from the architecture as previously conceived. Originally, in addition to a basic instruction format more closely modelled on that of the System/360 and the 68000, there were dozens of alternative instruction formats. Most of these were attempts to be more parsimonious in the allocation of opcode space, so that a larger number of instructions could be only one or two words long.
By using a scheme pioneered on the SEL 32 minicomputer of allowing memory-reference instructions to act only on aligned operands, and then making use of the otherwise unused least significant bits of addresses to help distinguish between data types (by distinguishing between sets of types with different lengths of two or more bytes), it was possible, by judicious choice of which less-frequent instructions to allow to be lengthened, to achieve a set of instruction formats in which all the common operations were as short as possible.
A further gain that was achieved was to organize the instruction formats so that the length of an instruction could be determined by considerably less logic than was needed to decode the instructions fully. Once this had been achieved, the benefits of making the resulting mode of operation the only one made it imperative to do so.
Since this was written, alternate modes of operation have made a modest return to the architecture. The alternate modes, however, have been integrated into the length-determination scheme, so that switching to an alternate mode does not significantly increase the complexity of determining the length of an instruction.
The original instruction format from which the current design was derived is shown here:
Inspired by the IBM 360, 16-bit address displacements instead of 12-bit ones are obtained by having eight base registers separate from eight general registers used both as accumulators and as index registers - as well as trimming the opcode field to seven bits from eight. Register to register instructions are indicated by putting zero in the base register field instead of by using a different opcode.
Since the elimination of the alternative addressing modes, I have thought of another way in which more opcode space might have been obtained. Using the System/360 Model 20 as a precedent, but with one important change, the base register field can be shifted to the address word while still allowing 15-bit displacements instead of 12-bit ones, at least part of the time:
Base register 0 points to a 32K byte region of memory, and base register 1 points to an 8K byte region of memory; the other base registers point to 4K byte regions of memory, as on the IBM 360. Thus, a likely convention would be that a program would use base registers 0 and 1 for its primary data and program areas, either respectively or the other way around as suits its need, and the other base registers to range more widely for data.
As the base register field would not have to be in the first halfword of the instruction, this would allow a generous 10-bit opcode field.
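To make the alternative scheme concrete, the sketch below shows how the size of the region addressed would depend on the base register chosen. Only the region sizes (32K, 8K and 4K bytes) come from the description above; deriving them from displacement widths of 15, 13 and 12 bits is an assumption made for the illustration.

```python
# Region sizes under the alternative base-register scheme: base register 0
# covers a 32K byte region, base register 1 an 8K byte region, and base
# registers 2-7 cover 4K byte regions each.  The displacement widths below
# are inferred from those sizes, not specified in the text.

def displacement_bits(base_reg):
    if base_reg == 0:
        return 15
    if base_reg == 1:
        return 13
    return 12

def region_size_bytes(base_reg):
    return 1 << displacement_bits(base_reg)

for b in range(8):
    print(f"base register {b}: {region_size_bytes(b) // 1024}K byte region")
```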
As the number of gates that can be placed on a chip increases, there are a number of ways in which they can be used to increase performance. The simplest is to make use of data formats that are as large as are used within applications, rather than smaller ones due to space limitations; this has already been achieved, as we have advanced from the days when the only microprocessors available performed integer arithmetic only, and only on eight-bit bytes of data. The next is to switch from using serial arithmetic to parallel arithmetic, and then to use more advanced types of circuitry for multiplication and division to achieve higher speeds and the possibility of pipelining, and this too has been achieved. As the number of available gates increases further, the other options that are available include adding more cache memory to a chip, putting more than one CPU on a single die, so that two or more processors can work in parallel, and, as illustrated here, making the architecture more elaborate by adding features, so that a greater number of operations can be performed by a single instruction instead of a short program. Another option, which also addresses the Von Neumann bottleneck, is to put dynamic RAM, which has a considerably higher bit density than the static RAM used for a cache, on the chip. This allows an arbitrarily wide data path from memory to the processor, and, as it happens, this would work well with the architecture outlined here, which could benefit from a 4,096-bit wide path to main memory from the cache, instead of the more conventional 256-bit wide path envisaged as practical.
Chemical Symbol: A notation using one to three letters to represent an element.
Chemical Formula: The notation using symbols and numerals to represent the composition of substances.
Molecule: A neutral group of atoms held together by covalent bonds.
Binary Compound: A compound composed of only two elements.
Molecular Formula: A formula indicating the actual number of atoms of each element making up a molecule.
Empirical Formula: The formula giving the simplest ratio between the atoms of the elements present in the compound.
Formula Unit: The amount of a substance represented by its formula.
Chemical Bond: is a strong attractive force between atoms or ions in a compound.
Cation: Atom loses one (or more) electrons. Cations are positive in charge. (Metals = Cations)
Anion: Atom gains one (or more) electrons. Anions are negative in charge. (Nonmetals = Anions)
Octet Rule: Atoms tend to form bonds so that each atom has 8 ( an octet of ) electrons in its valence orbitals.
Electronegativity: The relative attraction of an atom for a shared pair of electrons.
Ionic Bond: The electrostatic attraction between ions of opposite charge.
Nonpolar Covalent Bond: A bond characterized by the equal sharing of a pair of electrons between atoms.
Polar Covalent Bond: A bond characterized by the unequal sharing of a pair of electrons between atoms.
The list of formulas (above) gives examples of elements that exist in nature as ___________________ molecules.
RULES FOR WRITING FORMULAS:
The first two rules for writing formulas are:
Rule 1: Represent each kind of element in a compound with the correct symbol for that element.
Rule 2: Use subscripts to indicate the number of atoms of each element in the compound. If there is only one atom of a particular element, no subscript is used.
Applying these rules for a molecule of one atom of oxygen, O, and two atoms of hydrogen, H, the formula could be written OH2. To avoid confusion, the symbols are written in a particular order.
Rule 3: Write the symbol for the MORE metallic element first.
Neither hydrogen nor oxygen is a metal. However, the location of hydrogen with the metallic elements in the periodic table suggests that it should appear before oxygen in the formula.
Similarly, for a compound containing oxygen and sulfur, the location of sulfur below oxygen in the periodic table suggests that it is more metallic than oxygen and should be written first in the formula.
For example, SO2 is a compound formed when sulfur burns.
S - Sulfur is the more metallic element, and is written first. No subscript is used, since there is only one atom in the molecule.
O2 - Oxygen is less metallic than sulfur; its symbol is written last.
Using these three rules, you can write the correct formula for almost any compound if you know the elements it contains and the number of atoms of each element in one molecule (or formula unit) of that compound. However, the only way chemists can get that information is by experimental analysis.
Formulas are a convenient way to represent compounds, but compounds have names as well. There are Millions of compounds, so it is IMPOSSIBLE to memorize all their names (or Function).
Learning the rules for naming compounds will help you figure out their names.
Unfortunately many common compounds were named before it became obvious that a systematic method would be needed. H2O is called water by everyone (including chemists), even though its systematic name is dihydrogen monoxide.
NAMING BINARY MOLECULAR COMPOUNDS: ( NON-METAL with NON-METAL )
The first system is used to name binary compounds that exist as ___________________ molecules rather than as ionic compounds.
Binary compounds are compounds made up of ___________________ elements.
There is ___________________ to look at the formula of a compound and know whether it is molecular or ionic. However, binary compounds containing two nonmetals are ALWAYS molecular, whereas binary compounds containing a metal and a nonmetal are usually ionic.
If you are not sure whether a compound is ionic or molecular, look to see if it contains a ___________________ . If it does not, use the following system to name it.
The name tells what elements are in the molecule and includes Greek prefixes to indicate the number of atoms of each element present. The system involves these steps:
1. Name the elements in the same order that they appear in the formula.
2. Drop the last syllable (two syllables in some cases) in the name of the final element and add -ide.
3. Add ___________________ to the name of each element to indicate the number of atoms of that element in the molecule.
* * * In practice, the mono- prefix is omitted for the first element in the name.
The steps below describe how CO2 is named. Notice how each step in the naming process reduces ambiguity about what compound is being named.
The first step clarifies what elements are involved, but the resulting name does not make clear that the elements are in a compound.
Changing the ending of the second element, in step two, signals that you are talking about a ___________________ rather than isolated elements.
Step three adds prefixes to indicate the number of atoms of each element present.
Names of elements are written in the order they appear in the formula.
Drop the last syllable (or last two) in the second element and add -ide.
the "-ygen" in oxygen is dropped and "-ide" is substituted.
Add the correct prefix to each element.
Mono- is added to carbon and di- is added to oxide to indicate the number of carbon and oxygen atoms in the molecule
Mono- is dropped from carbon since we DO NOT place the prefix mono- on the first element's name. All other prefixes (di-, tri-, etc.) ARE kept on the first element in the compound name.
Example: What is the name of the compound N2O4?
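The prefix rules can be captured in a short program, which also answers the example above. The sketch below handles only simple binary formulas such as CO2 or N2O4, and its element and "-ide" tables are a small hand-made sample, not a complete list.

```python
# A small illustration of the prefix-naming rules for binary molecular
# compounds.  The element and "-ide" tables are a hand-made sample only.

import re

PREFIX = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
          6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}
NAME = {"C": "carbon", "N": "nitrogen", "S": "sulfur", "P": "phosphorus",
        "Cl": "chlorine", "I": "iodine", "Si": "silicon", "As": "arsenic"}
IDE = {"O": "oxide", "Cl": "chloride", "F": "fluoride", "N": "nitride", "S": "sulfide"}

def name_binary(formula):
    """Name a binary molecular compound written as, for example, 'CO2' or 'N2O4'."""
    (sym1, n1), (sym2, n2) = [(s, int(n) if n else 1)
                              for s, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula)]
    first = NAME[sym1] if n1 == 1 else PREFIX[n1] + "-" + NAME[sym1]  # mono- omitted on first element
    second = PREFIX[n2] + "-" + IDE[sym2]
    return first + " " + second

print(name_binary("CO2"))    # carbon di-oxide
print(name_binary("N2O4"))   # di-nitrogen tetra-oxide
```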
1) Practice Problems: Name the following compounds.
2) Write the formula of the following compounds.
a) chlorine di-oxide
b) di-chlorine mono-oxide
c) tri-phosphorus tetra-oxide
d) tetra-sulfur di-nitride
e) iodine hepta-fluoride
f) tetra-arsenic octa-oxide
g) penta-silicon hepta-oxide
h) tri-carbon hexa-chloride
i) phosphorus mono-chloride
HOMEWORK - NAMING COMPOUNDS - PART I: (BINARY COMPOUNDS)
Chemical Bond: is a strong __________________________ between atoms or ions in a compound.
Factors that affect bonding are: 1) The __________________________ nature of bonding, and 2) The __________________________ to other atoms.
There are millions of stable compounds formed from fewer than 100 elements.
In compounds formed from representative elements, the atoms have acquired an electron configuration that is __________________________ with that of a noble gas element.
For Representative Elements: the electrons in the "s" and "p" orbitals.
- Full outer shell = 8 electrons
- Empty outer shell = 0 electrons
The Law of Definite Composition: The proportion of elements in a given compound is __________________________. - The proportions in a compound can only be determined experimentally (in the lab).
Molecular arrangement: The __________________________ shows how elements are arranged - Also, the structure helps predict properties.
SHAPE OF A MOLECULE AND WHY IT IS IMPORTANT:
Shape is another aspect of structure that influences __________________________.
Shape is crucial in determining whether a reaction will (or will not) occur - Example: Biological Proteins or Enzymes.
The strength of a bond is measured by its BOND ENERGY. Bond energy is the energy involved in forming or breaking a bond.
CATION: Atom loses one (or more) electrons. Cations are __________________________ in charge. (Metals = Cations)
ANION: Atom gains one (or more) electrons. Anions are __________________________ in charge. (Nonmetals = Anions)
The electrostatic attraction is the mechanism for ionic bonds. Electrostatic attraction is due to opposite charges which are attracted to each other.
Ionic Bond: A chemical bond by the __________________________ between a cation and anion.
PROPERTIES OF IONIC COMPOUNDS:
- The liquid state of an ionic compound __________________________ electricity.
- Mobile charged particles are necessary in order for a substance to conduct an electrical current.
- At room temperature, crystals of ionic compounds exist as regular, three-dimensional arrangements of cations and anions.
CRYSTAL LATTICES: The __________________________ arrangement of atoms.
Forming Ionic Compounds: 1) React a metal with a non-metal. 2) The metal must transfer one (or more) electrons to the non-metal.
Covalent bonding occurs when two or more nonmetals __________________________ electrons - attempting to obtain a stable octet of electrons at least part of the time.
H2, Cl2, O2, N2, F2, etc., exist as diatomic molecules, but when each of these gases changes to the liquid state, it DOES NOT conduct electricity. Therefore, each DOES NOT contain ions.
Even though the Hydrogen molecule (for example), H2, does not contain ions, each atom of hydrogen is still made up of charged particles (protons and electrons).
When two Hydrogen atoms get close, the attraction between electrons and protons occurs.
The electrons are then shared.
The energy needed to break a bond is the Bond Energy.
The distance between the __________________________ is referred to as the bond length.
COVALENT BONDS and LEWIS DOT SYMBOLS:
Electron dot symbols are also used for representing covalent bonds. The formation of a molecule of hydrogen can be illustrated as:
Just as in the case of hydrogen, when two chlorine atoms approach, the unpaired electrons are shared and a covalent bond is formed. The formation of molecular chlorine can be illustrated as:
UNEQUAL SHARING OF ELECTRONS:
Could a covalent bond form between a hydrogen atom and a chlorine atom? If you draw the Lewis Dot Symbol for each atom, you will find that they both have an __________________________. IF THE UNPAIRED ELECTRONS are shared by both the hydrogen and chlorine nuclei, a Covalent Bond IS FORMED.
- If both nuclei are identical ( same number of protons ), the electrons will be shared __________________________.
- If one nucleus has a stronger attraction for the electrons than the other nucleus, the likelihood is that the shared electrons will be __________________________ to the stronger nucleus.
POLAR COVALENT BONDS:
Even though electrons are shared, the fact that they are more strongly attracted to the chlorine atom results in a partial negative charge at the chlorine atom and a partial positive charge at the hydrogen atom.
Such a bond is called a __________________________ . A dipole has two separated, equal but opposite charges.
A covalent bond that has a dipole is called a __________________________ .
When two different elements (therefore, an unequal sharing of electrons) form a covalent bond, the bond is usually a Polar Covalent Bond, as in HCl.
- A partial negative charge is represented with the lower case Greek letter delta with a negative sign
- A partial positive charge is represented with the lower case Greek letter delta with a positive sign
Covalent bonds in which electrons are __________________________ by two nuclei, as in H2 or Cl2, are called Nonpolar Covalent Bonds.
SUMMARY OF THE BOND TYPES:
NONPOLAR COVALENT BOND: Equal sharing of electrons, because the nuclear attraction for the electron pair is EQUAL.
POLAR COVALENT BOND: Unequal sharing of electrons, because one atom has a greater nuclear attraction for the electron pair than the other atom.
IONIC BOND: The metal donates its electrons to obtain a positive charge and the nonmetal accepts the electrons from the metal to obtain a negative charge.
ELECTRONEGATIVITY is the measure of the __________________________ an atom has for a shared pair of electrons in a bond.
The above example shows Hydrogen and Chlorine bonding to form a polar covalent bond.
Chlorine has a greater attraction for the electron pair than hydrogen. Therefore, chlorine is said to be __________________________ electronegative than hydrogen.
In general, the Bonding Electrons will be __________________________ to the atom that has a higher electronegativity.
Francium has the LOWEST electronegativity.
Fluorine has the HIGHEST electronegativity.
PREDICTING THE "TYPES" OF BONDS:
__________________________ in Electronegativities is used as a guide to determine the degree of electron sharing in a bond.
- As the electronegativity difference between the atoms increases, the degree of sharing decreases.
- If the difference in electronegativities is 1.7 or more, the bond is GENERALLY considered more IONIC than COVALENT.
- If the electronegativity difference is between 0.1 and 1.7, the bond is a POLAR COVALENT bond that is GENERALLY considered more covalent than ionic.
- If the electronegativity difference is ZERO, the bond is considered to be a NONPOLAR COVALENT bond.
Classify the bond in each of the following as ionic, polar covalent, or nonpolar covalent for: KF, O2, and ICl. (show the partial charge for any polar covalent bonds.)
KF . . . . . . . Electronegativity for K = 0.8, and F = 4.0; ( 4.0 - 0.8 = 3.2 ) - [ ionic bond ]
O2 . . . . . . . Electronegativity for O = 3.5; ( 3.5 - 3.5 = 0 ) - [ nonpolar covalent bond ]
ICl . . . . . . Electronegativity for I = 2.5, and Cl = 3.0; ( 3.0 - 2.5 = 0.5 ) - [ polar covalent bond ];
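These cutoffs translate directly into a small decision function. The sketch below uses the thresholds given above; the electronegativity table is just a small sample covering the examples and practice problems.

```python
# Classify a bond from the electronegativity difference, using the cutoffs
# above: a difference of 1.7 or more is ionic, zero is nonpolar covalent,
# and anything in between is polar covalent.  The table is a small sample.

ELECTRONEGATIVITY = {"H": 2.1, "C": 2.5, "N": 3.0, "O": 3.5, "F": 4.0,
                     "Cl": 3.0, "Br": 2.8, "I": 2.5, "K": 0.8, "Na": 0.9}

def bond_type(element1, element2):
    difference = abs(ELECTRONEGATIVITY[element1] - ELECTRONEGATIVITY[element2])
    if difference >= 1.7:
        return "ionic"
    if difference == 0:
        return "nonpolar covalent"
    return "polar covalent"

print(bond_type("K", "F"))    # ionic             (4.0 - 0.8 = 3.2)
print(bond_type("O", "O"))    # nonpolar covalent (difference = 0)
print(bond_type("I", "Cl"))   # polar covalent    (3.0 - 2.5 = 0.5)
```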
3) PRACTICE PROBLEMS: Identify the type of bond in the following substances: ( show the partial charge for any polar covalent bonds. )
GO TO "PREDICTING BOND TYPES" WORKSHEET
ELECTRON DOT CONFIGURATION (LEWIS DOT SYMBOLS):
1916, Lewis: Developed a system of arranging dots (representing valence electrons) around the symbols of the elements.
- The Symbol represents the Nucleus and Core Electrons for that Element.
- Dots represent the __________________________ .
Example: Write the electron dot symbol for phosphorus.
Answer: P = Phosphorus, Group 5A, therefore, there are 5 valence electrons.
4) PRACTICE PROBLEMS: Draw the Lewis Dot Symbol for the Given.
Example Problem: Use Electron Dot Symbols to represent the formation of Magnesium Fluoride from atoms of Mg and F.
Mg is in Group 2A, ( Forms a Cation, Mg+2 )
F is in Group 7A, ( Forms an Anion, F-1 )
Hopefully, you noticed the brackets in the products. The brackets are used for IONS ONLY - representing the "new" charge on the atom because of the gain or loss of electrons.
5) PRACTICE PROBLEMS: Draw the Lewis Dot Symbol for the Given.
a. sodium oxide
b. magnesium chloride
THE OCTET RULE:
Predicting the bonding arrangement that occurs between atoms in a molecule is based on two important observations.
- The first fact is that noble gases are unreactive and form very few compounds.
The reason noble gases do not generally react is that the outermost "s" and "p" orbitals are __________________________ , making them particularly stable.
- The second fact is that ionic compounds of the representative elements are generally made up of anions and cations that have noble gas configurations.
From observations like these, chemists have formulated the OCTET RULE. The octet rule is based on the assumption that atoms form bonds to achieve a noble gas configuration. Each atom would then have 8 ( an octet of ) electrons in its valence orbitals.
LEWIS DOT SYMBOLS FOR MOLECULES:
You can use the octet rule to write the Lewis Dot symbols for molecules. To do this, you need to know the following information:
1) How many of each kind of atom are in the molecule? Determine this from the chemical formula.
2) How many valence electrons are available? Determine this from looking at which "Group" each element is within - example: sulfur is in group 6A, therefore there are 6 valence electrons.
3) What is the skeleton structure? The skeleton structure shows which atoms are bonded to each other. The skeleton structure can be proven experimentally. GENERALLY, the __________________________ electronegative element is located as the central atom - so another GENERAL rule is that if only one atom of an element appears in the formula, make it the central atom.
4) Where do the "dots" go in the structure? Place the dots around the __________________________ atoms first so that each atom has 8 electrons - an octet structure. THEN place the remaining dots around (on) the central atom.
EXAMPLE: Write the Lewis Dot Symbol for H2O, NH3, and CH4.
Number of each kind of atom in molecule: __________________________
Valence electrons for each atom: __________________________
Total number of valence electrons: __________________________
Arrangement of dots: __________________________
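Step 2 of the procedure above (counting valence electrons from the group number) is easy to tally by hand or with a short script. The Python sketch below is only an illustration; the group-number lookup is assumed and covers just H, C, N, and O, and each formula is written as an element-count pair rather than parsed from text.

```python
# Minimal sketch: total valence electrons for a neutral molecule,
# using main-group numbers (H=1, C=4, N=5, O=6) as assumed lookup values.
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6}

def total_valence_electrons(atom_counts):
    """atom_counts: dict of element symbol -> number of atoms in the formula."""
    return sum(VALENCE[element] * count for element, count in atom_counts.items())

for name, formula in [("H2O", {"H": 2, "O": 1}),
                      ("NH3", {"N": 1, "H": 3}),
                      ("CH4", {"C": 1, "H": 4})]:
    print(name, total_valence_electrons(formula))
# H2O 8
# NH3 8
# CH4 8
```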
6) Practice Problems: Draw the Lewis Dot Symbols for:
DOUBLE AND TRIPLE BONDS:
Try to draw the Lewis Dot symbol for carbon dioxide, CO2. When you use the method described above - you will encounter a problem. Carbon has four valence electrons, and the two oxygens together have 12 ( 2 x 6 ), for a total of 16 electrons. Two possible structures that can be drawn for carbon dioxide are these:
In either case, either the carbon or the oxygen will not have eight electrons represented. However, if each oxygen atom shares two pairs of electrons with the carbon atom, double bonds are formed, and octets around both carbon and oxygen can be achieved.
The circles are drawn to represent the octet for each atom.
A Double Bond is a covalent bond in which four electrons (two pairs) are shared by the bonding atoms.
A Triple Bond is a covalent bond in which two atoms share three pairs of electrons. Nitrogen gas (N2) is an example of a molecule with a triple bond.
Draw the Lewis Dot Symbol for HCN.
7) PRACTICE PROBLEMS: Draw the Lewis Dot Symbol for:
LEWIS DOT SYMBOLS FOR POLYATOMIC IONS:
There are a large number of ionic compounds made of more than two elements. In these compounds, at least one of the ions consists of two or more atoms which are polar covalently bonded. However, the particle as a whole possesses an overall charge.
For example, consider the sulfate ion, SO4-2:
Each oxygen has a polar covalent bond to the sulfur. Because the oxygen and sulfur atoms each have only 6 electrons in their outer orbitals, all of the atoms CANNOT have 8 electrons in their respective outer orbitals at any one point in time ( an "X" is drawn on two of the oxygens to represent the electron missing from the octet. )
Since there are two ( total ) missing electrons for this polyatomic ion, when the sulfate ion reacts with a metal(s), the electrons will occupy the __________________________ , giving the polyatomic ion its charge. Theoretically, the above polyatomic ion does not have a charge - yet (until the reaction occurs!)
Also, for a positive polyatomic ion, the positive charge is designated because the ion will be __________________________ electrons.
8) PRACTICE PROBLEMS: Write the Lewis Dot symbol for:
EQUIVALENT LEWIS DOT SYMBOLS (RESONANCE STRUCTURES):
Some molecules and polyatomic ions have properties that cannot be adequately explained by a single Lewis Dot symbol. An example is the carbonate ion, CO3-2. One Lewis Dot Symbol that fulfills the octet rule is:
However, the double bond could be on any oxygen atom (not just the oxygen atom that is pictured). Therefore, there are three possible Lewis dot symbols for carbonate.
However, experimental studies show that all three of the carbon-oxygen bonds are identical; there is no evidence of separate single and double bonds. In fact, the bonds are stronger than a carbon-oxygen single bond and weaker than a carbon-oxygen double bond. This phenomenon is called RESONANCE.
In cases where resonance occurs, more than one acceptable Lewis dot symbol can be written without changing the arrangement of atoms.
Resonance is often represented by writing each of the different Lewis Dot Symbols and including double-headed arrows between the possible symbols.
GO TO LEWIS DOT SYMBOL WORKSHEET:
LIMITATIONS OF THE OCTET RULE:
While the octet rule is a useful model that allows you to picture the structure of molecules, it is important to realize that not all MOLECULES obey the octet rule. The concept serves only as a rule of thumb.
MOLECULES WITH MORE THAN AN OCTET:
Compounds also exist in which the central atom has more than an octet of electrons. All of the compounds formed from the noble gas elements (Argon on down) are examples.
A very important concept to remember: ONLY Carbon, Nitrogen, Oxygen, and Fluorine MUST have an octet!
The existence of compounds of noble gas elements was thought to be impossible - because the noble gas atoms already have complete octets. One of the first noble gas compounds to be synthesized was xenon tetrafluoride, XeF4. The electron dot structure for XeF4 has twelve electrons in the valence orbitals of xenon.
- PREDICTING THE SHAPES OF MOLECULES:
There is no direct relationship between the formula of a compound and the shape of its molecules. The shapes of these molecules can be predicted from their Lewis structures, however, with a model developed about 30 years ago, known as the valence-shell electron-pair repulsion (VSEPR) theory.
The VSEPR theory assumes that each atom in a molecule will achieve a geometry that minimizes the repulsion between electrons in the valence shell of that atom. The five compounds shown below can be used to demonstrate how the VSEPR theory can be applied to simple molecules.
- LINEAR MOLECULES:
There are only two places in the valence shell of the central atom in CO2 where electrons can be found. Repulsion between these pairs of electrons can be minimized by arranging them so that they point in opposite directions. Thus, the VSEPR theory predicts that CO2 should be a linear molecule, with a 180° angle between the two C=O double bonds.
- TRIGONAL PLANAR MOLECULES:
There are three places on the central atom in boron trifluoride (BF3) where valence electrons can be found. Repulsion between these electrons can be minimized by arranging them toward the corners of an equilateral triangle. The VSEPR theory, therefore, predicts a trigonal planar geometry for the BF3 molecule, with a F - B - F bond angle of 120°. Also, it is important to note that boron is VERY HAPPY with only six electrons and not eight. This is a "tricky" element that is overlooked easily.
- TETRAHEDRAL MOLECULES:
CO2 and BF3 are both two-dimensional molecules, in which the atoms lie in the same plane. If we place the same restriction on methane (CH4), we should get a square-planar geometry in which the H - C - H bond angle is 90°. If we let this system expand into three dimensions - we end up with a tetrahedral molecule in which the H - C - H bond angle is approximately 109°.
- TRIGONAL BIPYRAMID MOLECULES:
Repulsion between the five pairs of valence electrons on the phosphorus atom in PF5 can be minimized by distributing these electrons toward the corners of a trigonal bipyramid. Three of the positions in a trigonal bipyramid are labeled equatorial because they lie along the equator of the molecule. The other two are axial because they lie along an axis perpendicular to the equatorial plane. The angle between the three equatorial positions is 120°, while the angle between an axial and an equatorial position is 90°.
- OCTAHEDRON MOLECULES:
There are six places on the central atom in SF6 where valence electrons can be found. The repulsion between these electrons can be minimized by distributing them toward the corners of an octahedron. The term octahedron literally means "eight sides," but it is the six corners, or vertices, that interest us. To imagine the geometry of an SF6 molecule, locate fluorine atoms on opposite sides of the sulfur atom along the X, Y, and Z axes of an XYZ coordinate system.
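The five cases above all follow one pattern: count the places around the central atom where electrons can be found and choose the arrangement that spreads them as far apart as possible. A minimal Python sketch of that lookup is shown below; the geometry names and ideal angles are the ones quoted above, and the exact tetrahedral angle works out to arccos(-1/3) ≈ 109.5°.

```python
import math

# Minimal sketch: ideal geometry for 2-6 electron domains around a central atom,
# matching the five example molecules discussed above.
VSEPR_GEOMETRY = {
    2: ("linear",               180.0),  # CO2
    3: ("trigonal planar",      120.0),  # BF3
    4: ("tetrahedral",          109.5),  # CH4
    5: ("trigonal bipyramidal",  None),  # PF5: 120 deg equatorial, 90 deg axial-equatorial
    6: ("octahedral",            90.0),  # SF6
}

print(VSEPR_GEOMETRY[4])                        # ('tetrahedral', 109.5)
print(round(math.degrees(math.acos(-1/3)), 1))  # 109.5 -- the exact tetrahedral angle
```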
The valence electrons on the central atom in both NH3 and H2O should be distributed toward the corners of a tetrahedron, as shown in the figure below. Our goal, however, is not predicting the distribution of valence electrons. It is to use this distribution of electrons to predict the shape of the molecule. Until now, the two have been the same. Once we include nonbonding electrons, that is no longer true.
The VSEPR theory predicts that the valence electrons on the central atoms in ammonia and water will point toward the corners of a tetrahedron. Because we cannot locate the nonbonding electrons with any precision, this prediction cannot be tested directly. But the results of the VSEPR theory can be used to predict the positions of the nuclei in these molecules, which can be tested experimentally. If we focus on the positions of the nuclei in ammonia, we predict that the NH3 molecule should have a shape best described as trigonal pyramidal, with the nitrogen at the top of the pyramid. Water, on the other hand, should have a shape that can be best described as bent, or angular. Both of these predictions have been shown to be correct, which reinforces our faith in the VSEPR theory.
Predict the shape of the following molecules.
Draw the Lewis Dot Structure and then name the shape of the following molecules:
- INCORPORATING DOUBLE AND TRIPLE BONDS
Compounds that contain double and triple bonds raise an important point: The geometry around an atom is determined by the number of places in the valence shell of an atom where electrons can be found, not the number of pairs of valence electrons. Consider the Lewis structures of carbon dioxide (CO2) and carbonate (CO32-) ion, for example.
There are four pairs of bonding electrons on the carbon atom in CO2, but only two places where these electrons can be found. (There are electrons in the C=O double bond on the left and electrons in the double bond on the right.) The force of repulsion between these electrons is minimized when the two C=O double bonds are placed on opposite sides of the carbon atom. The VSEPR theory, therefore, predicts that CO2 will be a linear molecule, just like BeF2, with a bond angle of 180°.
The Lewis structure of the carbonate ion also suggests a total of four pairs of valence electrons on the central atom. But these electrons are concentrated in three places: The two C - O single bonds and the C=O double bond. Repulsion between these electrons is minimized when the three oxygen atoms are arranged toward the corners of an equilateral triangle. The CO3-2 ion should therefore have a trigonal-planar geometry, just like BF3, with a 120° bond angle.
Bond polarities (Polar Bonds) arise from bonds between atoms of different electronegativity. When more complex molecules are examined, we must consider the possibility of molecular polarities that arise from the sums of all of the individual bond polarities.
To do full justice to molecular polarity, one must consider the concept of vectors (mathematical quantities that have both direction and magnitude).
Let's begin by thinking of a polar bond as a VECTOR pointed from the positively charged atom to the negatively charged atom. The size of the vector is proportional to the difference in electronegativity of the two atoms.
If the two atoms are identical, the magnitude of the vector is ZERO, and the molecule has a nonpolar bond.
Let's consider molecules with three atoms. We can establish from the Lewis dot symbols and VSEPR that CO2 is a linear molecule. Each of the C - O bonds will have a vector arrow pointing from the carbon to the oxygen. The two vectors should be identical and pointed in exactly opposite directions.
The sum of these two vectors must be zero because the vectors must cancel one another out. Even though the C - O bonds must be polar, the CO2 MOLECULE is NONPOLAR.
HCN, hydrogen cyanide, is linear. Since carbon is more electronegative than hydrogen one would expect a vector pointing from H to C. In addition, nitrogen is more electronegative than C so one should expect a bond vector pointing from C to N. The H-C and C-N vectors add to give a total vector pointing from the H to the N.
HCN is a POLAR MOLECULE with the vector moving from the hydrogen to the nitrogen - making the hydrogen end somewhat positive and the nitrogen end somewhat negative.
In contrast, let's examine the case of SO2. We know from the Lewis dot symbol and from VSEPR that this molecule is "bent." Its overall electron-pair geometry would be considered trigonal planar if we counted the lone pair electrons on the sulfur.
Lone pair electrons are NOT considered when we examine polarity since they have already been taken into account in the electronegativity.
We would predict that there should be polarity vectors pointing from the sulfur to the two oxygens. Since the molecule is bent the vectors will NOT cancel out. Instead they should be added together to give a combined vector that bisects the O-S-O angle and points from the S to a point in-between the two oxygens.
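The vector argument above can be made concrete with a little arithmetic: represent each bond dipole as a unit vector pointing from the positive (less electronegative) atom to the negative (more electronegative) atom and add them. The Python sketch below is illustrative only; the unit magnitudes and the roughly 119° angle used for bent SO2 are assumptions for the sake of the example.

```python
import math

def dipole_sum(bond_angle_deg):
    """Add two equal bond-dipole vectors separated by the given angle (central atom at origin)."""
    half = math.radians(bond_angle_deg) / 2
    # Each bond dipole has unit magnitude; place the two bonds symmetrically about the y-axis.
    v1 = (math.sin(half),  math.cos(half))
    v2 = (-math.sin(half), math.cos(half))
    total = (v1[0] + v2[0], v1[1] + v2[1])
    return math.hypot(*total)

print(round(dipole_sum(180.0), 3))  # 0.0 -- linear CO2: the bond dipoles cancel, so the molecule is nonpolar
print(round(dipole_sum(119.0), 3))  # about 1.0 -- bent SO2: a net dipole bisecting the O-S-O angle remains
```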
GO TO VSEPR WORKSHEET:
As early as 20 June 1922, in an address to a joint meeting of the Institute of Electrical Engineers and the Institute of Radio Engineers in New York, the radio pioneer Guglielmo Marconi suggested using radio waves to detect ships:1
By the time Germany invaded Poland in September 1939 and World War II was underway, radio detection, location, and ranging technologies and techniques were available in Japan, France, Italy, Germany, England, Hungary, Russia, Holland, Canada, and the United States. Radar was not so much an invention, springing from the laboratory bench to the factory floor, but an ongoing adaptation and refinement of radio technology. The apparent emergence of radar in Japan, Europe, and North America more or less at the same time was less a case of simultaneous invention than a consequence of the global nature of radio research.2
Although radar is identified overwhelmingly with World War II, historian Sean S. Swords has argued that the rise of high-performance and long-range aircraft in the late 1930s would have promoted the design of advanced radio navigational aids, including radar, even without a war.3 More decisively, however, ionospheric research propelled radar development in the 1920s and 1930s. As historian Henry Guerlac has pointed out, "Radar was developed by men who were familiar with the ionospheric work. It was a relatively straightforward adaptation for military purposes of a widely-known scientific technique, which explains why this adaptation--the development of radar--took place simultaneously in several different countries."4
The prominence of ionospheric research in the history of radar and later of radar astronomy cannot be ignored. Out of ionospheric research came the essential technology for the beginnings of military radar in Britain, as well as its first radar researchers and research institutions. After the war, as we shall see, ionospheric research also drove the emergence of radar astronomy.
Despite its scientific origins, radar made its mark and was baptized during World War II as an integral and necessary instrument of offensive and defensive warfare. Located on land, at sea, and in the air, radars detected enemy targets and determined their position and range for artillery and aircraft in direct enemy encounters on the battlefield. Other radars identified aircraft to ground bases as friend or foe, while others provided navigational assistance and coastal defense. World War II was the first electronic war, and radar was its prime agent.5
In 1940, nowhere did radar research achieve the same advanced state as in Britain. The British lead initially resulted from a decision to design and build a radar system for coastal defense, while subsequent research led to the invention of the cavity magnetron, which placed Britain in the forefront of microwave radar. The impetus to achieve that lead in radar came from a realization that the island nation was no longer safe from enemy invasion.
For centuries, Britain's insularity and navy protected it from invasion. The advent of long-range airplanes that routinely outperformed their wooden predecessors spelled the end of that protection. Existing aircraft warning methods were ineffectual. That Britain was virtually defenseless against an air assault became clear during the summer air exercises of 1934. In simulated night attacks on London and Coventry, both the Air Ministry and the Houses of Parliament were successfully "destroyed," while few "enemy" bombers were intercepted.6
International politics also had reached a critical point. The Geneva Disarmament Conference had collapsed, and Germany was rearming in defiance of the Treaty of Versailles. Under attack from Winston Churchill and the Tory opposition, the British government abandoned its disarmament policy and initiated a five-year expansion of the Royal Air Force. Simultaneously, the Air Ministry Director of Scientific Research, Henry Egerton Wimperis, created a committee to study air defense methods.
Just before the Committee for the Scientific Survey of Air Defence first met on 28 January 1935, Wimperis contacted fellow Radio Research Board member Robert (later Sir) Watson-Watt. Watson-Watt, who oversaw the Radio Research Station at Slough, was a scientist with twenty years of experience as a government researcher. Ionospheric research had been a principal component of Radio Research Station studies, and Watson-Watt fostered the development there of a pulse-height technique.7
The pulse-height technique was to send short pulses of radio energy toward the ionosphere and to measure the time taken for them to return to Earth. The elapsed travel time of the radio waves gave the apparent height of the ionosphere. Merle A. Tuve, then of Johns Hopkins University, and Gregory Breit of the Carnegie Institution's Department of Terrestrial Magnetism in Washington, first developed the technique in the 1920s and undertook ionospheric research in collaboration with the Naval Research Laboratory and the Radio Corporation of America.8
In response to the wartime situation, Wimperis asked Watson-Watt to determine the practicality of using radio waves as a "death ray." Rather than address the proposed "death ray," Watson-Watt's memorandum reply drew upon his experience in ionospheric research. Years later, Watson-Watt contended, "I regard this Memorandum on the 'Detection and Location of Aircraft by Radio Methods' as marking the birth of radar and as being in fact the invention of radar." Biographer Ronald William Clark has termed the memorandum "the political birth of radar."9 Nonetheless, Watson-Watt's memorandum was really less an invention than a proposal for a new radar application.
The memorandum outlined how a radar system could be put together and made to detect and locate enemy aircraft. The model for that radar system was the same pulse-height technique Watson-Watt had used at Slough. Prior to the memorandum in its final form going before the Committee, Wimperis had arranged for a test of Watson-Watt's idea that airplanes could reflect significant amounts of radio energy, using a BBC transmitter at Daventry. "Thus was the constricting 'red tape' of official niceties slashed by Harry Wimperis, before the Committee for the Scientific Survey of Air Defence had so much as met," Watson-Watt later recounted. The success of the Daventry test shortly led to the authorization of funding (£12,300 for the first year) and the creation of a small research and development project at Orford Ness and Bawdsey Manor that drew upon the expertise of the Slough Radio Research Station.
From then onwards, guided largely by Robert Watson-Watt, the foundation of the British radar effort, the early warning Chain Home, materialized. The Chain Home began in December 1935, with Treasury approval for a set of five stations to patrol the air approaches to the Thames estuary. Before the end of 1936, and long before the first test of the Thames stations in the autumn of 1937, plans were made to expand it into a network of nineteen stations along the entire east coast; later, an additional six stations were built to cover the south coast.
The Chain Home played a crucial role in the Battle of Britain, which began in July 1940. The final turning point was on 15 September, when the Luftwaffe suffered a record number of planes lost in a single day. Never again did Germany attempt a massive daylight raid over Britain. However, if radar won the day, it lost the night. Nighttime air raids showed a desperate need for radar improvements.
In order to wage combat at night, fighters needed the equivalent of night vision--their own on-board radar, but the prevailing technology was inadequate. Radars operating at relatively long wavelengths, around 1.5 meters (200 MHz), cast a beam that radiated both straight ahead and downwards. The radio energy reflected from the Earth was so much greater than that of the enemy aircraft echoes that the echoes were lost at distances greater than the altitude of the aircraft. At low altitudes, such as those used in bombing raids or in air-to-air combat, the lack of radar vision was grave. Microwave radars, operating at wavelengths of a few centimeters, could cast a narrower beam and provide enough resolution to locate enemy aircraft.10
Although several countries had been ahead of Britain in microwave radar technology before the war began, Britain leaped ahead in February 1940, with the invention of the cavity magnetron by Henry A. H. Boot and John T. Randall at the University of Birmingham.11 Klystrons were large vacuum tubes used to generate microwave power, but they did not operate adequately at microwave frequencies. The time required for electrons to flow through a klystron was too long to keep up with the frequency of the external oscillating circuit. The cavity magnetron resolved that problem and made possible the microwave radars of World War II. As Sean Swords asserted, "The emergence of the resonant-cavity magnetron was a turning point in radar history." 12 The cavity magnetron launched a line of microwave research and development that has persisted to this day.
The cavity magnetron had no technological equivalent in the United States, when the Tizard Mission arrived in late 1940 with one of the first ten magnetrons constructed. The Tizard Mission, known formally as the British Technical and Scientific Mission, had been arranged at the highest levels of government to exchange technical information between Britain and the United States. Its head and organizer, Henry Tizard, was a prominent physics professor and a former member of the committee that had approved Watson-Watt's radar project. As James P. Baxter wrote just after the war's end with a heavy handful of hyperbole, though not without some truth: "When the members of the Tizard Mission brought one [magnetron] to America in 1940, they carried the most valuable cargo ever brought to our shores. It sparked the whole development of microwave radar and constituted the most important item in reverse Lease-Lend." 13
In late September 1940, Dr. Edward G. Bowen, the radar scientist on the Tizard Mission, showed a magnetron to members of the National Defense Research Committee (NDRC), which President Roosevelt had just created on 27 June 1940. One of the first acts of the NDRC, which later became the Office of Scientific Research and Development, was to establish a Microwave Committee, whose stated purpose was "to organize and consolidate research, invention, and development as to obtain the most effective military application of microwaves in the minimum time." 14
A few weeks after the magnetron demonstration, the NDRC decided to create the Radiation Laboratory at MIT. While the MIT Radiation Laboratory accounted for nearly 80 percent of the NDRC Microwave Division's contracts, an additional 136 contracts for radar research, development, and prototype work were let out to 16 colleges and universities, two private research institutions, and the major radio industrial concerns, with Western Electric taking the largest share. The MIT Radiation Laboratory personnel skyrocketed from thirty physicists, three guards, two stock clerks, and a secretary for the first year to a peak employment level of 3,897 (1,189 of whom were staff) on 1 August 1945. The most far-reaching early achievement, accomplished in the spring of 1941, was the creation of a new generation of radar equipment based on a magnetron operating at 3 cm. Experimental work in the one cm range led to numerous improvements in radars at 10 and 3 cm.15
Meanwhile, research and development of radars of longer wavelengths were carried out by the Navy and the Army Signal Corps, both of which had had active ongoing radar programs since the 1930s. The Navy started its research program at the Naval Research Laboratory (NRL) before that of the Signal Corps, but radar experimenters after the war used Signal Corps equipment, especially the SCR-270, mainly because of its wide availability. A mobile SCR-270, placed on Oahu as part of the Army's Aircraft Warning System, spotted incoming Japanese airplanes nearly 50 minutes before they bombed United States installations at Pearl Harbor on 7 December 1941. The warning was ignored, because an officer mistook the radar echoes for an expected flight of B-17s.16
Historians view the large-scale collection of technical and financial resources and manpower at the MIT Radiation Laboratory engaged in a concerted effort to research and develop new radar components and systems, along with the Manhattan Project, as signalling the emergence of Big Science. Ultimately, from out of the concentration of personnel, expertise, materiel, and financial resources at the successor of the Radiation Laboratory, Lincoln Laboratory, arose the first attempts to detect the planet Venus with radar. The Radiation Laboratory Big Science venture, however, did not contribute immediately to the rise of radar astronomy.
The radar and digital technology used in those attempts on Venus was not available at the end of World War II, when the first lunar and meteor radar experiments were conducted. Moreover, the microwave radars that issued from Radiation Laboratory research were far too weak for planetary or lunar work and operated at frequencies too high to be useful in meteor studies. Outside the Radiation Laboratory, though, U.S. Army Signal Corps and Navy researchers had created radars, like the SCR-270, that were more powerful and operated at lower frequencies, in research and development programs that were less concentrated and conducted on a smaller scale than the Radiation Laboratory effort.
Wartime production created an incredible excess of such radar equipment. The end of fighting turned it into war surplus to be auctioned off, given away, or buried as waste. World War II also begot a large pool of scientists and engineers with radar expertise who sought peacetime scientific and technical careers at war's end. That pool of expertise, when combined with the cornucopia of high-power, low-frequency radar equipment and a pinch of curiosity, gave rise to radar astronomy.
A catalyst crucial to that rise was ionospheric research. In the decade and a half following World War II, ionospheric research underwent the kind of swift growth that is typical of Big Science. The ionospheric journal literature doubled every 2.9 years from 1926 to 1938, before stagnating during the war; but between 1947 and 1960, the literature doubled every 5.8 years, a rate several times faster than the growth rate of scientific literature as a whole.17 Interest in ionospheric phenomena, as expressed in the rapidly growing research literature, motivated many of the first radar astronomy experiments undertaken on targets beyond the Earth's atmosphere.
Typical was the first successful radar experiment aimed at the Moon. That experiment was performed with Signal Corps equipment at the Corps' Evans Signal Laboratory, near Belmar, New Jersey, under the direction of John H. DeWitt, Jr., Laboratory Director. DeWitt was born in Nashville and attended Vanderbilt University Engineering School for two years. Vanderbilt did not offer a program in electrical engineering, so DeWitt dropped out in order to satisfy his interest in broadcasting and amateur radio. After building Nashville's first broadcasting station, in 1929 DeWitt joined the Bell Telephone Laboratories technical staff in New York City, where he designed radio broadcasting transmitters. He returned to Nashville in 1932 to become Chief Engineer of radio station WSM. Intrigued by Karl Jansky's discovery of "cosmic noise," DeWitt built a radio telescope and searched for radio signals from the Milky Way.
In 1940, DeWitt attempted to bounce radio signals off the Moon in order to study the Earth's atmosphere. He wrote in his notebook: "It occurred to me that it might be possible to reflect ultrashort waves from the moon. If this could be done it would open up wide possibilities for the study of the upper atmosphere. So far as I know no one has ever sent waves off the earth and measured their return through the entire atmosphere of the earth."18
On the night of 20 May 1940, using the receiver and 80-watt transmitter configured for radio station WSM, DeWitt tried to reflect 138-MHz (2-meter) radio waves off the Moon, but he failed because of insufficient receiver sensitivity. After joining the staff of Bell Telephone Laboratories in Whippany, New Jersey, in 1942, where he worked exclusively on the design of a radar antenna for the Navy, DeWitt was commissioned in the Signal Corps and was assigned to serve as Executive Officer, later as Director, of Evans Signal Laboratory.
On 10 August 1945, the day after the United States unleashed a second atomic bomb on Japan, military hostilities between the two countries ceased. DeWitt was not demobilized immediately, and he began to plan his pet project, the reflection of radio waves off the Moon. He dubbed the scheme Project Diana after the Roman mythological goddess of the Moon, partly because "the Greek [sic] mythology books said that she had never been cracked."
In September 1945, DeWitt assembled his team: Dr. Harold D. Webb, Herbert P. Kauffman, E. King Stodola, and Jack Mofenson. Dr. Walter S. McAfee, in the Laboratory's Theoretical Studies Group, calculated the reflectivity coefficient of the Moon. Members of the Antenna and Mechanical Design Group, Research Section, and other Laboratory groups contributed, too.
No attempt was made to design major components specifically for the experiment. The selection of the receiver, transmitter, and antenna was made from equipment already on hand, including a special crystal-controlled receiver and transmitter designed for the Signal Corps by radio pioneer Edwin H. Armstrong. Crystal control provided frequency stability, and the apparatus provided the power and bandwidth needed. The relative velocities of the Earth and the Moon caused the return signal to differ from the transmitted signal by as much as 300 Hz, a phenomenon known as Doppler shift. The narrow-band receiver permitted tuning to the exact radio frequency of the returning echo. As DeWitt later recalled: "We realized that the moon echoes would be very weak so we had to use a very narrow receiver bandwidth to reduce thermal noise to tolerable levels....We had to tune the receiver each time for a slightly different frequency from that sent out because of the Doppler shift due to the earth's rotation and the radial velocity of the moon at the time."19
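The 300-Hz figure follows from the two-way Doppler relation Δf = 2vf/c, where v is the relative radial velocity and f the transmitted frequency. The sketch below is illustrative only: the transmitter frequency of roughly 110 MHz is an assumption (the Diana operating frequency is not stated in this passage), and the calculation simply shows what radial velocity such a shift would correspond to.

```python
# Two-way Doppler shift for a radar echo: delta_f = 2 * v * f / c.
# The transmitter frequency is an assumed value (about 110 MHz) for illustration only.
C = 299_792_458.0  # speed of light, m/s
F_TX = 110e6       # Hz, assumed transmitter frequency

def doppler_shift(radial_velocity_m_s, f_tx=F_TX):
    return 2 * radial_velocity_m_s * f_tx / C

def radial_velocity(delta_f_hz, f_tx=F_TX):
    return delta_f_hz * C / (2 * f_tx)

print(round(radial_velocity(300.0), 1))  # ~408.8 m/s produces a 300 Hz shift at 110 MHz
print(round(doppler_shift(408.8), 1))    # ~300.0 Hz, consistency check
```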
The echoes were received both visually, on a nine-inch cathode-ray tube, and acoustically, as a 180 Hz beep. The aerial was a pair of "bedspring" antennas from an SCR-271 stationary radar positioned side by side to form a 32-dipole array antenna and mounted on a 30-meter (100-ft) tower. The antenna had only azimuth control; it had not been practical to secure a better mechanism. Hence, experiments were limited to the rising and setting of the Moon.
The Signal Corps tried several times, but without success. "The equipment was very haywire," recalled DeWitt. Finally, at moonrise, 11:48 A.M., on 10 January 1946, they aimed the antenna at the horizon and began transmitting. Ironically, DeWitt was not present: "I was over in Belmar having lunch and picking up some items like cigarettes at the drug store (stopped smoking 1952 thank God)." 20 The first signals were detected at 11:58 A.M., and the experiment was concluded at 12:09 P.M., when the Moon moved out of the radar's range. The radio waves had taken about 2.5 seconds to travel from New Jersey to the Moon and back, a distance of over 800,000 km. The experiment was repeated daily over the next three days and on eight more days later that month.
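The quoted round-trip figures can be checked against the speed of light. The sketch below (Python) assumes textbook values for the Earth-Moon distance, since the actual distance on 10 January 1946 is not given here; near the mean distance the echo delay is about 2.6 seconds, and near apogee the two-way path exceeds 800,000 km.

```python
# Rough check of the Diana echo delay, assuming mean and apogee Earth-Moon distances.
C_KM_S = 299_792.458  # speed of light, km/s

for label, distance_km in [("mean", 384_400), ("apogee", 405_500)]:
    round_trip_km = 2 * distance_km
    delay_s = round_trip_km / C_KM_S
    print(f"{label}: {round_trip_km} km round trip, {delay_s:.2f} s echo delay")
# mean: 768800 km round trip, 2.56 s echo delay
# apogee: 811000 km round trip, 2.71 s echo delay
```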
The War Department withheld announcement of the success until the night of 24 January 1946. By then, a press release explained, "the Signal Corps was certain beyond doubt that the experiment was successful and that the results achieved were pain-stakingly [sic] verified."21
As DeWitt recounted years later: "We had trouble with General Van Deusen our head of R&D in Washington. When my C.O. Col. Victor Conrad told him about it over the telephone the General did not want the story released until it was confirmed by outsiders for fear it would embarrass the Sig[nal]. C[orps]." Two outsiders from the Radiation Laboratory, George E. Valley, Jr. and Donald G. Fink, arrived and, with Gen. Van Deusen, observed a moonrise test of the system carried out under the direction of King Stodola. Nothing happened. DeWitt explained: "You can imagine that at this point I was dying. Shortly a big truck passed by on the road next to the equipment and immediately the echoes popped up. I will always believe that one of the crystals was not oscillating until it was shaken up or there was a loose connection which fixed itself. Everyone cheered except the General who tried to look pleased." 22
Although he had had other motives for undertaking Project Diana, DeWitt had received a directive from the Chief Signal Officer, the head of the Signal Corps, to develop radars capable of detecting missiles coming from the Soviet Union. No missiles were available for tests, so the Moon experiment stood in their place. Several years later, the Signal Corps erected a new 50-ft (15 meters) Diana antenna and 108-MHz transmitter for ionospheric research. It carried out further lunar echo studies and participated in the tracking of Apollo launches. 23
The news also hit the popular press. The implications of the Signal Corps experiment were grasped by the War Department, although Newsweek cynically cast doubt on the War Department's predictions by calling them worthy of Jules Verne. Among those War Department predictions were the accurate topographical mapping of the Moon and planets, measurement and analysis of the ionosphere, and radio control from Earth of "space ships" and "jet or rocket-controlled missiles, circling the Earth above the stratosphere." Time reported that Diana might provide a test of Albert Einstein's Theory of Relativity. In contrast to the typically up-beat mood of Life, both news magazines were skeptical, and rightly so; yet all of the predictions made by the War Department, including the relativity test, have come true in the manner of a Jules Verne novel. 24
Less than a month after DeWitt's initial experiment, a radar in Hungary replicated his results. The Hungarian apparatus differed from that of DeWitt in one key respect; it utilized a procedure, called integration, that was essential to the first attempt to bounce radar waves off Venus and that later became a standard planetary radar technique. The procedure's inventor was Hungarian physicist Zoltán Bay.
Bay graduated with highest honors from Budapest University with a Ph.D. in physics in 1926. Like many Hungarian physicists before him, Bay spent several years in Berlin on scholarships, doing research at both the prestigious Physikalisch-Technische-Reichanstalt and the Physikalisch-Chemisches-Institut of the University of Berlin. The results of his research tour of Berlin earned Bay the Chair of Theoretical Physics at the University of Szeged (Hungary), where he taught and conducted research on high intensity gas discharges.
Bay left the University of Szeged when the United Incandescent Lamps and Electric Company (Tungsram) invited him to head its industrial research laboratory in Budapest. Tungsram was the third largest manufacturer of incandescent lamps, radio tubes, and radio receivers in Europe and supplied a fifth of all radio tubes. As laboratory head, Zoltán Bay oversaw the improvement of high-intensity gas discharge lamps, fluorescent lamps, radio tubes, radio receiver circuitry, and decimeter radio wave techniques.25
Although Hungary sought to stay out of the war through diplomatic maneuvering, the threat of a German invasion remained real. In the fall of 1942, the Hungarian Minister of Defense asked Bay to organize an early-warning system. He achieved that goal, though the Germans occupied Hungary anyway. In March 1944, Bay recommended using the radar for scientific experimentation, including the detection of radar waves bounced off the Moon. The scientific interest in the experiment arose from the opportunity to test the theoretical notion that short wavelength radio waves could pass through the ionosphere without considerable absorption or reflection. Bay's calculations, however, showed that the equipment would be incapable of detecting the signals, since they would be significantly below the receiver's noise level.
The critical difference between the American and Hungarian apparatus was frequency stability, which DeWitt achieved through crystal control in both the transmitter and receiver. Without frequency stability, Bay had to find a means of accommodating the frequency drifts of the transmitter and receiver and the resulting inferior signal-to-noise ratio. He chose to boost the signal-to-noise ratio. His solution was both ingenious and far-reaching in its impact.
Bay devised a process he called cumulation, which is known today as integration. His integrating device consisted of ten coulometers, in which electric currents broke down a watery solution and released hydrogen gas. The amount of gas released was directly proportional to the quantity of electric current. The coulometers were connected to the output of the radar receiver through a rotating switch. The radar echoes were expected to return from the Moon in less than three seconds, so the rotating switch made a sweep of the ten coulometers every three seconds. The release of hydrogen gas left a record of both the echo signal and the receiver noise. As the number of signal echoes and sweeps of the coulometers added up, the signal-to-noise ratio improved. By increasing the total number of signal echoes, Bay believed that any signal could be raised above noise level and made observable, regardless of its amplitude and the value of the signal-to-noise ratio.26 Because the signal echoes have a more-or-less fixed structure, and the noise varies from pulse to pulse, echoes add up faster than noise.
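Bay's cumulation can be illustrated with a toy numerical experiment: a weak echo that always lands in the same time bin is buried in random noise, and summing many sweeps makes it stand out because the echo total grows in proportion to the number of sweeps while the noise total grows only as its square root. The Python sketch below uses made-up signal and noise levels purely for illustration; it is not a model of the coulometer apparatus itself.

```python
import random

random.seed(1)

BINS = 10          # analogue of the ten coulometers; one sweep visits every bin once
ECHO_BIN = 3       # the bin in which the weak echo always lands
ECHO_LEVEL = 0.2   # echo amplitude, well below the unit noise level (made-up numbers)

def accumulate(n_sweeps):
    """Sum n_sweeps noisy sweeps; the echo repeats in the same bin on every sweep."""
    totals = [0.0] * BINS
    for _ in range(n_sweeps):
        for b in range(BINS):
            totals[b] += random.gauss(0.0, 1.0) + (ECHO_LEVEL if b == ECHO_BIN else 0.0)
    return totals

for n in (1, 100, 10_000):
    totals = accumulate(n)
    others = [t for b, t in enumerate(totals) if b != ECHO_BIN]
    print(f"{n} sweeps: echo bin total = {totals[ECHO_BIN]:.1f}, "
          f"average other bin = {sum(others) / len(others):.1f}")
# The echo bin grows roughly as 0.2 * n, while the other bins wander around zero
# with a spread of about sqrt(n), so the echo climbs out of the noise as sweeps add up.
```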
Despite the conceptual breakthrough of the coulometer integrator, the construction and testing of the apparatus remained to be carried out. The menace of air raids drove the Tungsram research laboratory into the countryside in the fall of 1944. The subsequent siege of Budapest twice interrupted the work of Bay and his team until March 1945. The Ministry of Defense furnished Bay with war surplus parts for a 2.5-meter (120-MHz) radar manufactured by the Standard Electrical Co., a Hungarian subsidiary of IT&T. Work was again interrupted when the laboratory was dismantled and all equipment, including that for the lunar radar experiment, was carried off to the Soviet Union. For a third time, construction of entirely new equipment started in the workshops of the Tungsram Research Laboratory, beginning August 1945 and ending January 1946.
Electrical disturbances in the Tungsram plant were so great that measurements and tuning had to be done in the late afternoon or at night. The experiments were carried out on 6 February and 8 May 1946 at night by a pair of researchers. Without the handicap of operating in a war zone, Bay probably would have beaten the Signal Corps to the Moon, although he could not have been aware of DeWitt's experiment. More importantly, though, he invented the technique of long-time integration generally used in radar astronomy. As the American radio astronomers Alex G. Smith and Thomas D. Carr wrote some years later: "The additional tremendous increase in sensitivity necessary to obtain radar echoes from Venus has been attained largely through the use of long-time integration techniques for detecting periodic signals that are far below the background noise level. The unique method devised by Bay in his pioneer lunar radar investigations is an example of such a technique."27
Both Zoltán Bay and John DeWitt had fired shots heard round the world, but there was no revolution, although others either proposed or attempted lunar radar experiments in the years immediately following World War II. Each man engaged in other projects shortly after completing his experiment. Bay left Hungary for the United States, where he taught at George Washington University and worked for the National Bureau of Standards, while DeWitt re-entered radio broadcasting and pursued his interest in astronomy.28
As an ongoing scientific activity, radar astronomy did not begin with the spectacular and singular experiments of DeWitt and Bay, but with an interest in meteors shared by researchers in Britain, Canada, and the United States. Big Science, that is, ionospheric physics and secure military communications, largely motivated that research. Moreover, just as the availability of captured V-2 parts made possible rocket-based ionospheric research after the war,29 so war-surplus radars facilitated the emergence of radar astronomy. Like the exploration of the ionosphere with rockets, radar astronomy was driven by the availability of technology.
Radar meteor studies, like much of radar history, grew out of ionospheric research. In the 1930s, ionospheric researchers became interested in meteors when it was hypothesized that the trail of electrons and ions left behind by falling meteors caused fluctuations in the density of the ionosphere.30 Edward Appleton and others with the Radio Research Board of the British Department of Scientific and Industrial Research, the same organization with which Watson-Watt had been associated, used war-surplus radar furnished by the Air Ministry to study meteors immediately after World War II. They concluded that meteors caused abnormal bursts of ionization as they passed through the ionosphere.31
During the war, the military had investigated meteor trails with radar. When the Germans started bombarding London with V2 rockets, the Army's gun-laying radars were hastily pressed into service to detect the radar reflections from the rockets during their flight in order to give some warning of their arrival. In many cases alarms were sounded, but no rockets were aloft. James S. Hey, a physicist with the Operational Research Group, was charged with investigating these mistaken sightings. He believed that the false echoes probably originated in the ionosphere and might be associated with meteors.
Hey began studying the impact of meteors on the ionosphere in October 1944, using Army radar equipment at several locations until the end of the war. The Operational Research Group, Hey, G. S. Stewart (electrical engineer), S. J. Parsons (electrical and mechanical engineer), and J. W. Phillips (mathematician), found a correlation between visual sightings and radar echoes during the Giacobinid meteor shower of October 1946. Moreover, by using an improved photographic technique that better captured the echoes on the radar screen, they were able to determine the velocity of the meteors.
Neither Hey nor Appleton pursued their radar investigations of meteors. During the war, Hey had detected radio emissions from the Sun and the first discrete source of radio emission outside the solar system in the direction of Cygnus. He left the Operational Research Group for the Royal Radar Establishment at Malvern, where he and his colleagues carried on research in radio astronomy. Appleton, by 1946 a Nobel Laureate and Secretary of the Department of Scientific and Industrial Research, also became thoroughly involved in the development of radio astronomy and became a member of the Radio Astronomy Committee of the Royal Astronomical Society in 1949.32
Instead, radar astronomy gained a foothold in Britain at the University of Manchester under A. C. (later Sir) Bernard Lovell, director of the University's Jodrell Bank Experimental Station. During the war, Lovell had been one of many scientists working on microwave radar.33 His superior, the head of the Physics Department, was Patrick M. S. Blackett, a member of the Committee for the Scientific Survey of Air Defence that approved Watson-Watt's radar memorandum. With the help of Hey and Parsons, Lovell borrowed some Army radar equipment. Finding too much interference in Manchester, he moved to the University's botanical research gardens, which became the Jodrell Bank Experimental Station. Lovell equipped the station with complete war-surplus radar systems, such as a 4.2-meter gun-laying radar and a mobile Park Royal radar. He purchased at rock-bottom prices or borrowed the radars from the Air Ministry, Army, and Navy, which were discarding the equipment down mine shafts.
Originally, Lovell wanted to undertake research on cosmic rays, which had been Blackett's interest, too. One of the primary research objectives of the Jodrell Bank facility, as well as one of the fundamental reasons for its founding, was cosmic ray research. Indeed, the interest in cosmic ray research also lay behind the design and construction of the 76-meter (250-ft) Jodrell Bank telescope. The search for cosmic rays never succeeded, however; Blackett and Lovell had introduced a significant error into their initial calculations.
Fortuitously, though, in the course of looking for cosmic rays, Lovell came to realize that they were receiving echoes from meteor ionization trails, and his small group of Jodrell Bank investigators began to concentrate on this more fertile line of research. Nicolai Herlofson, a Norwegian meteorologist who had recently joined the Department of Physics, put Lovell in contact with the director of the Meteor Section of the British Astronomical Association, J. P. Manning Prentice, a lawyer and amateur astronomer with a passion for meteors. Also joining the Jodrell Bank team was John A. Clegg, a physics teacher whom Lovell had known during the war. Clegg was a doctoral candidate at the University of Manchester and an expert in antenna design. He remained at Jodrell Bank until 1951 and eventually landed a position teaching physics in Nigeria. Clegg converted an Army searchlight into a radar antenna for studying meteors.34
The small group of professional and amateur scientists began radar observations of the Perseid meteor showers in late July and August 1946. When Prentice spotted a meteor, he shouted. His sightings usually, though not always, correlated with an echo on the radar screen. Lovell thought that the radar echoes that did not correlate with Prentice's sightings might have been ionization trails created by cosmic ray showers. He did not believe, initially, that the radar might be detecting meteors too small to be seen by the human eye.
The next opportunity for a radar study of meteors came on the night of 9 October 1946, when the Earth crossed the orbit of the Giacobini-Zinner comet. Astronomers anticipated a spectacular meteor shower. A motion picture camera captured the radar echoes on film. The shower peaked around 3 A.M.; a radar echo rate of nearly a thousand meteors per hour was recorded. Lovell recalled that "the spectacle was memorable. It was like a great array of rockets coming towards one."35
The dramatic correlation of the echo rate with the meteors visible in the sky finally convinced Lovell and everyone else that the radar echoes came from meteor ionization trails, although it was equally obvious that many peculiarities needed to be investigated. The Jodrell Bank researchers learned that the best results were obtained when the aerial was positioned at a right angle to the radiant, the point in the sky from which meteor showers appear to emanate. When the aerial was pointed at the radiant, the echoes on the cathode-ray tube disappeared almost completely.36
Next joining the Jodrell Bank meteor group, in December 1946, was a doctoral student from New Zealand, Clifton D. Ellyett, followed in January 1947 by a Cambridge graduate, John G. Davies. Nicolai Herlofson developed a model of meteor trail ionization that Davies and Ellyett used to calculate meteor velocities based on the diffraction pattern produced during the formation of meteor trails. Clegg devised a radar technique for determining their radiant.37
At this point, the Jodrell Bank investigators had powerful radar techniques for studying meteors that were unavailable elsewhere, particularly the ability to detect and study previously unknown and unobservable daytime meteor showers. Lovell and his colleagues now became aware of the dispute over the nature of meteors and decided to attempt its resolution with these techniques.38
Astronomers specializing in meteors were concerned with the nature of sporadic meteors. One type of meteor enters the atmosphere from what appears to be a single point, the radiant. Most meteors, however, are not part of a shower, but appear to arrive irregularly from all directions and are called sporadic meteors. Most astronomers believed that sporadic meteors came from interstellar space; others argued that they were part of the solar system.
The debate could be resolved by determining the paths of sporadic meteors. If they followed parabolic or elliptical paths, they orbited the Sun; if their orbit were hyperbolic, they had an interstellar origin. The paths of sporadic meteors could be determined by an accurate measurement of both their velocities and radiants, but optical means were insufficiently precise to give unambiguous results. Fred L. Whipple, future director of the Harvard College Observatory, a leading center of United States meteor research, attempted state-of-the-art optical studies of meteors with the Super Schmidt camera, but the first one was not operational until May 1951, at Las Cruces, New Mexico.39
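In orbital terms, the test reduces to comparing the meteor's heliocentric speed with the Sun's escape speed at that distance: faster means a hyperbolic, interstellar path, slower means a bound elliptical orbit. A minimal sketch of that comparison, assuming for simplicity that the speed is evaluated at the Earth's distance of 1 AU and using standard values for the Sun's gravitational parameter:

```python
import math

GM_SUN = 1.327e20  # m^3/s^2, standard gravitational parameter of the Sun
AU = 1.496e11      # m, mean Earth-Sun distance

def orbit_type(heliocentric_speed_km_s, r_m=AU):
    """Classify a heliocentric orbit from the speed at distance r (vis-viva criterion)."""
    v = heliocentric_speed_km_s * 1000.0
    v_escape = math.sqrt(2 * GM_SUN / r_m)  # parabolic (escape) speed at r
    if v > v_escape:
        return "hyperbolic (interstellar origin)"
    return "elliptical (bound to the solar system)"

print(round(math.sqrt(2 * GM_SUN / AU) / 1000, 1))  # ~42.1 km/s escape speed at 1 AU
print(orbit_type(35.0))   # typical bound meteoroid -> elliptical
print(orbit_type(50.0))   # would imply an interstellar origin
```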
Radar astronomers, then, attempted to accomplish what optical methods had failed to achieve. Such has been the pattern of radar astronomy to the present. Between 1948 and 1950, Lovell, Davies, and Mary Almond, a doctoral student, undertook a long series of sporadic meteor velocity measurements. They found no evidence for a significant hyperbolic velocity component; that is, there was no evidence for sporadic meteors coming from interstellar space. They then extended their work to fainter and smaller meteors with similar results.
The Jodrell Bank radar meteor studies determined unambiguously that meteors form part of the solar system. As Whipple declared in 1955, "We may now accept as proven the fact that bodies moving in hyperbolic orbits about the sun play no important role in producing meteoric phenomena brighter than about the 8th effective magnitude."40 Astronomers describe the brightness of a body in terms of magnitude; the larger the magnitude, the fainter the body.
The highly convincing evidence of the Jodrell Bank scientists was corroborated by Canadian radar research carried out by researchers of the Radio and Electrical Engineering Division of the National Research Council under Donald W. R. McKinley. McKinley had joined the Council's Radio Section (later Branch) before World War II and, like Lovell, had participated actively in wartime radar work.
McKinley conducted his meteor research with radars built around Ottawa in 1947 and 1948 as part of various National Research Council laboratories, such as the Flight Research Center at Arnprior Airport. Earle L. R. Webb, Radio and Electrical Engineering Division of the National Research Council, supervised the design, construction, and operation of the radar equipment. From as early as the summer of 1947, the Canadian radar studies were undertaken jointly with Peter M. Millman of the Dominion Observatory. They coordinated spectrographic, photographic, radar, and visual observations. The National Research Council investigators employed the Jodrell Bank technique to determine meteor velocities, a benefit of following in the footsteps of the British.41
Their first radar observations took place during the Perseid shower of August 1947, as the first radar station reached completion. Later studies collected data from the Geminid shower of December 1947 and the Lyrid shower of April 1948, with more radar stations brought into play as they became available. Following the success of Jodrell Bank, McKinley's group initiated their own study of sporadic meteors. By 1951, with data on 10,933 sporadic meteors, McKinley's group reached the same conclusion as their British colleagues: meteors were part of the solar system. Soon, radar techniques became an integral part of Canadian meteor research with the establishment in 1957 of the National Research Council Springhill Meteor Observatory outside Ottawa. The Observatory concentrated on scientific meteor research with radar, visual, photographic, and spectroscopic methods.42
These meteor studies at Jodrell Bank and the National Research Council, and only at those institutions, arose from the union of radar and astronomy; they were the beginnings of radar astronomy. Radar studies of meteors were not limited to Jodrell Bank and the National Research Council, however. With support from the National Bureau of Standards, in 1957 Harvard College Observatory initiated a radar meteor project under the direction of Fred Whipple. Furthermore, radar continues today as an integral and vital part of worldwide meteor research. Its forte is the ability to determine orbits better than any other technique. In the last five years, a number of recently built radars have studied meteors in Britain (MST Radar, Aberystwyth, Wales), New Zealand (AMOR, Meteor Orbit Radar, Christchurch), and Japan (MU Radar, Shigaraki), not to mention earlier work in Czechoslovakia and Sweden.43
Unlike the Jodrell Bank and National Research Council cases, the radar meteor studies started in the United States in the early 1950s were driven by civilian scientists doing ionospheric and communications research and by the military's desire for jam-proof, point-to-point secure communications. While various military laboratories undertook their own research programs, most of the civilian U.S. radar meteor research was carried out at Stanford University and the National Bureau of Standards, where investigators fruitfully cross-fertilized ionospheric and military communications research. The Stanford case is worth examining not only for its later connections to radar astronomy, but also for its pioneering radar study of the Sun that arose out of an interest in ionospheric and radio propagation research.
In contrast to the Stanford work, many radar meteor experiments carried out in the United States in the 1940s were one-time events. As early as August and November 1944, for instance, workers in the Federal Communications Commission Engineering Department associated visual observations of meteors with radio bursts. In January 1946, Oliver Perry Ferrell of the Signal Corps reported using a Signal Corps SCR-270B radar to detect meteor ionization trails.44 The major radar meteor event in the United States and elsewhere, however, was the spectacular meteor shower associated with the Giacobini-Zinner comet.
On the night of 9 October 1946, 21 Army radars were aimed toward the sky in order to observe any unusual phenomena. The Signal Corps organized the experiment, which fit nicely with their mission of developing missile detection and ranging capabilities. The equipment was operated by volunteer crews of the Army ground forces, the Army Air Forces, and the Signal Corps located across the country in Idaho, New Mexico, Texas, and New Jersey. For mainly meteorological reasons, only the Signal Corps SCR-270 radar successfully detected meteor ionization trails. No attempt was made to correlate visual observations and radar echoes. A Princeton University undergraduate, Francis B. Shaffer, who had received radar training in the Navy, analyzed photographs of the radar screen echoes at the Signal Corps laboratory in Belmar, New Jersey.
This was the first attempt to utilize microwave radars to detect astronomical objects. The equipment operated at 1,200 MHz (25 cm), 3,000 MHz (10 cm), and 10,000 MHz (3 cm), frequencies in the L, S, and X radar bands that radar astronomy later used. "On the basis of this night's experiments," the Signal Corps experimenters decided, "we cannot conclude that microwave radars do not detect meteor-formed ion clouds."45
In contrast to the Signal Corps experiment, radar meteor studies formed part of ongoing research at the National Bureau of Standards. Organized from the Bureau's Radio Section in May 1946 and located at Sterling, Virginia, the Central Radio Propagation Laboratory (CRPL) division had three laboratories, one of which concerned itself exclusively with ionospheric research and radio propagation and was especially interested in the impact of meteors on the ionosphere. In October 1946, Victor C. Pineo and others associated with the CRPL used a borrowed SCR-270-D Signal Corps radar to observe the Giacobinid meteor shower. Over the next five years, Pineo continued research on the effects of meteors on the ionosphere, using a standard ionospheric research instrument called an ionosonde and publishing his results in Science.
Pineo's interest was in ionospheric physics, not astronomy. Underwriting his research at the Ionospheric Research Section of the National Bureau of Standards was the Air Force Cambridge Research Center (known later as the Cambridge Research Laboratories and today as Phillips Laboratory). His meteor work did not contribute to knowledge about the origin of meteors, as such work had in Britain and Canada, but it supported efforts to create secure military communications using meteor ionization trails.46 Also, it related to similar research being carried out concurrently at Stanford University.
The 1946 CRPL experiment, in fact, had been suggested by Robert A. Helliwell of the Stanford Radio Propagation Laboratory (SRPL). Frederick E. Terman, who had headed the Harvard Radio Research Laboratory and its radar countermeasures research during the war, "virtually organized radio and electronic engineering on the West Coast" as Stanford Dean of Engineering, according to historian C. Stewart Gillmor. Terman negotiated a contract with the three military services for the funding of a broad range of research, including the SRPL's long-standing ionospheric research program.47
Helliwell, whose career was built on ionospheric research, was joined at the SRPL by Oswald G. Villard, Jr. Villard had earned his engineering degree during the war for the design of an ionosphere sounder. As an amateur radio operator in Cambridge, Massachusetts, he had noted the interference that meteor ionization caused at shortwave frequencies, bursts known as Doppler whistles.48
In October 1946, during the Giacobinid meteor shower, Helliwell, Villard, Laurence A. Manning, and W. E. Evans, Jr., detected meteor ion trails by listening for Doppler whistles with radios operating at 15 MHz (20 meters) and 29 MHz (10 meters). Manning then developed a method of measuring meteor velocities using the Doppler frequency shift of a continuous-wave signal reflected from the ionization trail. Manning, Villard, and Allen M. Peterson then applied Manning's technique to a continuous-wave radio study of the Perseid meteor shower in August 1948. The initial Stanford technique was significantly different from that developed at Jodrell Bank; it relied on continuous-wave radio, rather than pulsed radar, echoes.49
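The relation underlying Manning's continuous-wave technique can be illustrated with a short calculation. The sketch below (Python) is a simplification with assumed numbers, not a reconstruction of the Stanford data reduction: it uses one of the receiver frequencies mentioned above and a hypothetical beat frequency to show how a measured Doppler "whistle" translates into a meteor speed.

```python
# Illustrative sketch only: the basic Doppler relation behind the CW method.
# A meteoroid approaching the reflection point with line-of-sight speed v
# shifts the reflected carrier by f_d = 2 * v / wavelength; beating the echo
# against the direct signal produces an audible whistle whose frequency gives
# the speed. The beat frequency below is an assumed, hypothetical value.
C = 3.0e8                       # speed of light, m/s
CARRIER = 29e6                  # one of the Stanford frequencies, Hz (10 m band)
wavelength = C / CARRIER        # about 10.3 m
beat_frequency = 6.0e3          # assumed whistle frequency, Hz
velocity = beat_frequency * wavelength / 2.0
print(f"inferred line-of-sight speed: {velocity / 1000:.0f} km/s")  # ~31 km/s
```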
One of those conducting meteor studies at Stanford was Von R. Eshleman, a graduate student in electrical engineering who worked under both Manning and Villard. While serving in the Navy during World War II, Eshleman had studied, then taught, radar at the Navy's radar electronics school in Washington, DC. In 1946, while returning from the war on the U.S.S. Missouri, Eshleman unsuccessfully attempted to bounce radar waves off the Moon using the ship's radar. Support for his graduate research at Stanford came through contracts between the University and both the Office of Naval Research and the Air Force.
Eshleman's dissertation considered the theory of detecting meteor ionization trails and its application in actual experiments. Unlike the British and Canadian meteor studies, the primary research interest of Eshleman, Manning, Villard, and the other Stanford investigators was information about the winds and turbulence in the upper atmosphere. Their investigations of meteor velocities, the length of ionized meteor trails, and the fading and polarization of meteor echoes were part of that larger research interest, while Eshleman's dissertation was an integral part of the meteor research program.
Eshleman also considered the use of meteor ionization trails for secure military communications. His dissertation did not explicitly state that application, which he took up after completing the thesis. The Air Force supported the Stanford meteor research mainly to use meteor ionization trails for secure, point-to-point communications. The Stanford meteor research thus served a variety of scientific and military purposes simultaneously.50
The meteor research carried out at Stanford had nontrivial consequences. Eshleman's dissertation has continued to provide the theoretical foundation of modern meteor burst communications, a communication mode that promises to function even after a nuclear holocaust has rendered useless all normal wireless communications. The pioneering work at Stanford, the National Bureau of Standards, and the Air Force Cambridge Research Laboratories received new attention in the 1980s, when the Strategic Defense Initiative ("Star Wars") revitalized interest in using meteor ionization trails for classified communications. Non-military applications of meteor burst communications also have arisen in recent years.51
Early meteor burst communications research was not limited to Stanford and the National Bureau of Standards. American military funding of such research extended beyond American shores to Britain. Historians of Jodrell Bank radio astronomy and meteor radar research stated that radio astronomy had surpassed meteor studies at the observatory by 1955. However, that meteor work persisted until 1964 through a contract with the U.S. Air Force, though it served as a cover for classified military research.52
Auroras provided additional radar targets in the 1950s. A major initiator of radar auroral studies was Jodrell Bank. As early as August 1947, while conducting meteor research, the Jodrell Bank scientists Lovell, Clegg, and Ellyett received echoes from an auroral display. Arnold Aspinall and G. S. Hawkins then continued the radar auroral studies at Jodrell Bank in collaboration with W. B. Housman, Director of the Aurora Section of the British Astronomical Association, and the aurora observers of that Section. In Canada, McKinley and Millman also observed an aurora during their meteor research in April 1948.53
The problem with bouncing radar waves off an aurora was determining the reflecting point. Researchers in the University of Saskatchewan Physics Department (B. W. Currie, P. A. Forsyth, and F. E. Vawter) initiated a systematic study of auroral radar reflections in 1948, with funding from the Defense Research Board of Canada. Radar equipment was lent by the U.S. Air Force Cambridge Research Center and modified by the Radio and Electrical Engineering Division of the National Research Council. Forsyth had completed a dissertation on auroras at McGill University and was an employee of the Defense Research Board's Telecommunications Establishment on loan to the University of Saskatchewan for the project. The Saskatchewan researchers discovered that the echoes bounced off small, intensely ionized regions in the aurora.54
Other aurora researchers, especially in Sweden and Norway, took up radar studies. In Sweden, Götha Hellgren and Johan Meos of the Chalmers University of Technology Research Laboratory of Electronics in Gothenburg decided to conduct radar studies of auroras as part of their ionospheric research program. Beginning in May 1951, the Radio Wave Propagation Laboratory of the Kiruna Geophysical Observatory undertook round-the-clock observations of auroras with a 30.3-MHz (10-meter) radar. In Norway, Leiv Harang, who had observed radar echoes from an aurora as early as 1940, and B. Landmark observed auroras with radars lent by the Norwegian Defense Research Establishment and installed at Oslo (Kjeller) and Tromsö, where a permanent center for radar investigation of auroras was created later.55
These and subsequent radar investigations changed the way scientists studied auroras, which had been almost entirely by visual means up to about 1950. Permanent auroral observatories located at high latitudes, such as those at Oslo and Tromsö in Norway, at Kiruna in Sweden, and at Saskatoon in Saskatchewan, integrated radar into a spectrum of research instruments that included spectroscopy, photography, balloons, and sounding rockets. The International Geophysical Year, 1957-1958, was appropriately timed to further radar auroral research; it coincided with extremely high sunspot and auroral activity, such as the displays visible from Mexico in September 1957 and the "Great Red Aurora" of 10 February 1958. Among those participating in the radar aurora and meteor studies associated with the International Geophysical Year activities were three Jodrell Bank students and staff who joined the Royal Society expedition to Halley Bay, Antarctica.56
The auroral and meteor radar studies carried out in the wake of the lunar radar experiments of DeWitt and Bay were, in essence, ionospheric studies. While the causes of auroras and meteor ionization trails arise outside the Earth's atmosphere, the phenomena themselves are essentially ionospheric. At Jodrell Bank, meteor and auroral studies provided the initial impetus, but certainly not the sustaining force, for the creation of an ongoing radar astronomy program. That sustaining force came from lunar studies. However, like so much of early radar astronomy, those lunar studies were never far from ionospheric research. Indeed, the trailblazing efforts of DeWitt and Bay opened up new vistas of ionospheric and communications research using radar echoes from the Moon.
Historically, scientists had been limited to the underside and lower portion of the ionosphere. The discovery of "cosmic noise" by Bell Telephone researcher Karl Jansky in 1932 suggested that higher frequencies could penetrate the ionosphere. The experiments of DeWitt and Bay suggested radar as a means of probing beyond the lower regions of the ionosphere. DeWitt, moreover, had observed unexpected fluctuations in signal strength that lasted several minutes, which he attributed to anomalous ionospheric refraction.57 His observations invited further investigation of the question.
The search for a better explanation of those fluctuations was taken up by a group of ionosphericists in the Division of Radiophysics of the Australian Council for Scientific and Industrial Research: Frank J. Kerr, C. Alex Shain, and Charles S. Higgins. In 1946, Kerr and Shain explored the possibility of obtaining radar echoes from meteors, following the example of Lovell in Britain, but Project Diana turned their attention toward the Moon. In order to study the fluctuations in signal strength that DeWitt had observed, Kerr, Shain, and Higgins put together a rather singular experiment.
For a transmitter, they used the 20-MHz (15-meter) Radio Australia station, located in Shepparton, Victoria, when it was not in use for regular programming to the United States and Canada. The receiver was located at the Radiophysics Laboratory, Hornsby, New South Wales, a distance of 600 km from the transmitter. Use of this unique system was limited to days when three conditions could be met all at the same time: the Moon was passing through the station's antenna beams; the transmitter was available; and atmospheric conditions were favorable. In short, the system was workable about twenty days a year.58
Kerr, Shain, and Higgins obtained lunar echoes on thirteen out of fifteen attempts. The amplitude of the echoes fluctuated considerably over the entire run of tests as well as within a single test. Researchers at IT&T's Federal Telecommunications Laboratories in New York City accounted for the fluctuations observed by DeWitt by positing the existence of smooth spots that served as "bounce points" for the reflected energy. Another possibility they imagined was the existence of an ionosphere around the Moon.59 The Australians disagreed with the explanations offered by DeWitt and the IT&T researchers, but they were initially cautious: "It cannot yet be said whether the reductions in intensity and the long-period variations are due to ionospheric, lunar or inter-planetary causes."60
During a visit to the United States in 1948, J. L. Pawsey, a radio astronomy enthusiast also with the Council for Scientific and Industrial Research's Division of Radiophysics, arranged a cooperative experiment with the Americans. Several U.S. organizations with an interest in radio (the National Bureau of Standards CRPL, the Radio Corporation of America at Riverhead, New York, and the University of Illinois at Urbana) attempted to receive Moon echoes simultaneously from Australia, beginning 30 July 1948. Ross Bateman (CRPL) acted as American coordinator. The experiment was not a great success. The times of the tests (limited by transmitter availability) were all in the middle of the day at the receiving points. Echoes were received in America on two occasions, 1 August and 28 October, and only for short periods in each case.
Meanwhile, Kerr and Shain continued to study lunar echo fading with the Radio Australia transmitter. Based on thirty experiments (with echoes received in twenty-four of them) conducted over a year, they now distinguished rapid and slow fading. Kerr and Shain proposed that each type of fading had a different cause. Rapid fading resulted from the Moon's libration, a slow wobbling motion of the Moon. Irregular movement in the ionosphere, they originally suggested, caused the slower fading.61 Everyone agreed that the rapid fading of lunar radar echoes originated in the lunar libration, but the cause of slow fading was not so obvious.
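An order-of-magnitude estimate shows why libration plausibly accounts for fading on a timescale of seconds. The sketch below (Python) uses the 20-MHz Radio Australia frequency and an assumed, typical apparent libration rate; the numbers are illustrative and are not taken from Kerr and Shain's papers.

```python
# Order-of-magnitude sketch with assumed values. Apparent libration makes one
# lunar limb approach and the other recede, so echoes from different parts of
# the disk return with slightly different Doppler shifts and beat together.
C = 3.0e8                  # speed of light, m/s
R_MOON = 1.738e6           # lunar radius, m
LIBRATION_RATE = 1.0e-6    # assumed apparent libration rate, rad/s
FREQ = 20e6                # Radio Australia transmitter, Hz
wavelength = C / FREQ                              # 15 m
v_limb = LIBRATION_RATE * R_MOON                   # limb line-of-sight speed, m/s
doppler_spread = 2 * (2 * v_limb) / wavelength     # two-way, limb to limb, Hz
print(f"Doppler spread ~ {doppler_spread:.2f} Hz, "
      f"fading timescale ~ {1 / doppler_spread:.0f} s")
```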
The problem of slow fading was taken up at Jodrell Bank by William A. S. Murray and J. K. Hargreaves, who sought an explanation in the ionosphere. Although Lovell had proposed undertaking lunar radar observations as early as 1946, the first worthwhile results were not obtained until the fall of 1953. Hargreaves and Murray photographed and analyzed some 50,000 lunar radar echoes at the Jodrell Bank radar telescope in October and November 1953 to determine the origin of slow fading.
With rare exceptions, nighttime runs showed a steady signal amplitude, while daytime runs, especially those within a few hours of sunrise, were marked by severe fading. The high correlation between fading and solar activity strongly suggested an ionospheric origin. However, Hargreaves and Murray believed that irregularities in the ionosphere could not account for slow fading over periods lasting up to an hour. They suggested instead that slow fading resulted from Faraday rotation, in which the plane of polarization of the radio waves rotated as they passed through the ionosphere in the presence of the Earth's magnetic field.
Hargreaves and Murray carried out a series of experiments to test their hypothesis in March 1954. The transmitter had a horizontally polarized antenna, while the primary feed of the receiving antenna consisted of two dipoles mounted at right angles. They switched the receiver at short intervals between the vertical and horizontal feeds so that echoes would be received in both planes of polarization, a technique that is a standard planetary radar practice today.
As the plane of polarization of the radar waves rotated in the ionosphere, stronger echo amplitudes were received by the vertical feed than by the horizontal feed. Had no Faraday rotation taken place, the transmitted and received planes of polarization would both have been horizontal. Instead, Faraday rotation in the ionosphere had turned the plane of polarization so that the vertical feed received more echo power than the horizontal feed. The results confirmed that slow fading was caused, at least in part, by a change in the plane of polarization of the received lunar echo.62
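The logic of the crossed-feed test can be made concrete with a toy calculation. The sketch below (Python) assumes an ideal, purely linear polarization and simply shows how a rotated plane of polarization redistributes echo power between the two feeds; it is illustrative only.

```python
# Illustrative sketch: power division between the horizontal (transmitted)
# feed and the orthogonal vertical feed for a linearly polarized echo whose
# plane has been rotated by theta. With theta = 0 all power appears in the
# horizontal feed; as the ionosphere rotates the plane, power shifts to the
# vertical feed, and a slowly drifting theta appears as slow fading.
import math

def feed_powers(rotation_deg, echo_power=1.0):
    theta = math.radians(rotation_deg)
    return (echo_power * math.cos(theta) ** 2,   # horizontal feed
            echo_power * math.sin(theta) ** 2)   # vertical feed

for rotation in (0, 30, 60, 90):
    horizontal, vertical = feed_powers(rotation)
    print(f"rotation {rotation:3d} deg: H = {horizontal:.2f}, V = {vertical:.2f}")
```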
Murray and Hargreaves soon took positions elsewhere, yet Jodrell Bank continued to feature radar astronomy through the persistence of Bernard Lovell. Lovell became entangled in administrative affairs and the construction of a giant radio telescope, while John V. Evans, a research student of Lovell, took over the radar astronomy program. Evans had a B.Sc. in physics and had had an interest in electronics engineering since childhood. He chose the University of Manchester Physics Department for his doctoral degree because the department, through Lovell, oversaw the Jodrell Bank facility. When Evans arrived there on his bicycle in the summer of 1954, the facility's heavy involvement in radio and radar astronomy assured him that his interest in electronics engineering would be sated.
With the approval and full support of Lovell, Evans renewed the studies of lunar radar echoes, but first he rebuilt the lunar radar equipment. It was a "poor instrument," Evans later recalled, "and barely got echoes from the Moon." After he increased the power output from 1 to 10 kilowatts and improved the sensitivity of the receiver by rebuilding the front end, Evans took the lunar studies in a new direction. Unlike the majority of Jodrell Bank research, Evans's lunar work was underwritten through a contract with the U.S. Air Force, which was interested in using the Moon as part of a long-distance communications system.
With his improved radar apparatus, Evans discovered that the Moon overall was a relatively smooth reflector of radar waves at the frequency he used (120 MHz; 2.5 meters). Later, from the way that the Moon appeared to scatter back radar waves, Evans speculated that the lunar surface was covered with small, round objects such as rocks and stones. Hargreaves proposed that radar observations at shorter wavelengths should be able to give interesting statistical information about the features of the lunar surface.63 That idea was the starting point for the creation of planetary radar techniques that would reveal the surface characteristics of planets and other moons.
Experimenters prior to Evans had assumed that the Moon reflected radar waves from the whole of its illuminated surface, like light waves. They debated whether the power returned to the Earth was reflected from the entire visible disk or from a smaller region. The question was important to radar astronomers at Jodrell Bank as well as to military and civilian researchers developing Moon-relay communications.
In March 1957, Evans obtained a series of lunar radar echoes. He photographed both the transmitted pulses and their echoes so that he could make a direct comparison between the two. Evans also made range measurements of the echoes at the same time. In each case, the range of the observed echo was consistent with that of the front edge of the Moon. The echoes came not from the entire visible disk but from a smaller portion of the lunar surface, that closest to the Earth and known as the subradar point.64 This discovery became fundamental to radar astronomy research.
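The range argument can be made concrete with conventional figures for the Earth-Moon distance and the lunar radius; the values in the sketch below (Python) are standard approximations, not Evans's own measurements.

```python
# Back-of-the-envelope check of the range argument. An echo from the subradar
# point returns after roughly 2.56 s; an echo from the limb must travel an
# extra lunar radius out and back, so it arrives about 11.6 ms later. Evans's
# measured ranges matched the leading edge rather than a disk smeared over
# that interval.
C = 299_792.458         # speed of light, km/s
DISTANCE = 384_400.0    # mean Earth-Moon distance, km
R_MOON = 1_737.4        # lunar radius, km

round_trip_s = 2 * DISTANCE / C
limb_delay_ms = 2 * R_MOON / C * 1000
print(f"round trip: {round_trip_s:.2f} s, extra limb delay: {limb_delay_ms:.1f} ms")
```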
Because radar waves reflected off only the foremost edge of the Moon, Evans and John H. Thomson (a radio astronomer who had transferred from Cambridge in 1959) undertook a series of experiments on the use of the Moon as a passive communication relay. Although initial results were "not intelligible" because FM and AM broadcasts tended to fade, Lovell bounced Evans' "hello" off the Moon with a Jodrell Bank transmitter and receiver during his BBC Reith Lecture of 1958. Several years later, in collaboration with the Pye firm, a leading British manufacturer of electronic equipment headquartered in Cambridge, and with underwriting from the U.S. Air Force, a Pye transmitter at Jodrell Bank was used to send speech and music via the Moon to the Sagamore Hill Radio Astronomy Observatory of the Air Force Cambridge Research Center, at Hamilton, Massachusetts. The U.S. Air Force thus obtained a successful lunar bounce communication experiment at Jodrell Bank for a far smaller sum than that spent by the Naval Research Laboratory.65
The lunar communication studies at Jodrell Bank illustrate that astronomy was not behind all radar studies of the Moon. Much of the lunar radar work, especially in the United States, was performed to test long-distance communication systems in which the Moon would serve as a relay. Thus, the experiments of DeWitt and Bay may be said to have begun the era of satellite communications. Research on Moon-relay communications systems by both military and civilian laboratories eventually drew those institutions into the early organizational activities of radar astronomers. After all, both communication research and radar astronomy shared an interest in the behavior of radio waves at the lunar surface. Hence, a brief look at that research would be informative.
Before the advent of satellites, wireless communication over long distances was achieved by reflecting radio waves off the ionosphere. At higher transmission frequencies, however, the radio waves penetrated the ionosphere rather than being reflected back to Earth. Long-distance wireless communication at high frequencies had to depend on a network of relays, which were expensive and technically complex. Using the Moon as a relay appeared to be a low-cost alternative.66
Reacting to the successes of DeWitt and Bay, researchers at the IT&T Federal Telecommunications Laboratories, Inc., New York City, planned a lunar relay telecommunication system operating at UHF frequencies (around 50 MHz; 6 meters) to provide radio telephone communications between New York and Paris. If such a system could be made to work, it would provide IT&T with a means to compete with transatlantic cable carriers dominated by rival AT&T. What the Federal Telecommunications Laboratories had imagined, the Collins Radio Company, Cedar Rapids, Iowa, and the National Bureau of Standards CRPL, accomplished.
On 28 October and 8 November 1951, Peter G. Sulzer and G. Franklin Montgomery, CRPL, and Irvin H. Gerks, Collins Radio, sent a continuous-wave 418-MHz (72-cm) radio signal from Cedar Rapids to Sterling, Virginia, via the Moon. On 8 November, a slowly hand-keyed telegraph message was sent over the circuit several times. The message was the same sent by Samuel Morse over the first U.S. public telegraph line: "What hath God wrought?"67
Unbeknownst to the CRPL/Collins team, the first use of the Moon as a relay in a communication circuit had been achieved only a few days earlier by military researchers at the Naval Research Laboratory (NRL). The Navy was interested in satellite communications, and the Moon offered itself as a free (if distant and rough) satellite in the years before an artificial satellite could be launched. In order to undertake lunar communication studies, the NRL built what was then the world's largest parabolic antenna in the summer of 1951. The dish covered more than an acre (67 by 80 meters; 220 by 263 ft) and had been cut into the earth by road-building machinery at Stump Neck, Maryland. The one-megawatt transmitter operated at 198 MHz (1.5 meters). The NRL first used the Moon as a relay in a radio communication circuit on 21 October 1951. After sending the first voice transmission via the Moon on 24 July 1954, the NRL demonstrated transcontinental satellite teleprinter communication from Washington, DC, to San Diego, CA, at 301 MHz (1 meter) on 29 November 1955 and transoceanic satellite communication, from Washington, DC, to Wahiawa, Oahu, Hawaii, on 23 January 1956.68
Later in 1956, the NRL's Radio Astronomy Branch started a radar program under Benjamin S. Yaplee to determine the feasibility of bouncing microwaves off the Moon and to accurately measure both the Moon's radius and the distances to different reflecting areas during the lunar libration cycle. Aside from the scientific value of that research, the information would help the Navy to determine relative positions on the Earth's surface. The first NRL radar contact with the Moon at a microwave frequency took place at 2860 MHz (10 cm) and was accomplished with the Branch's 15-meter (50-ft) radio telescope.69
Although interest in bouncing radio and radar waves off the Moon drew military and civilian researchers to early radar astronomy conferences, lunar communication schemes failed to provide either a theoretical or a funding framework within which radar astronomy could develop. The rapidly growing field of ionospheric research, on the other hand, provided both theoretical and financial support for radar experiments on meteors and the Moon. Despite the remarkable variety of radar experiments carried out in the years following World War II, radar achieved a wider and more permanent place in ionospheric research (especially meteors and auroras) than in astronomy.
All that changed with the start of the U.S./U.S.S.R. Space Race and the announcement of the first planetary radar experiment in 1958. That experiment was made possible by the rivalries of the Cold War, which fostered a concentration of expertise and financial, personnel, and material resources that paralleled, and in many ways exceeded, that of World War II. The new Big Science of the Cold War and the Space Race, often indistinguishable from each other, gave rise to the radar astronomy of planets.
The Sputnik and Lunik missions were not just surprising demonstrations of Soviet achievements in science and technology. Those probes had been propelled off the Earth by ICBMs, and an ICBM capable of putting a dog in Earth-orbit or sending a probe to the Moon was equally capable of delivering a nuclear bomb from Moscow to New York City. Behind the Space Race lay the specter of the Cold War and World War III, or to paraphrase Clausewitz, the Space Race was the Cold War by other means. Just as the vulnerability of Britain to air attacks had led to the creation of the Chain Home radar warning network, the defenselessness of the United States against aircraft and ICBM attacks with nuclear bombs and warheads led to the creation of a network of defensive radars. The development of that network in turn provided the instrument with which planetary radar astronomy, driven by the availability of technology, would begin in the United States.
1. Guglielmo Marconi, "Radio Telegraphy," Proceedings of the Institute of Radio Engineers 10 (1922): 237.
2. Charles Süsskind, "Who Invented Radar?" Endeavour 9 (1985): 92-96; Henry E. Guerlac, "The Radio Background of Radar," Journal of the Franklin Institute 250 (1950): 284-308.
3. Swords, A Technical History of the Beginnings of Radar (London: Peter Peregrinus Press, 1986), pp. 270-271.
4. Guerlac, "Radio Background," p. 304.
5. Alfred Price, Instruments of Darkness: The History of Electronic Warfare, 2d. ed. (London: MacDonald and Jane's, 1977); Tony Devereux, Messenger Gods of Battle, Radio, Radar, Sonar: The Story of Electronics in War (Washington: Brassey's, 1991); David E. Fisher, A Race on the Edge of Time: Radar - the Decisive Weapon of World War II (New York: McGraw-Hill, 1988).
6. H. Montgomery Hyde, British Air Policy Between the Wars, 1918-1939 (London: Heinemann, 1976), p. 322. See also Malcolm Smith, British Air Strategy Between the Wars (Oxford: Clarendon Press, 1984).
7. Swords, p. 84; Edward G. Bowen, Radar Days (Bristol: Adam Hilger, 1987), pp. 4-5, 7 & 10; Robert Watson-Watt, The Pulse of Radar: The Autobiography of Sir Robert Watson-Watt (New York: Dial Press, 1959), pp. 29-38, 51, 69, 101, 109-110, 113; A. P. Rowe, One Story of Radar (Cambridge: Cambridge University Press, 1948), pp. 6-7; Reg Batt, The Radar Army: Winning the War of the Airwaves (London: Robert Hale, 1991), pp. 21-22. The Radio Research Board was under the Department of Scientific and Industrial Research, created in 1916.
Born Robert Alexander Watson Watt in 1892, he changed his surname to "Watson-Watt" when knighted in 1942. See the popularly-written biography of Watson-Watt, John Rowland, The Radar Man: The Story of Sir Robert Watson-Watt (London: Lutterworth Press, 1963), or Watson-Watt, Three Steps to Victory (London: Odhams Press Ltd., 1957). An account of Watson-Watt's research at Slough is given in Watson-Watt, John F. Herd, and L. H. Bainbridge-Bell, The Cathode Ray Tube in Radio Research (London: His Majesty's Stationery Office, 1933).
8. By "apparent height of the ionosphere," I mean what ionosphericists call virtual height. Since the ionosphere slows radio waves before being refracted back to Earth, the delay is not a true measure of height. The Tuve-Breit method preceded that of Watson-Watt and was a true send-receive technique, while that of Watson-Watt was a receive-only technique.
Tuve, "Early Days of Pulse Radio at the Carnegie Institution," Journal of Atmospheric and Terrestrial Physics 36 (1974): 2079-2083; Oswald G. Villard, Jr., "The Ionospheric Sounder and its Place in the History of Radio Science," Radio Science 11 (1976): 847-860; Guerlac, "Radio Background," pp. 284-308; David H. DeVorkin, Science With a Vengeance: How the Military Created the U.S. Space Sciences after World War II (New York: Springer-Verlag, 1992), pp. 12, 301 & 316; C. Stewart Gillmor, "Threshold to Space: Early Studies of the Ionosphere," in Paul A. Hanle and Von Del Chamberlain, eds., Space Science Comes of Age: Perspectives in the History of the Space Sciences (Washington: National Air and Space Museum, Smithsonian Institution, 1981), pp. 102-104; J. A. Ratcliffe, "Experimental Methods of Ionospheric Investigation, 1925-1955," Journal of Atmospheric and Terrestrial Physics 36 (1974): 2095-2103; Tuve and Breit, "Note on a Radio Method of Estimating the Height of the Conducting Layer," Terrestrial Magnetism and Atmospheric Electricity 30 (1925): 15-16; Breit and Tuve, "A Radio Method of Estimating the Height of the Conducting Layer," Nature 116 (1925): 357; and Breit and Tuve, "A Test of the Existence of the Conducting Layer," Physical Review 2d ser., vol. 28 (1926): 554-575; special issue of Journal of Atmospheric and Terrestrial Physics 36 (1974): 2069-2319, is devoted to the history of ionospheric research.
9. Watson-Watt, Three Steps, p. 83; Ronald William Clark, Tizard (London: Methuen, 1965), pp. 105-127.
10. Swords, pp. 84-85; Bowen, pp. 6, 21, 26 & 28; Batt, pp. 10, 21-22, 69 & 77; Rowe, pp. 8 & 76; R. Hanbury Brown, Boffin: A Personal Story of the Early Days of Radar, Radio Astronomy, and Quantum Optics (Bristol: Adam Hilger, 1991), pp. 7-8; P. S. Hall and R. G. Lee, "Introduction to Radar," in P. S. Hall, T. K. Garland-Collins, R. S. Picton, and R. G. Lee, eds., Radar (London: Brassey's, 1991), pp. 6-7; Watson-Watt, Pulse, pp. 55-59, 64-65, 75, 113-115 & 427-434; Watson-Watt, Three Steps, pp. 83 & 470-474; Bowen, "The Development of Airborne Radar in Great Britain, 1935-1945," in Russell W. Burns, ed. Radar Development to 1945 (London: Peter Peregrinus Press, 1988), pp. 177-188. For a description of the technology, see B. T. Neale, "CH--the First Operational Radar," in Burns, pp. 132-150.
11. Boot and Randall, "Historical Notes on the Cavity Magnetron," IEEE Transactions on Electron Devices ED-23 (1976): 724-729; R. W. Burns, "The Background to the Development of the Cavity Magnetron," in Burns, pp. 259-283.
12. Swords, p. xi.
13. Baxter, Scientists Against Time (Boston: Little, Brown and Company, 1946), p. 142; Swords, pp. 120, 259 & 266; Clark, especially pp. 248-271.
14. Guerlac, Radar in World War II, The History of Modern Physics, 1800-1950, vol. 8 (New York: Tomash Publishers for the American Institute of Physics, 1987), vol. 1, p. 249; Swords, pp. 90 & 119; Batt, pp. 79-80; Bowen, pp. 159-162; Watson-Watt, Pulse, pp. 228-229 & 257; Watson-Watt, Three Steps, p. 293.
In addition to Tizard and Bowen, the Mission team consisted of Prof. J. D. Cockcroft, Col. F. C. Wallace, Army, Capt. H. W. Faulkner, Navy, Capt. F. L. Pearce, Royal Air Force, W. E. Woodward Nutt, Ministry of Aircraft Production, Mission Secretary, Prof. R. H. Fowler, liaison officer for Canada and the United States of the Department of Scientific and Industrial Research, and Col. H. F. G. Letson, Canadian military attache in Washington.
15. Guerlac, Radar in World War II, 1:258-259, 261, 266 & 507-508, and 2:648 & 668. See also the personal reminiscences of Ernest C. Pollard, Radiation: One Story of the MIT Radiation Laboratory (Durham: The Woodburn Press, 1982). Interviews (though not all are transcribed) of some Radiation Laboratory participants are available at the IEEE Center for the History of Electrical Engineering (CHEE), Rutgers University. CHEE, Sources in Electrical History 2: Oral History Collections in U.S. Repositories (New York: IEEE, 1992), pp. 6-7. The British also developed magnetrons and radar equipment operating at microwave frequencies concurrently with the MIT Radiation Laboratory effort.
16. Guerlac, Radar in World War II, 1:247-248 & 117-119. For the Navy, see L. A. Hyland, "A Personal Reminiscence: The Beginnings of Radar, 1930-1934," in Burns, pp. 29-33; Robert Morris Page, The Origin of Radar (Garden City, NY: Anchor Books, Doubleday & Company, 1962); Page, "Early History of Radar in the U.S. Navy," in Burns, pp. 35-44; David Kite Allison, New Eye for the Navy: The Origin of Radar at the Naval Research Laboratory (Washington: Naval Research Laboratory, 1981); Guerlac, Radar in World War II, 1:59-92; Albert Hoyt Taylor, The First Twenty-five Years of the Naval Research Laboratory (Washington: Navy Department, 1948). On the Signal Corps, see Guerlac, Radar in World War II, 1:93-121; Harry M. Davis, History of the Signal Corps Development of U.S. Army Radar Equipment (Washington: Historical Section Field Office, Office of the Chief Signal Officer, 1945); Arthur L. Vieweger, "Radar in the Signal Corps," IRE Transactions on Military Electronics MIL-4 (1960): 555-561.
17. Gillmor, "Geospace and its Uses: The Restructuring of Ionospheric Physics Following World War II," in De Maria, Grilli, and Sebastiani, pp. 75-84, especially pp. 78-79.
18. DeWitt notebook, 21 May 1940, and DeWitt biographical sketch, HL Diana 46 (04), HAUSACEC. There is a rich literature on Jansky's discovery. A good place to start is Woodruff T. Sullivan III, "Karl Jansky and the Discovery of Extraterrestrial Radio Waves," in Sullivan, ed., The Early Years of Radio Astronomy: Reflections Fifty Years after Jansky's Discovery (New York: Cambridge University Press, 1984), pp. 3-42.
19. DeWitt to Trevor Clark, 18 December 1977, HL Diana 46 (04); "Background Information on DeWitt Observatory" and "U.S. Army Electronics Research and Development Laboratory, Fort Monmouth, New Jersey," March 1963, HL Diana 46 (26), HAUSACEC. For published full descriptions of the equipment and experiments, see DeWitt and E. King Stodola, "Detection of Radio Signals Reflected from the Moon," Proceedings of the Institute of Radio Engineers 37 (1949): 229-242; Jack Mofenson, "Radar Echoes from the Moon," Electronics 19 (1946): 92-98; and Herbert Kauffman, "A DX Record: To the Moon and Back," QST 30 (1946): 65-68.
20. DeWitt replies to Clark questions, HL Diana 46 (04), HAUSACEC.
21. HL Radar 46 (07), HAUSACEC; Harold D. Webb, "Project Diana: Army Radar Contacts the Moon," Sky and Telescope 5 (1946): 3-6.
22. DeWitt to Clark, 18 December 1977, HL Diana 46 (04), HAUSACEC; Guerlac, Radar in World War II, 1:380 & 382 and 2:702.
23. DeWitt, telephone conversation, 14 June 1993; Materials in folders HL Diana 46 (25), HL Diana 46 (28), and HL Diana 46 (33), USASEL Research & Development Summary vol. 5, no. 3 (10 February 1958): 58, in "Signal Corps Engineering Laboratory Journal/R&D Summary"; and Monmouth Message, 7 November 1963, n.p., in "Biographical Files," "Daniels, Fred Bryan," HAUSACEC; Daniels, "Radar Determination of the Scattering Properties of the Moon," Nature 187 (1960): 399; and idem., "A Theory of Radar Reflection from the Moon and Planets," Journal of Geophysical Research 66 (1961): 1781-1788.
24. "Diana," Time vol. 47, no. 5 (4 February 1946): 84; "Radar Bounces Echo off the Moon to Throw Light on Lunar Riddle," Newsweek vol. 27, no. 5 (4 February 1946): 76-77; "Man Reaches Moon with Radar," Life vol. 20, no. 5 (4 February 1946): 30.
25. Zoltán Bay, Life is Stronger, trans. Margaret Blakey Hajdu (Budapest: Püski Publisher, 1991), pp. 5 & 17-18; Francis S. Wagner, Zoltán Bay, Atomic Physicist: A Pioneer of Space Research (Budapest: Akadémiai Kiadó, 1985), pp. 23-27, 29, 31-32; Wagner, Fifty Years in the Laboratory: A Survey of the Research Activities of Physicist Zoltán Bay (Center Square, PA: Alpha Publications, 1977), p. 1.
26. Bay, "Reflection of Microwaves from the Moon," Hungarica Acta Physica 1 (1947): 1-6; Bay, Life is Stronger, pp. 20, 29; Wagner, Zoltán, pp. 39-40; Wagner, Fifty Years, pp. 1-2.
27. Smith and Carr, Radio Exploration of the Planetary System (New York: D. Van Nostrand, 1964), p. 123; Bay, "Reflection," pp. 2, 7-15 & 18-19; P. Vajda and J. A. White, "Thirtieth Anniversary of Zoltán Bay's Pioneer Lunar Radar Investigations and Modern Radar Astronomy," Acta Physica Academiae Scientiarum Hungaricae 40 (1976): 65-70; Wagner, Zoltán, pp. 40-41. Bay, Life is Stronger, pp. 103-124, describes the looting and dismantling of the Tungsram works by armed agents of the Soviet Union.
28. DeWitt, telephone conversation, 14 June 1993; DeWitt biographical sketch, HL Diana 46 (04), HAUSACEC; Wagner, Zoltán, p. 49; Wagner, Fifty Years, p. 2.
Among the others were Thomas Gold, Von Eshleman, and A. C. Bernard Lovell. Gold, retired Cornell University professor of astronomy, claims to have proposed a lunar radar experiment to the British Admiralty during World War II; Eshleman, Stanford University professor of electrical engineering, unsuccessfully attempted a lunar radar experiment aboard the U.S.S. Missouri
in 1946, while returning from the war; and Lovell proposed a lunar bounce experiment in a paper of May 1946. Gold 14/12/93, Eshleman 9/5/94, and Lovell, "Astronomer by Chance," manuscript, February 1988, Lovell materials, p. 183.
Even earlier, during the 1920s, the Navy unsuccessfully attempted to bounce a 32-KHz, 500-watt radio signal off the Moon. A. Hoyt Taylor, Radio Reminiscences: A Half Century (Washington: NRL, 1948), p. 133. I am grateful to Louis Brown for pointing out this reference.
29. See DeVorkin, passim.
30. A. M. Skellett, "The Effect of Meteors on Radio Transmission through the Kennelly-Heaviside Layer," Physical Review 37 (1931): 1668; Skellett, "The Ionizing Effect of Meteors," Proceedings of the Institute of Radio Engineers 23 (1935): 132-149. Skellett was a part-time graduate student in astronomy at Princeton University and an employee of Bell Telephone Laboratories, New York City. The research described in this article came out of a study of the American Telegraph and Telephone Company transatlantic short-wave telephone circuits in 1930-1932, and how they were affected by meteor ionization. DeVorkin, p. 275.
31. Appleton and R. Naismith, "The Radio Detection of Meteor Trails and Allied Phenomena," Proceedings of the Physical Society 59 (1947): 461-473; James S. Hey and G. S. Stewart, "Radar Observations of Meteors," Proceedings of the Physical Society 59 (1947): 858; Lovell, Meteor Astronomy (Oxford: Clarendon Press, 1954), pp. 23-24.
32. Hey, The Evolution of Radio Astronomy (New York: Science History Publications, 1973), pp. 19-23 & 33-34; Lovell, The Story of Jodrell Bank (London: Oxford University Press, 1968), p. 5; Hey, Stewart, and S. J. Parsons, "Radar Observations of the Giacobinid Meteor Shower," Monthly Notices of the Royal Astronomical Society 107 (1947): 176-183; Hey and Stewart, "Radar Observations of Meteors," Proceedings of the Physical Society 59 (1947): 858-860 & 881-882; Hey, The Radio Universe (New York: Pergamon Press, 1971), pp. 131-134; Lovell, Meteor Astronomy, pp. 28-29 & 50-52; Peter Robertson, Beyond Southern Skies: Radio Astronomy and the Parkes Telescope (New York: Cambridge University Press, 1992), p. 39; Dudley Saward, Bernard Lovell, a Biography (London: Robert Hale, 1984), pp. 142-145; David O. Edge and Michael J. Mulkay, Astronomy Transformed: The Emergence of Radio Astronomy in Britain (New York: Wiley, 1976), pp. 12-14. For a brief historical overview of the Royal Radar Establishment, see Ernest H. Putley, "History of the RSRE," RSRE Research Review 9 (1985): 165-174; and D. H. Tomin, "The RSRE: A Brief History from Earliest Times to Present Day," IEE Review 34 (1988): 403-407. This major applied science institution deserves a more rigorously researched history.
33. See Lovell, Echoes of War: The Story of H2S Radar (Bristol: Adam Hilger, 1991). Lovell's wartime records are stored at the Imperial War Museum, Lambeth Road, London.
34. Lovell 11/1/94; Lovell, Jodrell Bank, pp. 5-8, 10; Lovell, Meteor Astronomy, pp. 55-63; Edge and Mulkay, pp. 15-16; Saward, pp. 129-131; R. H. Brown and Lovell, "Large Radio Telescopes and their Use in Radio Astronomy," Vistas in Astronomy 1 (1955): 542-560; Blackett and Lovell, "Radio Echoes and Cosmic Ray Showers," Proceedings of the Royal Society of London ser. A, vol. 177 (1941): 183-186; and Lovell, "The Blackett-Eckersley-Lovell Correspondence of World War II and the Origin of Jodrell Bank," Notes and Records of the Royal Society of London 47 (1993): 119-131. For documents relating to equipment on loan from the Ministry of Aviation, the War Office, the Royal Radar Establishment, the Admiralty, and the Air Ministry as late as the 1960s, see 10/51, "Accounts," JBA.
35. Lovell 11/1/94; Lovell, Jodrell Bank, pp. 7-8, 10.
36. Lovell 11/1/94; Lovell, Jodrell Bank, pp. 8-10; Lovell, Clegg, and Congreve J. Banwell, "Radio Echo Observations of the Giacobinid Meteors 1946," Monthly Notices of the Royal Astronomical Society 107 (1947): 164-175. Banwell was a New Zealand veteran of the Telecommunications Research Establishment wartime radar effort and an expert on receiver electronics.
37. Saward, p. 137; Herlofson, "The Theory of Meteor Ionization," Reports on Progress in Physics 11 (1946-47): 444-454; Ellyett and Davies, "Velocity of Meteors Measured by Diffraction of Radio Waves from Trails during Formation," Nature 161 (1948): 596-597; Clegg, "Determination of Meteor Radiants by Observation of Radio Echoes from Meteor Trails," Philosophical Magazine ser. 7, vol. 39 (1948): 577-594; Davies and Lovell, "Radio Echo Studies of Meteors," Vistas in Astronomy 1 (1955): 585-598, provides a summary of meteor research at Jodrell Bank.
38. Lovell, Jodrell Bank, p. 12; Lovell, Meteor Astronomy, pp. 358-383.
39. Ron Doel, "Unpacking a Myth: Interdisciplinary Research and the Growth of Solar System Astronomy, 1920-1958," Ph.D. diss. Princeton University, 1990, pp. 33-35, 42-44 & 108-111; DeVorkin, pp. 96, 273, 278 & 293; Luigi G. Jacchia and Whipple, "The Harvard Photographic Meteor Programme," Vistas in Astronomy 2 (1956): 982-994; Whipple, "Meteors and the Earth's Upper Atmosphere," Reviews of Modern Physics 15 (1943): 246-264; Ibid., "The Baker Super-Schmidt Meteor Cameras," The Astronomical Journal 56 (1951): 144-145, states that the first such camera was installed in New Mexico in May 1951. Determining the origin of meteors was not the primary interest of Harvard research.
40. Whipple, "Some Problems of Meteor Astronomy," in H. C. Van de Hulst, ed., Radio Astronomy (Cambridge: Cambridge University Press, 1957), p. 376; Almond, Davies, and Lovell, "The Velocity Distribution of Sporadic Meteors," Monthly Notices of the Royal Astronomical Society 111 (1951): 585-608; 112 (1952): 21-39; 113 (1953): 411-427. The meteor studies at Jodrell Bank were continued into later years. See, for instance, I. C. Browne and T. R. Kaiser, "The Radio Echo from the Head of Meteor Trails," Journal of Atmospheric and Terrestrial Physics 4 (1953): 1-4.
41. W. E. Knowles Middleton, Radar Development in Canada: The Radio Branch of the National Research Council of Canada, 1939-1946 (Waterloo, Ontario: Wilfred Laurier University Press, 1981), pp. 18, 25, 27, 106-109; Millman and McKinley, "A Note on Four Complex Meteor Radar Echoes," Journal of the Royal Astronomical Society of Canada 42 (1948): 122; McKinley and Millman, "A Phenomenological Theory of Radar Echoes from Meteors," Proceedings of the Institute of Radio Engineers 37 (1949): 364-375; McKinley and Millman, "Determination of the Elements of Meteor Paths from Radar Observations," Canadian Journal of Research A27 (1949): 53-67; McKinley, "Deceleration and Ionizing Efficiency of Radar Meteors," Journal of Applied Physics 22 (1951): 203; McKinley, Meteor Science and Engineering (New York: McGraw-Hill, 1961), p. 20; Lovell, Meteor Astronomy, pp. 52-55.
42. Millman, McKinley, and M. S. Burland, "Combined Radar, Photographic, and Visual Observations of the 1947 Perseid Meteor Shower," Nature 161 (1948): 278-280; McKinley and Millman, "Determination of the Elements," p. 54; Millman and McKinley, "A Note," pp. 121-130; McKinley, "Meteor Velocities Determined by Radio Observations," The Astrophysics Journal 113 (1951): 225-267; F. R. Park, "An Observatory for the Study of Meteors," Engineering Journal 41 (1958): 68-70.
43. Whipple, "Recent Harvard-Smithsonian Meteoric Results," Transactions of the IAU 10 (1960): 345-350; Jack W. Baggaley and Andrew D. Taylor, "Radar Meteor Orbital Structure of Southern Hemisphere Cometary Dust Streams," pp. 33-36 in Alan W. Harris and Edward Bowell, eds., Asteroids, Comets, Meteors 1991 (Houston: Lunar and Planetary Institute, 1992); Baggaley, Duncan I. Steel, and Taylor, "A Southern Hemisphere Radar Meteor Orbit Survey," pp. 37-40 in ibidem; William Jones and S. P. Kingsley, "Observations of Meteors by MST Radar," pp. 281-284 in ibidem; Jun-ichi Wattanabe, Tsuko Nakamura, T. Tsuda, M. Tsutsumi, A. Miyashita, and M. Yoshikawa, "Meteor Mapping with MU Radar," pp. 625-627 in ibidem. The MST Radar and the AMOR were newly commissioned in 1990. The MU Radar is intended primarily for atmospheric research.
For the meteor radar research in Sweden and Czechoslovakia, see B. A. Lindblad and M. Simek, "Structure and Activity of Perseid Meteor Stream from Radar Observations, 1956-1978," pp. 431-434 in Claes-Ingva Lagerkvist and Hans Rickman, eds., Asteroids, Comets, Meteors (Uppsala: Uppsala University, 1983); A. Hajduk and G. Cevolani, "Variations in Radar Reflections from Meteor Trains and Physical Properties of Meteoroids," pp. 527-530 in Lagerkvist, H. Rickman, Lindblad, and M. Lindgren, Asteroids, Comets, Meteors III (Uppsala: Uppsala University, 1989); Simek and Lindblad, "The Activity Curve of the Perseid Meteor Stream as Determined from Short Duration Meteor Radar Echoes," pp. 567-570 in ibidem.
44. Ferrell, "Meteoric Impact Ionization Observed on Radar Oscilloscopes," Physical Review 2d ser., vol. 69 (1946): 32-33; Lovell, Meteor Astronomy, p. 28.
45. Signal Corps Engineering Laboratories, "Postwar Research and Development Program of the Signal Corps Engineering Laboratories, 1945," (Signal Corps, 1945), "Postwar R&D Program," HL R&D, HAUSACEC; John Q. Stewart, Michael Ference, John J. Slattery, Harold A. Zahl, "Radar Observations of the Draconids," Sky and Telescope 6 (March 1947): 3-5. They reported their later results in a paper, "Radar Observations of the Giacobinid Meteors," read before the December 1946 meeting of the American Astronomical Society in Boston. HL Diana 46 (26), HAUSACEC.
46. Wilbert F. Snyder and Charles L. Bragaw, Achievement in Radio: Seventy Years of Radio Science, Technology, Standards, and Measurement at the National Bureau of Standards (Boulder: National Bureau of Standards, 1986), pp. 461-465; Ross Bateman, A. G. McNish, and Pineo, "Radar Observations during Meteor Showers, 9 October 1946," Science 104 (1946): 434-435; Pineo, "Relation of Sporadic E Reflection and Meteoric Ionization," Science 110 (1949): 280-283; Pineo, "A Comparison of Meteor Activity with Occurrence of Sporadic-E Reflections," Science 112 (1950): 50-51; Pineo and T. N. Gautier, "The Wave-Frequency Dependence of the Duration of Radar-Type Echoes from Meteor Trails," Science 114 (1951): 460-462. Other articles by Pineo on his ionospheric research can be found in Laurence A. Manning, Bibliography of the Ionosphere: An Annotated Survey through 1960 (Stanford: Stanford University Press, 1962), pp. 421-423.
47. Gillmor, "Federal Funding and Knowledge Growth in Ionospheric Physics, 1945-1981," Social Studies of Science 16 (1986): 124.
48. Oswald G. Villard, Jr., "Listening in on the Stars," QST 30 (January, 1946): 59-60, 120 & 122; Helliwell, Whistlers and Related Ionospheric Phenomena (Stanford: Stanford University Press, 1965), pp. 11-23; Leslie, p. 58; Gillmor, "Federal Funding," p. 129.
49. Manning, Helliwell, Villard, and Evans, "On the Detection of Meteors by Radio," Physical Review 70 (1946): 767-768; Manning, "The Theory of the Radio Detection of Meteors," Journal of Applied Physics 19 (1948): 689-699; Manning, Villard, and Peterson, "Radio Doppler Investigation of Meteoric Heights and Velocities," Journal of Applied Physics 20 (1949): 475-479; Von R. Eshleman, "The Effect of Radar Wavelength on Meteor Echo Rate," Transactions of the Institute of Radio Engineers 1 (1953): 37-42. DeVorkin, pp. 287-288, points out that, when given an opportunity to make radio observations in coordination with rocket flights, Stanford declined.
50. Eshleman 9/5/94; Eshleman, "The Mechanism of Radio Reflections from Meteoric Ionization," Ph.D. diss., Stanford University, 1952; Eshleman, The Mechanism of Radio Reflections from Meteoric Ionization, Technical Report no. 49 (Stanford: Stanford Electronics Research Laboratory, 15 July 1952), pp. ii-iii & 3; Manning, "Meteoric Radio Echoes," Transactions of the Institute of Radio Engineers 2 (1954): 82-90; Manning and Eshleman, "Meteors in the Ionosphere," Proceedings of the Institute of Radio Engineers 47 (1959): 186-199.
51. Robert Desourdis, telephone conversation, 22 September 1994; Donald Spector, telephone conversation, 22 September 1994; Donald L. Schilling, ed., Meteor Burst Communications: Theory and Practice (New York: Wiley, 1993); Jacob Z. Schanker, Meteor Burst Communications (Boston: Artech House, 1990). For a civilian use of meteor burst communications, see Henry S. Santeford, Meteor Burst Communication System: Alaska Winter Field Test Program (Silver Spring, MD: U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Weather Service, Office of Hydrology, 1976).
52. Lovell 11/1/94; 7 & 8/55, "Accounts," JBA; Lovell, "Astronomer by Chance," typed manuscript, February 1988, p. 376, Lovell materials; Lovell, Jodrell Bank, p. 157; G. Nigel Gilbert, "The Development of Science and Scientific Knowledge: The Case of Radar Meteor Research," in Gerard Lemaine, Roy Macleod, Michael Mulkay, and Peter Weingart, eds., Perspectives on the Emergence of Scientific Disciplines (Chicago: Aldine, 1976), p. 191; Edge and Mulkay, pp. 330-331.
53. Lovell, Clegg, and Ellyett, "Radio Echoes from the Aurora Borealis," Nature 160 (1947): 372; Aspinall and Hawkins, "Radio Echo Reflections from the Aurora Borealis," Journal of the British Astronomical Association 60 (1950): 130-135; various materials in File Group "International Geophysical Year," Box 1, File 4, JBA; McKinley and Millman, "Long Duration Echoes from Aurora, Meteors, and Ionospheric Back-Scatter," Canadian Journal of Physics 31 (1953): 171-181.
54. Currie, Forsyth, and Vawter, "Radio Reflections from Aurora," Journal of Geophysical Research 58 (1953): 179-200.
55. Hellgren and Meos, "Localization of Aurorae with 10m High Power Radar Technique, using a Rotating Antenna," Tellus 3 (1952): 249-261; Harang and Landmark, "Radio Echoes Observed during Aurorae and Geomagnetic Storms using 35 and 74 Mc/s Waves Simultaneously," Journal of Atmospheric and Terrestrial Physics 4 (1954): 322-338; ibidem Nature 171 (1953): 1017-1018; Harang and J. Tröim, "Studies of Auroral Echoes," Planetary and Space Science 5 (1961): 33-45 & 105-108.
56. Jean Van Bladel, Les applications du radar à l'astronomie et à la météorologie (Paris: Gauthier-Villars, 1955), pp. 78-80; Neil Bone, The Aurora: Sun-Earth Interactions (New York: Ellis Horwood, 1991), pp. 36, 45-49; Alistair Vallance Jones, Aurora (Boston: D. Reidel Publishing Company, 1974), pp. 9, 11 & 27; Lovell, "Astronomer by Chance," manuscript, February 1988, p. 201, Lovell materials.
57. DeWitt and Stodola, p. 239.
58. Kerr, Shain, and Higgins, "Moon Echoes and Penetration of the Ionosphere," Nature 163 (1949): 310; Kerr and Shain, "Moon Echoes and Transmission through the Ionosphere," Proceedings of the IRE 39 (1951): 230; Kerr, "Early Days in Radio and Radar Astronomy in Australia," pp. 136-137 in Sullivan. Kerr and Shain, pp. 230-232, contains a better description of the system. See also Kerr, "Radio Superrefraction in the Coastal Regions of Australia," Australian Journal of Scientific Research, ser. A, vol. 1 (1948): 443-463.
59. D. D. Grieg, S. Metzger, and R. Waer, "Considerations of Moon-Relay Communication," Proceedings of the IRE 36 (1948): 660.
60. Kerr, Shain, and Higgins, p. 311.
61. Kerr and Shain, pp. 230-242.
62. Murray and Hargreaves, "Lunar Radio Echoes and the Faraday Effect in the Ionosphere," Nature 173 (1954): 944-945; Browne, Evans, Hargreaves, and Murray, p. 901; 1/17 "Correspondence Series 7," JBA; Lovell, "Astronomer by Chance," p. 183.
63. Evans 9/9/93; Hargreaves, "Radio Observations of the Lunar Surface," Proceedings of the Physical Society 73 (1959): 536-537; Evans, "Research on Moon Echo Phenomena," Technical (Final) Report, 1 May 1956, and earlier reports in 1/4 "Correspondence Series 2," JBA.
64. Evans 9/9/93; Evans, "The Scattering of Radio Waves by the Moon," Proceedings of the Physical Society B70 (1957): 1105-1112.
65. Evans 9/9/93; Edge and Mulkay, p. 298; Materials in 1/4 "Correspondence Series 2," and 2/53 "Accounts," JBA. With NASA funding, Jodrell Bank later participated in the Echo balloon project.
66. Harold Sobol, "Microwave Communications: An Historical Perspective," IEEE Transactions on Microwave Theory and Techniques MTT-32 (1984): 1170-1181.
67. Grieg, Metzger, and Waer, pp. 652-663; "Via the Moon: Relay Station to Transoceanic Communication," Newsweek 27 (11 February 1946): 64; Sulzer, Montgomery, and Gerks, "An U-H-F Moon Relay," Proceedings of the IRE 40 (1952): 361. A few years later, three amateur radio operators, "hams" who enjoyed detecting long-distance transmissions (DXing), succeeded in bouncing 144-MHz radio waves off the Moon on 23 and 27 January 1953. E. P. T., "Lunar DX on 144 Mc!" QST 37 (1953): 11-12 & 116.
68. Gebhard, pp. 115-116; James H. Trexler, "Lunar Radio Echoes," Proceedings of the IRE 46 (1958): 286-288.
69. NRL, "The Space Science Division and E. O. Hulburt Center for Space Research, Program Review," 1968, NRLHRC; Yaplee, R. H. Bruton, K. J. Craig, and Nancy G. Roman, "Radar Echoes from the Moon at a Wavelength of 10 cm," Proceedings of the IRE 46 (1958): 293-297; Gebhard, p. 118.
96 | Edges and vertices: 4
In Euclidean geometry, a convex quadrilateral with at least one pair of parallel sides is referred to as a trapezoid in American English and as a trapezium in English outside North America. The parallel sides are called the bases of the trapezoid and the other two sides are called the legs or the lateral sides (if they are not parallel; otherwise there are two pairs of bases). A scalene trapezoid is a trapezoid with no sides of equal measure, in contrast to the special cases below. A trapezoid with vertices ABCD is denoted ABCD.
There is some disagreement whether parallelograms, which have two pairs of parallel sides, should be counted as trapezoids. Some define a trapezoid as a quadrilateral having exactly one pair of parallel sides (the exclusive definition), thereby excluding parallelograms. Others define a trapezoid as a quadrilateral with at least one pair of parallel sides (the inclusive definition), making the parallelogram a special type of trapezoid. The latter definition is consistent with its uses in higher mathematics such as calculus. The former definition would make such concepts as the trapezoidal approximation to a definite integral ill-defined. This article uses the inclusive definition and considers parallelograms as special cases of a trapezoid. This is also advocated in the taxonomy of quadrilaterals.
The term trapezium has been in use in English since 1570, from Late Latin trapezium, from Greek τραπέζιον (trapézion), literally "a little table", a diminutive of τράπεζα (trápeza), "a table", itself from τετράς (tetrás), "four" + πέζα (péza), "a foot, an edge". The first recorded use of the Greek word translated trapezoid (τραπέζοειδη, trapézoeide, "table-like") was by Proclus (412 to 485 AD) in his Commentary on the first book of Euclid's Elements.
This article uses the term trapezoid in the sense that is current in the United States and Canada. In all other languages using a word derived from the Greek for this figure, the form closest to trapezium (e.g. French trapèze, Italian trapezio, German Trapez, Russian трапеция) is used.
In an isosceles trapezoid, the legs (AD and BC in the figure above) have the same length, and the base angles have the same measure. In a right trapezoid (also called right-angled trapezoid), two adjacent angles are right angles. A tangential trapezoid is a trapezoid that has an incircle.
- A convex quadrilateral is a trapezoid if and only if it has two adjacent angles that are supplementary, that is, they add up to 180 degrees.
- A convex quadrilateral is a trapezoid if and only if the diagonals cut each other in mutually the same ratio (this ratio is the same as that between the lengths of the parallel sides).
- A convex quadrilateral is a trapezoid if and only if the diagonals cut the quadrilateral into four triangles of which one opposite pair are similar.
- A convex quadrilateral is a trapezoid if and only if the diagonals cut the quadrilateral into four triangles of which one opposite pair have equal areas.:Prop.5
- A convex quadrilateral is a trapezoid if and only if the product of the areas of the two triangles formed by one diagonal equals the product of the areas of the two triangles formed by the other diagonal.:Thm.6
- A convex quadrilateral is a trapezoid if and only if the areas S and T of some two opposite triangles of the four triangles formed by the diagonals have the property that
- √K = √S + √T,
- where K is the area of the quadrilateral.:Thm.8
- A convex quadrilateral with successive sides a, c, b, and d and diagonals p and q is a trapezoid with parallel sides a and b if and only if p² + q² = c² + d² + 2ab.:Cor.11
- A convex quadrilateral is a trapezoid with parallel sides a and b if and only if the distance v between the midpoints of the diagonals is given by v = |a − b|/2.:Thm.12
- A convex quadrilateral is a trapezoid if and only if the midpoints of two sides and the intersection of the diagonals are collinear.:Thm.15
Midsegment and height
The midsegment (also called the median or midline) of a trapezoid is the segment that joins the midpoints of the legs. It is parallel to the bases. Its length m is equal to the average of the lengths of the bases a and b of the trapezoid, m = (a + b)/2.
The midsegment of a trapezoid is one of the two bimedians (the other bimedian divides the trapezoid into equal areas).
The height (or altitude) is the perpendicular distance between the bases. In the case that the two bases have different lengths (a ≠ b), the height of a trapezoid h can be determined by the length of its four sides using the formula
h = √[(−a + b + c + d)(a − b + c + d)(a − b + c − d)(a − b − c + d)] / (2|b − a|),
where c and d are the lengths of the legs. This formula also gives a way of determining when a trapezoid of consecutive sides a, c, b, and d exists. There is such a trapezoid with bases a and b if and only if |d − c| < |b − a| < d + c.
The area K of a trapezoid is given by K = [(a + b)/2] · h,
where a and b are the lengths of the parallel sides, and h is the height (the perpendicular distance between these sides.) In 499 AD Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, used this method in the Aryabhatiya (section 2.8). This yields as a special case the well-known formula for the area of a triangle, by considering a triangle as a degenerate trapezoid in which one of the parallel sides has shrunk to a point.
Therefore the area of a trapezoid is equal to the length of this midsegment multiplied by the height: K = m · h.
From the formula for the height, it can be concluded that the area can be expressed in terms of the four sides as
K = [(a + b) / (4|b − a|)] · √[(−a + b + c + d)(a − b + c + d)(a − b + c − d)(a − b − c + d)].
When one of the parallel sides has shrunk to a point (say a = 0), this formula reduces to Heron's formula for the area of a triangle.
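The height and four-side area expressions above are easy to sanity-check numerically. The short Python sketch below (standard library only; the sample side lengths are chosen for illustration — a right trapezoid whose height is 2 by construction) compares the four-side formula with the elementary (a + b)/2 · h computation.

```python
from math import sqrt, isclose

# Sample right trapezoid: bases a = 1 and b = 3, height 2 by construction,
# legs c = 2 (the perpendicular leg) and d = sqrt(2**2 + 2**2) (the slanted leg).
a, b = 1.0, 3.0
c, d = 2.0, sqrt(8.0)

# Height from the four sides (a != b assumed).
h = sqrt((-a + b + c + d) * (a - b + c + d) * (a - b + c - d) * (a - b - c + d)) / (2 * abs(b - a))

# Area from the four sides, and area from bases-times-height.
K_sides = (a + b) / (4 * abs(b - a)) * sqrt(
    (-a + b + c + d) * (a - b + c + d) * (a - b + c - d) * (a - b - c + d)
)
K_basic = (a + b) / 2 * h

print(h, K_sides, K_basic)          # 2.0 4.0 4.0
assert isclose(h, 2.0) and isclose(K_sides, K_basic)
```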
Another equivalent formula for the area, which more closely resembles Heron's formula, is
K = [(a + b) / |b − a|] · √[(s − a)(s − b)(s − a − c)(s − a − d)],
where s = (a + b + c + d)/2 is the semiperimeter of the trapezoid. (This formula is similar to Brahmagupta's formula, but it differs from it, in that a trapezoid might not be cyclic (inscribed in a circle). The formula is also a special case of Bretschneider's formula for a general quadrilateral).
From Bretschneider's formula, it follows that
The line that joins the midpoints of the parallel sides, bisects the area.
The lengths of the diagonals are
p = √[(ab² − a²b − ac² + bd²) / (b − a)]  and  q = √[(ab² − a²b − ad² + bc²) / (b − a)],
where a and b are the bases, c and d are the other two sides, and a < b.
If the trapezoid is divided into four triangles by its diagonals AC and BD (as shown on the right), intersecting at O, then the area of AOD is equal to that of BOC, and the product of the areas of AOD and BOC is equal to that of AOB and COD. The ratio of the areas of each pair of adjacent triangles is the same as that between the lengths of the parallel sides.
Let the trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and DC. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC:
FG = 2·AB·DC / (AB + DC).
The line that goes through both the intersection point of the extended nonparallel sides and the intersection point of the diagonals, bisects each base.
If the angle bisectors to angles A and B intersect at P, and the angle bisectors to angles C and D intersect at Q, then
More on terminology
The term trapezium is sometimes defined in the USA as a quadrilateral with no parallel sides, though this shape is more usually called an irregular quadrilateral. The term trapezoid was once defined as a quadrilateral without any parallel sides in Britain and elsewhere, but this does not reflect current usage. (The Oxford English Dictionary says "Often called by English writers in the 19th century".)
According to the Oxford English Dictionary, the sense of a figure with no sides parallel is the meaning for which Proclus introduced the term "trapezoid". This is retained in the French trapézoïde, German Trapezoid, and in other languages. A trapezium in Proclus' sense is a quadrilateral having one pair of its opposite sides parallel. This was the specific sense in England in 17th and 18th centuries, and again the prevalent one in recent use. A trapezium as any quadrilateral more general than a parallelogram is the sense of the term in Euclid. The sense of a trapezium as an irregular quadrilateral having no sides parallel was sometimes used in England from c. 1800 to c. 1875, but is now obsolete. This sense is the one that is sometimes quoted in the US, but in practice quadrilateral is used rather than trapezium.
In architecture the word is used to refer to symmetrical doors, windows, and buildings built wider at the base, tapering towards the top, in Egyptian style. If these have straight sides and sharp angular corners, their shapes are usually isosceles trapezoids.
- "American School definition from "math.com"". Retrieved 2008-04-14.
- Weisstein, Eric W., "Trapezoid", MathWorld.
- Trapezoids, , accessed 2012-02-24.
- Oxford English Dictionary entry at trapezoid.
- Martin Josefsson, "Characterizations of trapezoids", Forum Geometricorum, 13 (2013) 23-35.
- Quadrilateral Formulas, The Math Forum, Drexel University, 2012, .
- Aryabhatiya Marathi: आर्यभटीय, Mohan Apte, Pune, India, Rajhans Publications, 2009, p.66, ISBN 978-81-7434-480-9
- GoGeometry, , Accessed 2012-07-08.
- Owen Byer, Felix Lazebnik and Deirdre Smeltzer, Methods for Euclidean Geometry, Mathematical Association of America, 2010, p. 55.
- efunda, General Trapezoid, , Accessed 2012-07-09.
- Chambers 21st Century Dictionary Trapezoid
- "1913 American definition of trapezium". Merriam-Webster Online Dictionary. Retrieved 2007-12-10.
- Oxford English Dictionary entries for trapezoid and trapezium.
- Trapezoid definition Area of a trapezoid Median of a trapezoid With interactive animations
- Trapezoid (North America) at elsy.at: Animated course (construction, circumference, area)
- on Numerical Methods for Stem Undergraduate
- Autar Kaw and E. Eric Kalu, Numerical Methods with Applications, (2008) | http://en.wikipedia.org/wiki/Trapezoid | 13 |
73 | Expansion of Gases:
The thermal expansion of a gas involves 3 variables: volume, temperature, and pressure. Pressure of a gas, in a closed container, is the result of the collision of its molecules on the walls of that container. It is important to note that the kinetic energy of each gas molecule depends on its temperature only. Recall the definition of temperature: "the temperature of an object is a result of the vibrations of its atoms and molecules." For a gas, molecules are free to move and bounce repeatedly against each other and their container's walls. In each collision, a gas molecule transfers some momentum to its container's wall. Gas pressure is the result of such momentum transfers. The faster they move, the greater the impulse per collision they impart to the container's walls, causing a higher pressure. For a fixed volume, if the temperature of a gas increases (by heating), its pressure increases as well. This is simply because the increased kinetic energy of the gas molecules causes a greater number of collisions per second and therefore increased pressure.
One important formula to know is the formula for the average kinetic energy of a number of gas molecules that are at a given temperature.
The average K.E. of gas molecules is a function of temperature only. The formula is
(K.E.)avg. = (3/2) kT
where T is the absolute temperature in Kelvin scale and k is the " Boltzman's constant " with a value of k = 1.38x10-23 J /K.
By a number of gas molecules, we do not mean 1000 or even 1,000,000 molecules. Most often we mean much more than 10²⁴ molecules.
Note that kinetic energy on the other hand is K.E. = (1/2)MV2 , where V is the average speed of the gas molecules.
Since at a given temperature the average kinetic energy of gas molecules is constant, a gas molecule that has a greater mass moves more slowly, and a gas molecule that has a smaller mass moves faster. The following example clarifies this concept.
Example 1: Calculate the average K.E. of air molecules at 27.0oC. Also, calculate the average speed of its constituents: oxygen molecules and nitrogen molecules. Note that O2 = 32.0 grams / mole, and N2 = 28.0 grams / mole.
Solution: K.E. = (3/2) k T ; K.E. = (3/2) (1.38x10-23 J/K)(27+273)K = 6.21x10-21 J/molecule.
This means that any gas molecule, on the average, has this energy.
For an oxygen molecule: K.E. = (1/2)MV2 ;
6.21x10-21 J/molecule = (1/2) ( 32.0x10-3 kg / 6.02x1023molecule)V2.
483m/s = V.
For a nitrogen molecule: K.E. = (1/2)MV2 ;
6.21x10-21 J/molecule = (1/2) ( 28.0x10-3 kg / 6.02x1023molecule)V2.
V = 517m/s.
Nitrogen is lighter; therefore, its average speed is higher. Oxygen is heavier than nitrogen; therefore, its average speed is lower than that of nitrogen at the same temperature. Note that in SI, grams must be converted to kilograms. An Avogadro number (6.02x1023 molecules) of oxygen has a mass of 32.0 grams, and 32.0 grams means 32.0x10-3 kg.
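As a quick cross-check of Example 1, this minimal Python sketch reproduces the average kinetic energy and the average speeds quoted above, using the same rounded constants as the text.

```python
from math import sqrt

k = 1.38e-23          # Boltzmann's constant, J/K
N_A = 6.02e23         # Avogadro's number, molecules/mole
T = 27.0 + 273.0      # 27.0 C expressed in kelvin

KE = 1.5 * k * T      # average kinetic energy per molecule
print(f"KE = {KE:.3g} J/molecule")           # ~6.21e-21 J

for name, molar_mass_g in [("O2", 32.0), ("N2", 28.0)]:
    m = molar_mass_g * 1e-3 / N_A            # mass of one molecule, kg
    v = sqrt(2 * KE / m)                     # speed from KE = (1/2) m v^2
    print(f"{name}: {v:.0f} m/s")            # ~483 m/s and ~517 m/s
```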
Expansion of Gases: Perfect Gas Law:
If a gas fulfills two conditions, it is called a " perfect gas" or an " ideal gas " and its expansion follows the perfect gas law:
PV = nRT
where P is the gas absolute pressure (pressure with respect to vacuum), V is its volume (the volume of its container), n is the number of moles of gas in the container, R is the Universal gas constant, R = 8.314 [ J / (mole K) ], and T is the gas absolute temperature in Kelvin scale.
The two conditions for a gas to follow this equation are:
1) The gas pressure should not exceed about 80 atmospheres.
2) The gas must be superheated (gas temperature sufficiently above its boiling point) at the operating pressure and volume.
The Unit of " PV ":
Note that the product " PV " has dimensionally the unit of "energy." In SI, the unit of "P" is [ N / m2 ] and the unit of volume " V " is [ m3 ]. On this basis, the unit of the product " PV " becomes [ Nm ] or [ Joule ]. The " Joule " that appears in R = 8.314 J /(mole K) is for this reason.
Example 2: A 0.400m3 tank contains nitrogen at 27oC. The pressure gauge on it reads 3.75 atmosphere. Find (a) the number of moles of gas in the tank, and (b) the gas mass in kg.
Solution: PV = nRT ; n = (PV) / (RT) ; Use horiz. fraction bars when solving.
n = [(4.75x101000Pa)(0.400m3)] / [(8.314 J / (mole K))( 27 + 273)K].
(a) n = 76.9 moles.
(b) M = (76.9 moles)(28.0 grams /mole) = 2150 grams = 2.15 kg.
Note: Pabsolute = Pgauge + 1atmosphere = 4.75 atmosphere.
Tabsolute = oC + 273 or, K = oC + 273.
Example 3: A 0.770m3 hydrogen tank contains 0.446 kg of hydrogen at 127 oC. The pressure gage on it is not working. What pressure should the gauge show? Note that 6.02x1023 molecules of H2 amount to 2.00 grams.
Solution: n = (0.446x103 grams) / (2.00 grams / mole) = 223 moles.
PV = nRT ; P = (nRT) / V ; Use horiz. fraction bars when solving.
P = (223 moles)( 8.314 (J / mole K)) (127 + 273)K / ( 0.770 m3 ).
Pabs ≈ 963,000 Pascals.
Pgauge = Pabs - 1 atm = 963,000 Pascals - 101,000 Pascals = 862,000 Pa (about 8.5 atmospheres).
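Example 3 can be verified in a few lines of Python, using the data as stated in the problem (0.446 kg of hydrogen in a 0.770 m³ tank at 127 °C) and the rounded values of R and atmospheric pressure used throughout this chapter.

```python
R = 8.314            # J/(mol K)
atm = 101_000        # Pa, the rounded value used in this chapter

n = 0.446e3 / 2.00   # grams of H2 divided by 2.00 g/mol -> 223 mol
T = 127 + 273        # K
V = 0.770            # m^3

P_abs = n * R * T / V
P_gauge = P_abs - atm
print(f"P_abs ≈ {P_abs:,.0f} Pa, P_gauge ≈ {P_gauge:,.0f} Pa "
      f"({P_gauge / atm:.1f} atm gauge)")
# P_abs ≈ 963,128 Pa, P_gauge ≈ 862,128 Pa (about 8.5 atm gauge)
```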
Equation of State:
Equation PV = nRT is also called the "equation of state." The reason is that for a certain amount of a gas, i.e., a fixed mass, the number of moles is fixed. A change in any of the variables: P , V , or T, or any two of them, results in a change in one or the other two. Regardless of the changes, PV = nRT holds true for any state that the gas is in, as long as the two conditions of a perfect gas are maintained. That's why it is called the equation of state. A gas is considered to be ideal if its temperature is quite above its boiling point and its pressure is under 80 atmospheres. These two conditions must be met in any state that the gas is in, in order for this equation to be valid.
Now suppose that a fixed mass of a gas is in state 1: P1, V1, and T1. We can write P1V1 = nRT1. If the gas goes through a certain change and ends up in state 2: P2, V2, and T2, the equation of state for it becomes P2V2 = nRT2.
Dividing the 2nd equation by the 1st one, side by side, results in (P2V2 / P1V1) = ( nRT2 / nRT1 ). Simplifying yields:
(P2V2 / P1V1) = ( T2 / T1 ).
This equation simplifies the solution to many problems. Besides its general form shown above, it finds 3 other forms: one for constant pressure, one for constant temperature, and one for constant volume.
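The ratio form of the equation of state is convenient to wrap in a small helper. The Python sketch below is illustrative only (the function names are not from any textbook); it solves P1V1/T1 = P2V2/T2 for a missing final volume or pressure, with absolute pressures and kelvin temperatures, and reproduces the answers of Examples 4(b) and 7 below.

```python
def final_volume(P1, V1, T1, P2, T2):
    """Solve P1*V1/T1 = P2*V2/T2 for V2 (absolute pressures, kelvin temperatures)."""
    return P1 * V1 * T2 / (T1 * P2)

def final_pressure(P1, V1, T1, V2, T2):
    """Solve the same relation for P2 instead."""
    return P1 * V1 * T2 / (T1 * V2)

# Example 4(b): 0.442 m^3 of O2 at 3.80 atm absolute and 400 K,
# compressed to 7.60 atm absolute and cooled to 300 K.
print(final_volume(3.80, 0.442, 400, 7.60, 300))   # ~0.166 m^3

# Example 7: fixed 15.0 L cylinder, 12.0 atm absolute at 280 K, heated to 420 K.
print(final_pressure(12.0, 15.0, 280, 15.0, 420))  # 18.0 atm absolute, i.e. 17.0 atm gauge
```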
Example 4: 1632 grams of oxygen is at 2.80 atm. of gauge pressure and a temperature of 127oC. Find (a) its volume. It is then compressed to 6.60 atm. of gauge pressure while cooled down to 27oC. Find (b) its new volume.
Solution: n = (1632 / 32.0) moles = 51.0 moles ; (a) PV = nRT ; V = nRT/p ;
V = (51 moles)( 8.314 (J / mole K))(127+273)K / [3.80x101,000]Pa = 0.442m3.
(b) P2V2 / P1V1 = T2 / T1 ; (7.6atm)(V2) / [(3.8atm)(0.442m3)] = 300.K/400.K (Use horiz. frac. bars).
V2 = 0.166m3.
Constant Pressure (Isobar) Processes:
A process in which the pressure of an ideal gas does not change is called an " isobar process." Constant pressure means P2=P1. This simplifies the equation P2V2 / P1V1 = T2/ T1 to equation: V2 / V1 = T2 / T1.
Example 5: A piston-cylinder mechanism as shown below may be used to keep a constant pressure. The pressure on the gas under the piston is 0 gauge plus the extra pressure that the weight generates. Let the piston's radius be 10.0 cm and the weight 475N, and suppose that the position of the piston at 77oC is 25.0cm from the bottom of the cylinder. Find its position when the system is heated and the temperature is 127oC.
V1 = πr2h1 = π(10.0cm)2(25.0cm) = 2500π cm3.
V2 = πr2h2 = π(10.0cm)2( h2 ) = (100π)h2 cm3.
P1 = P2 = Constant. (It does not change, no need to calculate it).
T1 = 77 oC + 273oC = 350 K.
T2 = 127oC + 273oC = 400 K.
Using V2 / V1 = T2 / T1 results in:
( 100πh2 ) / ( 2500π ) = 400/ 350 , or
h2 = 28.6cm.
Constant Temperature (Isothermal) Processes:
A process in which the temperature of an ideal gas does not change is called an " isothermal process." Constant temperature means T2=T1. This simplifies the equation P2V2 / P1V1 = T2/ T1 to equation: P2V2 / P1V1 = 1. Cross-multiplication results in: P2V2 = P1V1.
Example 6: A piston cylinder system has an initial volume of 420 cm3 and the air in it is at a pressure of 3.00 atmospheres as its gauge shows. The gas is compressed to a volume of 140cm3 by pushing the piston. The generated heat is removed by enough cooling such that the temperature remains constant. Find the final pressure of the gas.
Solution: Since T = constant, therefore, P2V2 = P1V1 ; P2(140cm3) = (4.00atm)(420cm3) ;
(P2)absolute = 12.0 atm ; (P2)gauge = 11.0 atm.
Constant Volume (Isometric) Processes:
A process in which the volume of an ideal gas does not change is called an " isometric process." Constant volume means V2=V1. This simplifies the equation P2V2 / P1V1 = T2/ T1 to equation: P2 / P1 = T2 / T1. Gas cylinders have constant volumes.
Example 7: A 15.0 liter gas cylinder contains helium at 7oC and 11.0atm of gauge pressure. It is warmed up to 147oC. Find its new gauge pressure.
Solution: P2 / P1 = T2 / T1 ; P2 = P1 (T2 / T1 ) ; (P2)gauge = 17.0 atm.
Note: If your solution resulted in 16.5atm and you rounded it to 17 atmospheres, it is wrong. The answer is exactly 17.0 atm. without rounding.
Chapter 14 Test Yourself 1:
1) The temperature of a gas is the result of the (a) average K.E. of its atoms or molecules (b) average P.E. of its atoms and molecules (c) net momentum its atoms and molecules have. click here
2) The net momentum of gas molecules in a container is (a) equal to the net K.E. of the atoms or molecules (b) zero (c) neither a nor b.
3) The pressure of a gas in its container (a) is the result of the average momentum transfer of its molecules to the walls of the container (b) depends on the temperature of the gas that in turn depends on the average K.E. of the gas molecules (c) both a & b. click here
4) In a head-on collision of two equal and rigid masses, (a) the same-mass molecules exchange velocities (b) if one molecule is initially at rest, it will move at the velocity of the colliding molecule, and the colliding molecule comes to stop. (c) both a & b.
5) According to the "Kinetic Theory of Gases", the average K.E. of gas molecules is a function of (a) pressure only (b) volume only (c) temperature only. click here
6) The average K.E. of gas molecules in a container that is at temperature T is equal to (a) kT (b) 1/2 kT (c) 3/2 kT where K is the Boltzman's constant.
7) The Boltzman's constant (k) has a value of (a) 1.38x10-23 /K (b) 1.38x10-23 J (c) 1.38x10-23 J/ K. click here
8) The formula (K.E.)avg. = (3/2) kT is applicable to (a) all gases (b) monoatomic gases only (c) diatomic gases only.
9) The energy of a gas in a container due to its temperature is because of the energy it constantly receives from (a) the surroundings via heat transfer (b) the pressure from its container (c) the gravitational field of the Earth. click here
10) The ratio of the mass of an oxygen molecule to that of a hydrogen molecule is (a) 32 (b) 16 (c) 8.
11) The (K.E.)avg. of a gas molecule is on one hand 3/2kT and on the other hand is (a) Mv2 (b) 1/2 Mv2 (c) Mv.
12) At a given temperature, if the average speed of oxygen molecules is 480m/s, the average speed of hydrogen molecules is (a)120m/s (b)1920m/s (c)960m/s. click here
13) A gas is treated as a perfect gas if its pressure and temperature are respectively: (a) less than 80atm, under boiling point (b) more than 80atm, at boiling point (c) less then 80atm, above boiling point.
14) When the temperature of a gas is quite above its boiling point, the molecules are quite energetic and bounce around, and therefore do not stick together to condensate into liquid phase. This is a good reason for a gas to be a perfect gas and follow the perfect gas formula. (a) True (b) False. click here
15) When the pressure of a gas is under 80atm, the pressure is not too high to keep the molecules closer together to where the molecular attraction forces between the molecules become significant. Since no provision is made in the perfect gas law (PV=nRT) for molecular attraction; therefore, the formula is valid for pressures under 80 atmospheres. (a) True (b) False
16) In the formula PV = nRT, (a) P is absolute pressure only (b) T is absolute temperature only (c) both P and T are absolute quantities. click here
17) In the formula PV = nRT, the value of R is 8.314 J/(mole K) in (a) SI units only (b) all systems (c) American systems only.
18) The product PV has units if (a) force (b) energy (c) power.
19) In SI, since R = 8.314 J/(mole K), the product PV is in (a) lb-ft (b) Joules (c) watts. click here
20) The gauge on a gas tank shows the pressure as 2.5atm. The absolute pressure is (a) 1.5atm (b) 3.5atm (c) also 2.5atm.
21) The gas pressure in a tank is (340kPa)gauge. The absolute pressure is (a) 440kPa (b) 240kPa (c) also 140kPa.
22) The absolute pressure of a gas tank is 44.7psi. Its gauge should show (a) 59.4psi. (b) 43.7psi. (c) 30.0psi.
23) The Avogadro number is (a) 6.02x1023 molecules (b) 6.02x1023 molecules/mole (c) 6.02x1023 grams.
24) 6.02x1023 molecules of O2 have a mass of (a) 32.0grams (b) 16.0grams (c) 8.0grams. click here
25) 6.02x1023 molecules of N2 have a mass of (a) 34.0grams (b) 18.0grams (c) 28.0grams.
26) One mole of N2 has a mass of (a) 34.0grams (b) 18.0grams (c) 28.0grams.
27) One mole of O2 has a mass of (a) 32.0grams (b) 16.0grams (c) 8.0grams. click here
Problem: Let us select 1 mole of a perfect gas, any perfect gas, hydrogen, nitrogen, helium, etc., and put it in a container that can have a variable volume. Also, let us create STP (Standard Pressure and Temperature) for it, that is 1atm of absolute pressure and 0ºC. In Metric units, the standard pressure and temperature are: 101,000Pa and 273K. Answer the following:
28) Plugging these values: n = 1 mole, Pabs = 101,000Pa, T = 273K in the perfect gas formula PV = nRT , and solving for volume, yields: (a) V = 0.0224m3 (b) V = 22.4 Liter (c) both a & b. Note that: 1m3 = 1000 liter.
29) We may say that: one mole of any perfect gas at STP ( 1atm of absolute pressure, or 0 gauge pressure, and 0oC, or 273oK) occupies the same volume of 22.4 liter. (a) True (b) False click here
30) In an isothermal process, (a) T2 = T1 (b) P2V2 = P1V1 (c) both a & b.
31) In an isometric process, (a) P2 = P1 (b) V2 = V1 (c) P2V2 = P1V1.
32) In an isometric process, (a) V2 = V1 (b) P2 / P1 = T2 / T1 (c) both a & b.
33) In an isobar process, (a) P2 = P1 (b) V2 = V1 (c) P2V2 = P1V1.
34) In an isobar process, (a) P2 = P1. (b) V2 / V1 = T2 / T1. (c) both a & b. click here
35) At constant pressure, if the absolute temperature of a gas is doubled, its volume (a) doubles (b) triples (c) becomes half of what it was.
36) At constant volume, if the absolute temperature of a gas is doubled, its pressure (a) doubles (b) triples (c) becomes half of what it was. click here
37) At constant temperature, if the volume of a gas tripled by expansion, its pressure (a) doubles (b) triples (c) becomes 1/3 of what it was.
38) At constant temperature, if the pressure of a gas quadrupled by compression, its volume (a) doubles (b) triples (c) becomes 1/4 of what it was.
39) At constant temperature, when the pressure of a gas increases, its volume (a) increases (b) decreases (c) does not change. click here
40) The trick to create a constant pressure for a gas is to (a) place it in a tank with fixed boundaries (b) place it under a piston that can easily move up and down a cylinder without allowing the gas to escape (c) neither a nor b.
41) To keep a constant temperature for a gas (a) its volume must be kept constant regardless of its pressure (b) its pressure must be kept constant regardless of its volume (c) heat must be supplied to the gas or removed from it to keep it at the desired temperature.
42) Constant volume for a gas means keeping it (a) in a rigid closed cylinder (b) keep it under a piston-cylinder system such that the piston cannot move (c) both a & b. click here
43) A certain amount of gas in a closed rigid cylinder loses pressure if it is (a) cooled down (b) warmed up (c) both a & b.
44) If there is a certain amount of gas under a piston-cylinder system and the piston is pulled up such that the gas volume is increased, the gas temperature (a) goes up (b) goes down (c) remains unchanged (d) insufficient information. click here
1) Calculate (a) the mass (in kg) of each CO2 molecule as well as each He molecule knowing that their molar weights are 44.0 gr/mole and 4.0 gr/mole, respectively. Find (b) the average K.E. of each at 57.0oC. Find (c) the average speed of each at 57.0oC. The Avogadro Number is 6.02x1023 atoms/mole.
2) A 0.200m3 tank contains CO2 at 77oC. The pressure gauge on it reads 7.75 atmospheres. Find (a) the number of moles of gas in the tank, and (b) the mass in kg. Each mole of CO2 has a mass of 44.0grams. R = 8.314 J/(mole K).
3) A 555-liter tank contains 1.2 kg of helium at -73 oC. What pressure should the gauge on it show (a) in kPa and (b) in psi? Each mole of He is 4.00 grams. 1m3 = 1000 liter.
4) A tank contains 2.800 kg of nitrogen at a pressure of 3.80 atmosphere as its gauge shows. It is kept at a temperature of 27oC. Find (a) its volume in liters. It is then kept in another room for several hours that has a temperature of 77oC. Find (b) its new pressure. Neglect the very small change in the volume of the tank due to thermal expansion and treat the volume as a constant.
5) A piston cylinder system has an initial volume of 960 cm3 and the air in it is at zero gauge pressure. (a) If it is taken to outer space while keeping its volume and temperature constant, what pressure will its gauge show? Here on Earth, if the gas is compressed to a volume of 160cm3 by pushing the piston while keeping its temperature constant by cooling, find (b) its final gauge pressure.
6) A 36.0 liter metal cylinder contains nitrogen at 127oC and 15.0atm of gauge pressure. It is warmed up to 177oC. Find (a) its new gauge pressure. (b) how much gas does it contain?
1) 7.31x10-26kg , 6.64x10-27kg , 6.83x10-21J, 432m/s, 1430m/s
2) 60.7moles, 2.67kg 3) 798kPa, 116psi 4) 513 liters, 4.6 atm.
5) 1.0atm, 5.0atm 6) 17atm, 491grams | http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter14.htm | 13 |
59 | In geometry, an object such as a line or vector is called a normal to another object if they are perpendicular to each other. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at the point.
In the three-dimensional case a surface normal, or simply normal, to a surface at a point P is a vector that is perpendicular to the tangent plane to that surface at P. The word "normal" is also used as an adjective: a line normal to a plane, the normal component of a force, the normal vector, etc. The concept of normality generalizes to orthogonality.
The concept has been generalized to differentiable manifolds of arbitrary dimension embedded in a Euclidean space. The normal vector space or normal space of a manifold at a point P is the set of the vectors which are orthogonal to the tangent space at P. In the case of differential curves, the curvature vector is a normal vector of special interest.
The normal is often used in computer graphics to determine a surface's orientation toward a light source for flat shading, or the orientation of each of the corners (vertices) to mimic a curved surface with Phong shading.
Normal to surfaces in 3D space
Calculating a surface normal
For a plane given by the equation ax + by + cz + d = 0, the vector (a, b, c) is a normal.
For a plane given by the parametric equation
r = a + s·b + t·c,
i.e., a is a point on the plane and b and c are (non-parallel) vectors lying on the plane, the normal to the plane is a vector normal to both b and c, which can be found as the cross product n = b × c.
For a hyperplane in n+1 dimensions, given by the parametric equation
r = a0 + α1·a1 + ⋯ + αn·an,
where a0 is a point on the hyperplane and ai for i = 1, ..., n are non-parallel vectors lying on the hyperplane, a normal to the hyperplane is any vector in the null space of A, where A is the matrix whose rows are the vectors a1, ..., an.
That is, any vector orthogonal to all in-plane vectors is by definition a surface normal.
If the surface is given implicitly as the set of points (x, y, z) satisfying an equation F(x, y, z) = 0, then a normal at a point on the surface is given by the gradient ∇F at that point, since the gradient at any point is perpendicular to the level set, and (the surface) is a level set of F.
For a surface S given explicitly as a function of the independent variables x, y (e.g., z = f(x, y)), its normal can be found in at least two equivalent ways. The first one is obtaining its implicit form F(x, y, z) = z − f(x, y) = 0, from which the normal follows readily as the gradient
∇F = (−∂f/∂x, −∂f/∂y, 1).
(Notice that the implicit form could be defined alternatively as
F(x, y, z) = f(x, y) − z;
these two forms correspond to the interpretation of the surface being oriented upwards or downwards, respectively, as a consequence of the difference in the sign of the partial derivative ∂F/∂z.) The second way of obtaining the normal follows directly from the gradient of the explicit form,
- n = k̂ − ∇f(x, y), where k̂ is the upward unit vector.
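A small symbolic check of the two recipes just described, using SymPy with an arbitrary sample surface z = f(x, y) (the particular f is made up for illustration): the gradient of F = z − f(x, y) and the k̂ − ∇f form give the same upward-pointing normal.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
f = x**2 + sp.sin(y)                 # sample explicit surface z = f(x, y)

F = z - f                            # implicit form F(x, y, z) = 0
grad_F = sp.Matrix([sp.diff(F, v) for v in (x, y, z)])
print(grad_F.T)                      # Matrix([[-2*x, -cos(y), 1]])

# The same normal written as k_hat - grad(f), with grad(f) embedded in 3-D:
n = sp.Matrix([0, 0, 1]) - sp.Matrix([sp.diff(f, x), sp.diff(f, y), 0])
print((grad_F - n).T)                # Matrix([[0, 0, 0]]) -> the two recipes agree
```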
If a surface does not have a tangent plane at a point, it does not have a normal at that point either. For example, a cone does not have a normal at its tip nor does it have a normal along the edge of its base. However, the normal to the cone is defined almost everywhere. In general, it is possible to define a normal almost everywhere for a surface that is Lipschitz continuous.
Uniqueness of the normal
A normal to a surface does not have a unique direction; the vector pointing in the opposite direction of a surface normal is also a surface normal. For a surface which is the topological boundary of a set in three dimensions, one can distinguish between the inward-pointing normal and outer-pointing normal, which can help define the normal in a unique way. For an oriented surface, the surface normal is usually determined by the right-hand rule. If the normal is constructed as the cross product of tangent vectors (as described in the text above), it is a pseudovector.
Transforming normals
When applying a transform to a surface it is sometimes convenient to derive normals for the resulting surface from the original normals. All points P on the tangent plane are transformed to P′. We want to find n′ perpendicular to the transformed tangent plane. Let t be a vector on the tangent plane and Ml be the upper 3x3 matrix (the translation part of the transformation does not apply to normal or tangent vectors).
So use the inverse transpose of the linear transformation (the upper 3x3 matrix) when transforming surface normals. Also note that the inverse transpose is equal to the original matrix if the matrix is orthonormal, i.e. purely rotational with no scaling or shearing.
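A brief NumPy sketch of this rule (the matrix M below is an arbitrary example mixing scale and shear): transforming a tangent vector by M and the normal by the inverse transpose of M keeps the two perpendicular, whereas transforming both by M does not.

```python
import numpy as np

M = np.array([[1.0, 0.0, 0.0],      # arbitrary linear part: non-uniform scale + shear
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 1.0]])

n = np.array([0.0, 0.0, 1.0])       # normal of the z = 0 plane
t = np.array([1.0, 2.0, 0.0])       # a tangent vector in that plane

t_new = M @ t                        # transformed tangent vector
n_wrong = M @ n                      # naive: transform the normal like a point
n_right = np.linalg.inv(M).T @ n     # correct: inverse transpose

print(np.dot(n_wrong, t_new))        # 1.0  -> no longer perpendicular
print(np.dot(n_right, t_new))        # 0.0  -> still perpendicular
```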
Hypersurfaces in n-dimensional space
The definition of a normal to a surface in three-dimensional space can be extended to (n − 1)-dimensional hypersurfaces in an n-dimensional space. A hypersurface may be locally defined implicitly as the set of points satisfying an equation F(x₁, …, xₙ) = 0, where F is a given scalar function. If F is continuously differentiable then the hypersurface is a differentiable manifold in the neighbourhood of the points where the gradient is not null. At these points the normal vector space has dimension one and is generated by the gradient
∇F = (∂F/∂x₁, …, ∂F/∂xₙ).
The normal line at a point of the hypersurface is defined only if the gradient is not null. It is the line passing through the point and having the gradient as direction.
Varieties defined by implicit equations in n-dimensional space
A differential variety defined by implicit equations in the n-dimensional space is the set of the common zeros of a finite set of differentiable functions in n variables
f₁(x₁, …, xₙ), …, f_k(x₁, …, xₙ).
The Jacobian matrix of the variety is the k×n matrix whose i-th row is the gradient of fi. By implicit function theorem, the variety is a manifold in the neighborhood of a point of it where the Jacobian matrix has rank k. At such a point P, the normal vector space is the vector space generated by the values at P of the gradient vectors of the fi.
In other words, a variety is defined as the intersection of k hypersurfaces, and the normal vector space at a point is the vector space generated by the normal vectors of the hypersurfaces at the point.
The normal (affine) space at a point P of the variety is the affine subspace passing through P and generated by the normal vector space at P.
These definitions may be extended verbatim to the points where the variety is not a manifold.
Let V be the variety defined in the 3-dimensional space by the equations
z = 0 and x·y = 0.
This variety is the union of the x-axis and the y-axis.
At a point (a, 0, 0) where a≠0, the rows of the Jacobian matrix are (0, 0, 1) and (0, a, 0). Thus the normal affine space is the plane of equation x=a. Similarly, if b≠0, the normal plane at (0, b, 0) is the plane of equation y=b.
At the point (0, 0, 0) the rows of the Jacobian matrix are (0, 0, 1) and (0,0,0). Thus the normal vector space and the normal affine space have dimension 1 and the normal affine space is the z-axis.
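The Jacobian computation in this example can be reproduced symbolically; the SymPy sketch below uses the defining functions z and xy from the example and evaluates the Jacobian at the two kinds of points discussed.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
funcs = sp.Matrix([z, x * y])             # defining functions of the variety
J = funcs.jacobian([x, y, z])
print(J)                                  # Matrix([[0, 0, 1], [y, x, 0]])

a = sp.symbols("a", nonzero=True)
print(J.subs({x: a, y: 0, z: 0}))         # rows (0, 0, 1) and (0, a, 0): rank 2
print(J.subs({x: 0, y: 0, z: 0}))         # rows (0, 0, 1) and (0, 0, 0): rank 1
```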
- Surface normals are essential in defining surface integrals of vector fields.
- Surface normals are commonly used in 3D computer graphics for lighting calculations; see Lambert's cosine law.
- Surface normals are often adjusted in 3D computer graphics by normal mapping.
- Render layers containing surface normal information may be used in Digital compositing to change the apparent lighting of rendered elements.
Normal in geometric optics
The normal is the line perpendicular to the surface of an optical medium. In reflection of light, the angle of incidence and the angle of reflection are respectively the angle between the normal and the incident ray and the angle between the normal and the reflected ray.
See also
- "The Law of Reflection". The Physics Classroom Tutorial. Retrieved 2008-03-31. | http://en.wikipedia.org/wiki/Surface_normal | 13 |
53 | Thanks for visiting the U.S. number format version of the decimals and percents worksheets page at Math-Drills.Com where we make a POINT of helping students learn. On this page, you will find a variety of worksheets that will help students reinforce the skills they are learning related to decimals. To start, you will find the general use printables below to be helpful in teaching the concepts of decimals and place value. More information on them is included just under the sub-title.
Further down the page, rounding, comparing and ordering decimals worksheets allow students to gain more comfort with decimals before they move on to performing operations with decimals. There are many operations with decimals worksheets throughout the page. It would be a really good idea for students to have a strong knowledge of addition, subtraction, multiplication and division before attempting these questions. At the end of the page, you will find decimal numbers used in order of operations questions.
General Use Printables
The thousandths grid is a useful tool in representing operations with decimals. Each small rectangle represents a thousandth. Each square represents a hundredth. Each row or column represents a tenth. The entire grid represents one whole. The hundredths grid can be used to model percents or decimals. The decimal place value chart is a tool used with students who are first learning place value related to decimals or for those students who have difficulty with place value when working with decimals.
Expanded Form with Decimals
For students who have difficulty with expanded form, try familiarizing them with the decimal place value chart first, and let them use it to help them write numbers in expanded form. There are many ways to write numbers in expanded form. 1.23 could be written as 1 + 0.2 + 0.03 OR 1 + 2/10 + 3/100 OR 1 × 10⁰ + 2 × 10⁻¹ + 3 × 10⁻² OR any of the previous two written with parentheses/brackets OR 1 + 2 × 1/10 + 3 × 1/100 with or without parentheses, etc. Despite what the answer key shows, please teach any or all of the ways depending on your students' learning needs.
Rounding Decimals Worksheets
The convention on the decimals worksheets below is to round up on a five. Apply rounding knowledge to estimate answers on the operational worksheets that follow.
|Round Hundredths to a Whole Number||A B C D E F G H I J|
|Round Hundredths to a Tenth||A B C D E F G H I J|
|Round Thousandths to a Tenth||A B C D E F G H I J|
|Round Thousandths to a Hundredth||A B C D E F G H I J|
|Round Ten Thousandths to a Hundredth||A B C D E F G H I J|
Comparing Decimals Worksheets
Use the decimals worksheets below to help students recognize ordinality in decimal numbers.
Sorting/Ordering Decimals Worksheets
The decimals worksheets below help students compare numbers further by ordering lists of decimal numbers.
|Ordering Decimal Hundredths||A B C D E F G H I J|
|Ordering Decimal Thousandths||A B C D E F G H I J|
Percents are a special kind of decimal. Once the connection is seen between the two (a percent is just 100 times a decimal), they are not so mysterious. Percents have a few specific applications that we've highlighted in the worksheets below.
|Finding Percents of a Number||A B C D E F G H I J|
|Finding Percents of a Large Number (no decimals)||A B C D E F G H I J|
|Finding Percents of a Large Number||A B C D E F G H I J|
|What is the Percent?||A B C D E F G H I J|
|Comparing Percents of Numbers||A B C D E F G H I J|
Adding Decimals Worksheets
Try the following mental addition strategy for decimals. Begin by ignoring the decimals in the addition question. Add the numbers as if they were whole numbers. For example, 3.25 + 4.98 could be viewed as 325 + 498 = 823. Use an estimate to decide where to place the decimal. In the example, 3.25 + 4.98 is approximately 3 + 5 = 8, so the decimal in the sum must go between the 8 and the 2 (i.e. 8.23)
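The same place-the-decimal-by-estimating idea can be played with in a few lines of Python. The sketch below is purely illustrative (it assumes both addends have the same number of decimal places, as in the examples on this page): it adds the digits as whole numbers and then slides the decimal point to the position that best matches a rounded estimate.

```python
def add_by_estimation(a: str, b: str) -> float:
    """Add two decimal strings the 'mental math' way described above."""
    # Assumes both addends have the same number of decimal places.
    digits = int(a.replace(".", "")) + int(b.replace(".", ""))   # e.g. 325 + 498 = 823
    estimate = round(float(a)) + round(float(b))                  # e.g. 3 + 5 = 8
    # Slide a decimal point through the digit string and keep the value
    # that lands closest to the estimate.
    s = str(digits)
    candidates = [float(f"{s[:i]}.{s[i:]}") for i in range(1, len(s) + 1)]
    return min(candidates, key=lambda v: abs(v - estimate))

print(add_by_estimation("3.25", "4.98"))   # 8.23
print(add_by_estimation("49.2", "20.1"))   # 69.3
```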
|Decimal Addition (range 0.1 to 0.9)||A B C D E F G H I J All|
|Decimal Addition (range 1.1 to 9.9)||A B C D E F G H I J All|
|Decimal Addition (range 10.1 to 99.9)||A B C D E F G H I J All|
|Decimal Addition (range 0.01 to 0.99)||A B C D E F G H I J All|
|Decimal Addition (range 1.01 to 9.99)||A B C D E F G H I J All|
|Decimal Addition (range 10.01 to 99.99)||A B C D E F G H I J All|
|Add Decimal Tenths||A B C D E F G H I J|
|Add Decimal Hundredths||A B C D E F G H I J|
|Add Decimal Thousandths||A B C D E F G H I J|
|Add Decimal Ten Thousandths||A B C D E F G H I J|
Subtracting Decimals Worksheets
Have you thought about using base ten blocks for decimal subtraction? Just redefine the blocks, so the big block is a one, the flat is a tenth, the rod is a hundredth and the little cube is a thousandth. Model and subtract decimals using base ten blocks, so students can "see" how decimals really work.
Adding and Subtracting Decimals Worksheets
Adding and subtracting decimals is fairly straightforward when all the decimals are lined up. Use these worksheets to ensure students understand where the decimal is placed when adding numbers. A wonderful strategy for placing the decimal is to use estimation. For example, if the question is 49.2 + 20.1, the answer without the decimal is 693. Estimate by rounding 49.2 to 50 and 20.1 to 20. 50 + 20 = 70. The decimal in 693 must be placed between the 9 and the 3 as in 69.3 to make the number close to the estimate of 70.
The above strategy will go a long way in students understanding operations with decimals, but it is also important that they have a strong foundation in place value and a proficiency with efficient strategies or algorithms to be completely successful with these questions. As with any math skill, it is not wise to present this to students until they have the necessary prerequisite skills and knowledge.
|Add and Subtract Decimal Tenths||A B C D E F G H I J|
|Add and Subtract Decimal Hundredths||A B C D E F G H I J|
|Add and Subtract Decimal Thousandths||A B C D E F G H I J|
|Add and Subtract Decimal Ten Thousandths||A B C D E F G H I J|
Multiplying Decimals Worksheets
Dividing Decimals Worksheets
Converting Decimals Worksheets
|Converting Fractions to Decimals||A B C D E F G H I J|
|Converting Decimals to Fractions||A B C D E F G H I J|
|Converting Fractions to Hundredths||A B C D E F G H I J|
|Converting Between Fractions, Decimals, Percents and Ratios||A B C D E F G H I J All|
Order of Operations with Decimals Worksheets
Order of Operations with Fractions & Decimals
|Fractions and Decimals Mixed||A B C D E|
|Fractions and Decimals Mixed w/ Negatives||A| | http://www.math-drills.com/decimal.shtml | 13 |
50 | How to Find the Area between Two Curves
To find the area between two curves, you need to come up with an expression for a narrow rectangle that sits on one curve and goes up to another.
from x = 0 to x = 1:
To get the height of the representative rectangle in the figure, subtract the y-coordinate of its bottom from the y-coordinate of its top — that’s
Its base is the infinitesimal dx. So, because area equals height times base,
Now you just add up the areas of all the rectangles from 0 to 1 by integrating:
Now to make things a little more twisted, in the next problem the curves cross (see the following figure). When this happens, you have to divide the total shaded area into separate regions before integrating. Try this one:
from x = 0 to x = 2.
Determine where the curves cross.
They cross at (1, 1) — what an amazing coincidence! So you’ve got two separate regions — one from 0 to 1 and another from 1 to 2.
Figure the area of the region on the left.
Figure the area of the region on the right.
Add up the areas of the two regions to get the total area.
≈ 3.11 square units
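The two curves in this example were given in a figure that is not reproduced here, so the snippet below uses a different, made-up pair of curves that also cross at (1, 1); it only illustrates the same procedure — find the crossing, split the interval there, and integrate top-minus-bottom on each piece (SciPy's quad does the integration).

```python
from scipy.integrate import quad

f = lambda x: x ** 0.5   # hypothetical curve that is on top from 0 to 1
g = lambda x: x ** 2     # hypothetical second curve; the two cross at (1, 1)

left, _ = quad(lambda x: f(x) - g(x), 0, 1)    # region where f is on top
right, _ = quad(lambda x: g(x) - f(x), 1, 2)   # region where g is on top
print(left, right, left + right)               # 1/3, ~1.114, ~1.448 square units
```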
Note that the height of a representative rectangle is always its top minus its bottom, regardless of whether these numbers are positive or negative. For instance, a rectangle that goes from 20 up to 30 has a height of 30 – 20, or 10; a rectangle that goes from –3 up to 8 has a height of 8 – (–3), or 11; and a rectangle that goes from –15 up to –10 has a height of –10 – (–15), or 5.
If you think about this top-minus-bottom method for figuring the height of a rectangle, you can now see — assuming you didn’t already see it — why the definite integral of a function counts area below the x-axis as negative. For example, consider the following figure.
What’s the shaded area? Hint: it’s not
If you want the total area of the shaded region shown in the figure, you have to divide the shaded region into two separate pieces like you did in the last problem.
For the first piece, from 0 to pi, a representative rectangle has a height equal to the function itself, y = sin (x), because its top is on the function and its bottom is at zero — and of course, anything minus zero is itself. So the area of this first piece is given by the ordinary definite integral of sin(x) from 0 to pi, which works out to 2.
For the second piece, from pi to 2pi, the top of a representative rectangle is at zero — recall that the x-axis is the line y = 0 — and its bottom is on y = sin(x), so its height (given, of course, by top minus bottom) is 0 – sin(x), or just –sin(x). So, to get the area of this second piece, you figure the definite integral of the negative of the function, –sin(x), from pi to 2pi — which also works out to 2.
Because this negative integral gives you the ordinary, positive area of the piece below the x-axis, the positive definite integral of sin(x) from pi to 2pi
gives a negative area. That's why if you figure the definite integral of sin(x) from 0 to 2pi
over the entire span, the piece below the x-axis counts as a negative area, and the answer gives you the net of the area above the x-axis minus the area below the axis — rather than the total shaded area. Clear as mud? | http://www.dummies.com/how-to/content/how-to-find-the-area-between-two-curves.navId-403863.html | 13 |
95 | The radian is the standard unit of angular measure, used in many areas of mathematics. An angle's measurement in radians is numerically equal to the length of a corresponding arc of a unit circle, so one radian is just under 57.3 degrees (when the arc length is equal to the radius). The unit was formerly an SI supplementary unit, but this category was abolished in 1995 and the radian is now considered an SI derived unit. The SI unit of solid angle measurement is the steradian.
The radian is represented by the symbol "rad" or, more rarely, by the superscript c (for "circular measure"). For example, an angle of 1.2 radians would be written as "1.2 rad" or "1.2c" (the latter symbol is often mistaken for a degree: "1.2°").
Radian describes the plane angle subtended by a circular arc as the length of the arc divided by the radius of the arc. One radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. More generally, the magnitude in radians of such a subtended angle is equal to the ratio of the arc length to the radius of the circle; that is, θ = s /r, where θ is the subtended angle in radians, s is arc length, and r is radius. Conversely, the length of the enclosed arc is equal to the radius multiplied by the magnitude of the angle in radians; that is, s = rθ. As the ratio of two lengths, the radian is a "pure number" that needs no unit symbol, and in mathematical writing the symbol "rad" is almost always omitted. In the absence of any symbol radians are assumed, and when degrees are meant the symbol ° is used.
It follows that the magnitude in radians of one complete revolution (360 degrees) is the length of the entire circumference divided by the radius, or 2πr /r, or 2π. Thus 2π radians is equal to 360 degrees, meaning that one radian is equal to 180/π degrees.
The concept of radian measure, as opposed to the degree of an angle, is normally credited to Roger Cotes in 1714. He had the radian in everything but name, and he recognized its naturalness as a unit of angular measure. The idea of measuring angles by the length of the arc was used already by other mathematicians. For example al-Kashi (c. 1400) used so-called diameter parts as units where one diameter part was 1/60 radian and they also used sexagesimal subunits of the diameter part.
The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson (brother of Lord Kelvin) at Queen's College, Belfast. He used the term as early as 1871, while in 1869, Thomas Muir, then of the University of St Andrews, vacillated between rad, radial and radian. In 1874, Muir adopted radian after a consultation with James Thomson.
Conversion between radians and degrees
As stated, one radian is equal to 180/π degrees. Thus, to convert from radians to degrees, multiply by 180/π.
Conversely, to convert from degrees to radians, multiply by π/180.
Radians can be converted to turns by dividing the number of radians by 2π.
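These conversions are one-liners in code; the small Python sketch below does the multiplications just described, and the standard library's math.radians and math.degrees give the same results.

```python
import math

deg = 57.29577951308232            # an angle in degrees (this one is 180/pi)
rad = deg * math.pi / 180          # degrees -> radians: multiply by pi/180
back = rad * 180 / math.pi         # radians -> degrees: multiply by 180/pi
turns = rad / (2 * math.pi)        # radians -> turns: divide by 2*pi

print(rad, back, turns)            # 1.0, 57.295..., 0.1591...
print(math.radians(deg), math.degrees(rad))   # same results via the standard library
```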
Radian to degree conversion derivation
We know that the length of the circumference of a circle is given by 2πr, where r is the radius of the circle.
So, we can very well say that the following equivalent relation is true:
360° corresponds to an arc length of 2πr
[since a 360° sweep is needed to draw a full circle].
By definition of radian, we can formulate that a full circle represents:
2πr / r rad = 2π rad.
Combining both the above relations we can say:
2π rad = 360°, and therefore 1 rad = 360°/(2π) = 180°/π.
Conversion between radians and grads
The table shows the conversion of some common angles.
Advantages of measuring in radians
In calculus and most other branches of mathematics beyond practical geometry, angles are universally measured in radians. This is because radians have a mathematical "naturalness" that leads to a more elegant formulation of a number of important results.
Most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. For example, the use of radians leads to the simple limit formula
lim(x→0) sin(x)/x = 1,
which is the basis of many other identities in mathematics, including
Because of these and other properties, the trigonometric functions appear in solutions to mathematical problems that are not obviously related to the functions' geometrical meanings (for example, the solutions to the differential equation y″ = −y, the evaluation of the integral ∫ dx/(1 + x²), and so on). In all such cases it is found that the arguments to the functions are most naturally written in the form that corresponds, in geometrical contexts, to the radian measurement of angles.
The trigonometric functions also have simple and elegant series expansions when radians are used; for example, the following Taylor series for sin x:
sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
If x were expressed in degrees then the series would contain messy factors involving powers of π/180: if x is the number of degrees, the number of radians is y = πx/180, so
sin x° = sin y = (π/180)x − (π/180)³ x³/3! + (π/180)⁵ x⁵/5! − ⋯
Mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) are, again, elegant when the functions' arguments are in radians and messy otherwise.
Although the radian is a unit of measure, it is a dimensionless quantity. This can be seen from the definition given earlier: the angle subtended at the centre of a circle, measured in radians, is equal to the ratio of the length of the enclosed arc to the length of the circle's radius. Since the units of measurement cancel, this ratio is dimensionless.
Although polar and spherical coordinates use radians to describe coordinates in two and three dimensions, the unit is derived from the radius coordinate, so the angle measure is still dimensionless.
Use in physics
The radian is widely used in physics when angular measurements are required. For example, angular velocity is typically measured in radians per second (rad/s). One revolution per second is equal to 2π radians per second.
Similarly, angular acceleration is often measured in radians per second per second (rad/s2).
For the purpose of dimensional analysis, the units are s⁻¹ and s⁻² respectively.
Likewise, the phase difference of two waves can also be measured in radians. For example, if the phase difference of two waves is (k·2π) radians, where k is an integer, they are considered in phase, whilst if the phase difference of two waves is (k·2π + π), where k is an integer, they are considered in antiphase.
Multiples of radian units
Metric prefixes have limited use with radians, and none in mathematics.
There are 2π × 1000 milliradians (≈ 6283.185 mrad) in a circle. So a trigonometric milliradian is just under 1⁄6283 of a circle. This “real” trigonometric unit of angular measurement of a circle is in use by telescopic sight manufacturers using (stadiametric) rangefinding in reticles. The divergence of laser beams is also usually measured in milliradians.
An approximation of the trigonometric milliradian (0.001 rad), known as the (angular) mil, is used by NATO and other military organizations in gunnery and targeting. Each angular mil represents 1⁄6400 of a circle and is 1-⅞% smaller than the trigonometric milliradian. For the small angles typically found in targeting work, the convenience of using the number 6400 in calculation outweighs the small mathematical errors it introduces. In the past, other gunnery systems have used different approximations to 1⁄2000π; for example Sweden used the 1⁄6300 streck and the USSR used 1⁄6000. Being based on the milliradian, the NATO mil subtends roughly 1 m at a range of 1000 m (at such small angles, the curvature is negligible).
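The claim that a milliradian subtends roughly 1 m at 1000 m, and the size of the error introduced by the 1/6400 NATO mil, can be checked directly; the short Python sketch below uses the exact tangent rather than the small-angle shortcut.

```python
import math

rng = 1000.0                                     # range in metres
true_mrad = 1e-3                                 # one trigonometric milliradian
nato_mil = 2 * math.pi / 6400                    # one NATO mil in radians

print(rng * math.tan(true_mrad))                 # ~1.0000 m subtended at 1 km
print(rng * math.tan(nato_mil))                  # ~0.9817 m for the NATO mil
print((true_mrad - nato_mil) / true_mrad * 100)  # the NATO mil is ~1.8 % smaller
```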
Smaller units like microradians (μrads) and nanoradians (nrads) are used in astronomy, and can also be used to measure the beam quality of lasers with ultra-low divergence. Similarly, the prefixes smaller than milli- are potentially useful in measuring extremely small angles.
- Angular mil - military measurement
- Harmonic analysis
- Angular frequency
- Steradian - the "square radian"
- O'Connor, J. J.; Robertson, E. F. (February 2005). "Biography of Roger Cotes". The MacTutor History of Mathematics.
- Luckey, Paul (1953) [Translation of 1424 book]. In Siggel, A. Der Lehrbrief über den kreisumfang von Gamshid b. Mas'ud al-Kasi [Treatise on the Circumference of al-Kashi] (6). Berlin: Akademie Verlag. p. 40.
- Cajori, Florian (1929). History of Mathematical Notations 2. pp. 147–148. ISBN 0-486-67766-4.
- Muir, Thos. (1910). "The Term "Radian" in Trigonometry". Nature 83 (2110): 156. Bibcode:1910Natur..83..156M. doi:10.1038/083156a0.Thomson, James (1910). "The Term "Radian" in Trigonometry". Nature 83 (2112): 217. Bibcode:1910Natur..83..217T. doi:10.1038/083217c0.Muir, Thos. (1910). "The Term "Radian" in Trigonometry". Nature 83 (2120): 459–460. Bibcode:1910Natur..83..459M. doi:10.1038/083459d0.
- Miller, Jeff (Nov. 23, 2009). "Earliest Known Uses of Some of the Words of Mathematics". Retrieved Sep. 30, 2011.
- For a debate on this meaning and use see: Brownstein, K. R. (1997). "Angles—Let's treat them squarely". American Journal of Physics 65 (7): 605. Bibcode:1997AmJPh..65..605B. doi:10.1119/1.18616., Romain, J.E. (1962). "Angles as a fourth fundamental quantity". Journal of Research of the National Bureau of Standards-B. Mathematics and Mathematical Physics 66B (3): 97., LéVy-Leblond, Jean-Marc (1998). "Dimensional angles and universal constants". American Journal of Physics 66 (9): 814. Bibcode:1998AmJPh..66..814L. doi:10.1119/1.18964., and Romer, Robert H. (1999). "Units—SI-Only, or Multicultural Diversity?". American Journal of Physics 67: 13. Bibcode:1999AmJPh..67...13R. doi:10.1119/1.19185.
|Wikibooks has a book on the topic of: Trigonometry/Radian and degree measures|
|Look up radian in Wiktionary, the free dictionary.| | http://en.wikipedia.org/wiki/Radian | 13 |
52 | From Wikipedia, the free encyclopedia
|SI units||30.857×10¹² km||30.857×10¹⁵ m|
|Astronomical units||206.26×10³ AU||3.26156 ly|
|US customary / Imperial units||19.174×10¹² mi||101.24×10¹⁵ ft|
The name parsec is "an abbreviated form of 'a distance corresponding to a parallax of one arcsecond'." It was coined in 1913 at the suggestion of British astronomer Herbert Hall Turner. A parsec is the distance from the Sun to an astronomical object which has a parallax angle of one arcsecond (1⁄3,600 of a degree). In other words, imagine that a straight line is drawn from the object to the Earth, a second line is drawn from the Earth to the Sun, and a third line is drawn from the object to the Sun that is perpendicular to the line drawn from the Earth to the Sun. Now, if the angle formed between the line drawn from the object to the Earth and the line drawn from the object to the Sun is exactly one arcsecond, then the object's distance from the Sun would be exactly one parsec.
The angle can be measured by observing the object's precise location in the sky. It becomes quite convenient for astronomers to determine the object's distance by simply measuring its apparent motion in the sky over a 6-month interval. After six months, the Earth is at the opposite end of its orbit around the Sun, a distance equal to twice the base of the right triangle described above. In this case, the object's distance in parsecs is numerically (though not dimensionally) equal to the reciprocal of half the number of arcseconds by which its position appears to change (d = 1/p, where p is the parallax angle in arcseconds). The less an object appears to have moved, the farther it is from the Sun, and vice versa.
Equivalencies in other units
1 parsec ≡ 648000 / π astronomical units
History and derivation
The parsec is equal to the length of the adjacent side of an imaginary right triangle in space. The two dimensions on which this triangle is based are the angle (which is defined as 1 arcsecond), and the opposite side (which is defined as 1 astronomical unit, which is the distance from the Earth to the Sun). Using these two measurements, along with the rules of trigonometry, the length of the adjacent side (the parsec) can be found.
One of the oldest methods for astronomers to calculate the distance to a star was to record the difference in angle between two measurements of the position of the star in the sky. The first measurement was taken from the Earth on one side of the Sun, and the second was taken half a year later when the Earth was on the opposite side of the Sun. The distance between the two positions of the Earth for the measurements was known to be twice the distance between the Earth and the Sun. The difference in angle between the two measurements was known to be twice the parallax angle, which is formed by lines from the Sun and Earth to the star at the vertex. Then the distance to the star could be calculated using trigonometry. The first successful direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the three and a half parsec distance of 61 Cygni.
The parallax of a star is taken to be half of the angular distance that a star appears to move relative to the celestial sphere as Earth orbits the Sun. Equivalently, it is the subtended angle, from that star's perspective, of the semi-major axis of Earth's orbit. The star, the sun and the earth form the corners of an imaginary right triangle in space: the right angle is the corner at the sun, and the corner at the star is the parallax angle. The length of the opposite side to the parallax angle is the distance from the Earth to the Sun (defined as 1 astronomical unit (AU)), and the length of the adjacent side gives the distance from the sun to the star. Therefore, given a measurement of the parallax angle, along with the rules of trigonometry, the distance from the sun to the star can be found. A parsec is defined as the length of the adjacent side of this right triangle in space when the parallax angle is 1 arcsecond.
The use of the parsec as a unit of distance follows naturally from Bessel's method, since distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds (i.e. if the parallax angle is 1 arcsecond, the object is 1 pc distant from the sun; If the parallax angle is 0.5 arcsecond, the object is 2 pc distant; etc.). No trigonometric functions are required in this relationship because the very small angles involved mean that the approximate solution of the skinny triangle can be applied.
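As a quick illustration of the reciprocal rule just described, here is a minimal Python sketch; the function name and the sample parallax values are illustrative assumptions, not part of the source text:

```python
def distance_parsecs(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds (d = 1/p)."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

print(distance_parsecs(1.0))    # 1.0 pc
print(distance_parsecs(0.5))    # 2.0 pc
# 61 Cygni's parallax is roughly 0.287 arcseconds, giving ~3.5 pc,
# consistent with the Bessel result quoted above.
print(distance_parsecs(0.287))  # ~3.48 pc
```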
Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed his concern for the need of a name for that unit of distance. He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. It was Turner's proposal that stuck.
Calculating the value of a parsec
In the diagram above (not to scale), S represents the Sun, and E the Earth at one point in its orbit. Thus the distance ES is one astronomical unit (AU). The angle SDE is one arcsecond (1/3600 of a degree) so by definition D is a point in space at a distance of one parsec from the Sun. By trigonometry, the distance SD is given by SD = ES / tan 1″.
Using the small-angle approximation, by which the sine (and, hence, the tangent) of an extremely small angle is essentially equal to the angle itself (in radians), this becomes SD ≈ ES / (1″ in radians) = ES × 648,000/π ≈ 206,264.8 AU.
One AU ≈ 149597870700 metres, so 1 parsec ≈ 3.085678×10^16 m ≈ 3.261564 ly.
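The figures quoted above can be checked directly from the definition. The sketch below (variable names are our own) uses only the AU value given in the text, comparing the exact trigonometric form with the small-angle form:

```python
import math

AU = 149_597_870_700.0             # metres, as quoted above
arcsec = math.radians(1.0 / 3600)  # one arcsecond in radians

pc_exact = AU / math.tan(arcsec)    # exact definition: SD = ES / tan(1")
pc_approx = AU * 648_000 / math.pi  # small-angle form: 648000/pi AU

print(pc_exact)               # ~3.0857e16 m
print(pc_approx)              # agrees with the exact value to many significant figures
print(pc_exact / 9.4607e15)   # ~3.26 light-years (taking 1 ly ≈ 9.4607e15 m)
```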
A corollary is that 1 parsec is also the distance from which a disc with a diameter of 1 AU must be viewed for it to have an angular diameter of 1 arcsecond (by placing the observer at D and a diameter of the disc on ES).
Usage and measurement
The parallax method is the fundamental calibration step for distance determination in astrophysics; however, the accuracy of ground-based telescope measurements of parallax angle is limited to about 0.01 arcseconds, and thus to stars no more than 100 pc distant. This is because the Earth’s atmosphere limits the sharpness of a star's image. Space-based telescopes are not limited by this effect and can accurately measure distances to objects beyond the limit of ground-based observations. Between 1989 and 1993, the Hipparcos satellite, launched by the European Space Agency (ESA), measured parallaxes for about 100,000 stars with an astrometric precision of about 0.97 milliarcseconds, and obtained accurate measurements for stellar distances of stars up to 1,000 pc away. NASA's FAME satellite was to have been launched in 2004, to measure parallaxes for about 40 million stars with sufficient precision to measure stellar distances of up to 2,000 pc. However, the mission's funding was withdrawn by NASA in January 2002. ESA's Gaia satellite, which was due to be launched in late 2012, but has been pushed to August 2013, is intended to measure one billion stellar distances to within 20 microarcseconds, producing errors of 10% in measurements as far as the Galactic Center, about 8,000 pc away in the constellation of Sagittarius.
Distances in parsecs
Distances less than a parsec
Distances measured in fractions of a parsec usually involve objects within a single star system. So, for example:
- One astronomical unit (AU), the distance from the Sun to the Earth, is just under 0.000005 parsecs (150,000,000 km; 93,000,000 mi).
- The most distant space probe, Voyager 1, was 0.0006 parsecs (0.002 ly) from Earth as of May 2013. It took Voyager 35 years to cover that distance.
- The Oort cloud is estimated to be approximately 0.6 parsecs (2.0 ly) in diameter.
Parsecs and kiloparsecs
Distances measured in parsecs (pc) include distances between nearby stars, such as those in the same spiral arm or globular cluster. A distance of 1,000 parsecs (3,300 ly) is commonly denoted by the kiloparsec (kpc). Astronomers typically use kiloparsecs to measure distances between parts of a galaxy, or within groups of galaxies. So, for example:
- One parsec is approximately 3.26 lightyears.
- The nearest known star to the Earth, other than the Sun, Proxima Centauri, is 1.3 parsecs (4.24 ly) away.
- The distance to the open cluster Pleiades is 130 ± 10 pc (420 ± 33 ly).
- The center of the Milky Way is more than 8 kiloparsecs (26,000 ly) from the Earth, and the Milky Way is roughly 34 kpc (110,000 ly) across.
- The Andromeda Galaxy (M31) is ~780 kpc (2,500,000 ly) away from the Earth.
Megaparsecs and gigaparsecs
A distance of one million parsecs (3.3 million light-years or 3.3 Mly) is commonly denoted by the megaparsec (Mpc). Astronomers typically measure the distances between neighbouring galaxies and galaxy clusters in megaparsecs.
Galactic distances are sometimes given in units of Mpc/h (as in "50/h Mpc"). h is a parameter in the range [0.5,0.75] reflecting the uncertainty in the value of the Hubble constant H for the rate of expansion of the universe: h = H / (100 km/s/Mpc). The Hubble constant becomes relevant when converting an observed redshift z into a distance d using the formula d ≈ (c / H) × z.
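A hedged sketch of the two conversions mentioned in this paragraph follows; the chosen values of h and z are arbitrary examples rather than measurements:

```python
c = 299_792.458  # speed of light, km/s

def mpc_over_h_to_mpc(d_over_h, h):
    """Convert a distance quoted as 'd/h Mpc' to Mpc for a given value of h."""
    return d_over_h / h

def redshift_distance_mpc(z, h):
    """Low-redshift Hubble-law distance d ~ (c/H) * z, with H = 100*h km/s/Mpc."""
    H = 100.0 * h
    return (c / H) * z

print(mpc_over_h_to_mpc(50, 0.7))        # "50/h Mpc" is ~71 Mpc if h = 0.7
print(redshift_distance_mpc(0.01, 0.7))  # ~43 Mpc for z = 0.01
```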
One gigaparsec (Gpc) is one billion parsecs — one of the largest distance measures commonly used. One gigaparsec is about 3.3 billion light-years (3.3 Gly), or roughly one fourteenth of the distance to the horizon of the observable universe (dictated by the cosmic background radiation). Astronomers typically use gigaparsecs to measure large-scale structures such as the size of, and distance to, the CfA2 Great Wall; the distances between galaxy clusters; and the distance to quasars.
- The Andromeda Galaxy is about 0.78 Mpc (2.5 Mly) from the Earth.
- The nearest large galaxy cluster, the Virgo Cluster, is about 16.5 Mpc (54 Mly) from the Earth.
- The galaxy RXJ1242-11, observed to have a supermassive black hole core similar to the Milky Way's, is about 200 Mpc (650 Mly) from the Earth.
- The particle horizon (the boundary of the observable universe) has a radius of about 14.0 Gpc (46 Gly).
Volume units
To determine the number of stars in the Milky Way Galaxy, volumes in cubic kiloparsecs[a] (kpc3) are selected in various directions. All the stars in these volumes are counted and the total number of stars statistically determined. The number of globular clusters, dust clouds and interstellar gas is determined in a similar fashion. To determine the number of galaxies in superclusters, volumes in cubic megaparsecs[a] (Mpc3) are selected. All the galaxies in these volumes are classified and tallied. The total number of galaxies can then be determined statistically. The huge void in Boötes is measured in cubic megaparsecs. In cosmology, volumes of cubic gigaparsecs[a] (Gpc3) are selected to determine the distribution of matter in the visible universe and to determine the number of galaxies and quasars. The Sun is alone in its cubic parsec,[a] (pc3) but in globular clusters the stellar density per cubic parsec could be from 100 to 1,000.
1 pc³ ≈ 2.938×10^49 m³; 1 kpc³ ≈ 2.938×10^58 m³; 1 Mpc³ ≈ 2.938×10^67 m³; 1 Gpc³ ≈ 2.938×10^76 m³
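The cubic-volume figures in the note above follow directly from cubing the length of a parsec; a quick numerical check (assuming the parsec value derived earlier):

```python
pc = 3.0857e16  # metres

for name, length_pc in [("pc", 1e0), ("kpc", 1e3), ("Mpc", 1e6), ("Gpc", 1e9)]:
    volume_m3 = (length_pc * pc) ** 3
    print(f"1 {name}^3 ≈ {volume_m3:.3e} m^3")
# 1 pc^3 ≈ 2.938e+49 m^3, 1 kpc^3 ≈ 2.938e+58 m^3, and so on.
```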
- Dyson, F. W., Stars, Distribution and drift of, The distribution in space of the stars in Carrington's Circumpolar Catalogue. In: Monthly Notices of the Royal Astronomical Society, Vol. 73, p. 334–342. March 1913.
"There is a need for a name for this unit of distance. Mr. Charlier has suggested Siriometer ... Professor Turner suggests PARSEC, which may be taken as an abbreviated form of 'a distance corresponding to a parallax of one second'."
- Introduction to the Night Sky - Part III
- High Energy Astrophysics Science Archive Research Center (HEASARC). "Deriving the Parallax Formula". NASA's Imagine the Universe!. Astrophysics Science Division (ASD) at NASA's Goddard Space Flight Center. Retrieved 2011-11-26.
- Bessel, FW, "Bestimmung der Entfernung des 61sten Sterns des Schwans" (1838) Astronomische Nachrichten, vol. 16, pp. 65–96.
- Dyson, F. W., "The distribution in space of the stars in Carrington's Circumpolar Catalogue" (1913) Monthly Notices of the Royal Astronomical Society, vol. 73, pp. 334–42, p. 342 fn..
- Richard Pogge, Astronomy 162, Ohio State.
- jrank.org, Parallax Measurements
- "The Hipparcos Space Astrometry Mission". Retrieved August 28, 2007.
- Catherine Turon, From Hipparchus to Hipparcos
- FAME news, 25 January 2002.
- GAIA from ESA.
- "Galaxy structures: the large scale structure of the nearby universe". Retrieved May 22, 2007.
- Mei, S. et al 2007, ApJ, 655, 144
- "Misconceptions about the Big Bang". Retrieved January 8, 2010.
- Astrophysical Journal, Harvard
- Guidry, Michael. "Astronomical Distance Scales". Astronomy 162: Stars, Galaxies, and Cosmology. University of Tennessee, Knoxville. Retrieved 2010-03-26.
- Merrifield, Michael. "pc Parsec". Sixty Symbols. Brady Haran for the University of Nottingham. | http://wpedia.goo.ne.jp/enwiki/Parsecs | 13 |
75 | Mass flow rate
In physics and engineering, mass flow rate is the mass of a substance which passes through a given surface per unit of time. Its unit is kilogram per second in SI units, and slug per second or pound per second in US customary units. The common symbol is ṁ (an m with an overdot, pronounced "m-dot"), although sometimes μ (Greek lowercase mu) is used.
ṁ = dm/dt, i.e. the flow of mass m through a surface per unit time t.
The overdot on the m is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not simply the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow.
Alternative equations
Mass flow rate can also be calculated by ṁ = ρ · Q = ρ · v · A = jm · A, where:
- ρ = mass density of the fluid,
- v = velocity field of the mass elements flowing,
- A = cross-sectional vector area/surface,
- Q = volumetric flow rate,
- jm = mass flux.
The above equation is only true for a flat, plane area. In general, including cases where the area is curved, the equation becomes a surface integral: ṁ = ∬A ρ v · dA = ∬A jm · dA.
The area required to calculate the mass flow rate is real or imaginary, flat or curved, either as a cross-sectional area or a surface. E.g. for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically - ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered. The vector area is a combination of the magnitude of the area through which the mass passes, A, and a unit vector normal to the area, n̂. The relation is A = A n̂.
The reason for the dot product is as follows. The only mass flowing through the cross-section is the amount normal to the area, i.e. parallel to the unit normal. This amount is: ṁ = ρ v A cos θ
where θ is the angle between the unit normal and the velocity of mass elements. The amount passing through the cross-section is reduced by the factor cos θ; as θ increases, less mass passes through. All mass which passes in tangential directions to the area, that is perpendicular to the unit normal, doesn't actually pass through the area, so the mass passing through the area is zero. This occurs when θ = π/2: cos(π/2) = 0, so ṁ = 0.
These results are equivalent to the equation containing the dot product. Sometimes these equations are used to define the mass flow rate.
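As a concrete illustration of the flat-area form ṁ = ρ v A cos θ, here is a small Python sketch; the pipe radius, water density and flow speed are made-up example values:

```python
import math

def mass_flow_rate(rho, speed, area, angle_rad=0.0):
    """m_dot = rho * v * A * cos(theta); theta is the angle between v and the area normal."""
    return rho * speed * area * math.cos(angle_rad)

rho_water = 1000.0                 # kg/m^3
pipe_radius = 0.05                 # m (example value)
area = math.pi * pipe_radius ** 2  # cross-section of the pipe
v = 2.0                            # m/s, flow speed (example value)

print(mass_flow_rate(rho_water, v, area))               # ~15.7 kg/s, flow normal to the section
print(mass_flow_rate(rho_water, v, area, math.pi / 2))  # ~0 kg/s, flow tangential to the section
```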
In elementary classical mechanics, mass flow rate is encountered when dealing with objects of variable mass, such as a rocket ejecting spent fuel. Often, descriptions of such objects erroneously invoke Newton's second law F = d(mv)/dt by treating both the mass m and the velocity v as time-dependent and then applying the derivative product rule. A correct description of such an object requires the application of Newton's second law to the entire, constant-mass system consisting of both the object and its ejected mass.
Analogous quantities
In hydrodynamics, mass flow rate is the rate of flow of mass. In electricity, the rate of flow of charge is electric current.
See also
- Continuity equation
- Fluid dynamics
- Mass flow controller
- Mass flow meter
- Mass flux
- Orifice plate
- Thermal mass flow meter
- Volumetric flow rate
- Fluid Mechanics, M. Potter, D.C. Wiggert, Schaum's Outlines, McGraw Hill (USA), 2008, ISBN 978-0-07-148781-8
- Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray, ISBN 0-7195-3382-1
- Halliday; Resnick. Physics 1. p. 199. ISBN 0-471-03710-9. "It is important to note that we cannot derive a general expression for Newton's second law for variable mass systems by treating the mass in F = dP/dt = d(Mv) as a variable. [...] We can use F = dP/dt to analyze variable mass systems only if we apply it to an entire system of constant mass having parts among which there is an interchange of mass." [Emphasis as in the original] | http://en.wikipedia.org/wiki/Mass_flow_rate | 13 |
150 | What is PERFECT SQUARE TRINOMIAL DEFINITION?
Factoring a perfect square trinomial means looking for a pair of identical factors: the last term must be the square of some value and the middle term twice that value times the square root of the first term (taking the signs into account).
Definition of Perfect Square: a perfect square is a number made by squaring a whole number. Perfect square trinomial: trinomials of the form a² ± 2ab + b², which can be expressed as squares of binomials, are called perfect square trinomials.
Best Answer: a perfect square trinomial is one that factors into two identical factors, e.g. x^2 + 6x + 9 = (x+3)(x+3) = (x+3)^2; there is no such thing as a perfect square binomial; but ...
Question:i am supposed to somehow find out the definition of a perfect square binomial and a perfect square trinomial. but i have looked at google, ask, wiki, looked in books in my class, and asked my friends but they dont know either. so can someone please tell me the definition of a perfect ...
In mathematics, factorization (also factorisation in British English) or factoring is the decomposition of an object (for example, a number, a polynomial, or a matrix) into a product of other objects, or factors, which when multiplied together give the original. For example, the number 15 ...
Definition of Trinomial. A trinomial is a polynomial with three terms. ... Step 5: 16 should be added to the expression, x² + 8x, to create a perfect square trinomial.
Factoring perfect square trinomials Before we explain the straightforward way of factoring perfect square trinomials, we need to define the expression perfect square trinomial
A trinomial that is the square of a binomial is called a TRINOMIAL SQUARE. Trinomials that are perfect squares factor into either the square of a sum or the square of a difference. Recalling that (x + y)² = x² + 2xy + y² and (x − y)² = x² − 2xy + y², the form of a trinomial square is apparent.
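Pulling these scattered definitions together: a trinomial ax² + bx + c with a, c ≥ 0 is a perfect square exactly when b² = 4ac. A short Python sketch (the function name and the integer-only restriction are our own simplifications):

```python
import math

def as_perfect_square(a, b, c):
    """Return (p, q) such that a*x^2 + b*x + c == (p*x + q)^2, or None if not a perfect square trinomial."""
    if a < 0 or c < 0 or b * b != 4 * a * c:
        return None
    p, q = math.isqrt(a), math.isqrt(c)
    if p * p != a or q * q != c:
        return None  # this sketch only handles integer square roots
    return (p, q) if b >= 0 else (p, -q)

print(as_perfect_square(1, 6, 9))   # (1, 3)  -> x^2 + 6x + 9  = (x + 3)^2
print(as_perfect_square(4, 12, 9))  # (2, 3)  -> 4x^2 + 12x + 9 = (2x + 3)^2
print(as_perfect_square(1, 5, 9))   # None    -> not a perfect square trinomial
```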
18. THE SQUARE OF A BINOMIAL Perfect square trinomials. The square numbers. The square of a binomial. Geometrical algebra. 2nd level (a + b)³. The square of a trinomial
Explain what a perfect square trinomial is without using a polynomial to illustrate your definition.
Definition of factoring trinomials and related terms and concepts: Perfect Square Trinomial; Factoring a Difference of Two Squares.
Comments Video Transcript. Hello my name is Whitney and this video is going to define two types of perfect square trinomials. The first type is going to be written in the form of A plus B all quantity squared and the second type is going to be written in A minus B, quantity squared.
a trinomial name, as Rosa gallica pumila. Origin: 1665–75; tri- + (bi)nomial. Related form: trinomially, adverb.
Trinomial Definition: A polynomial consisting of 3 terms is a trinomial, such as ax² + bx + c, where ax², bx, and c are the 3 terms. For example: 3x + 4y² − 1 and 5m − 7n + x² are trinomials.
How to Factor Trinomials With Perfect Squares. ... Although this definition may seem confusing, perfect square trinomials are actually easy to recognize, because both their first and last terms are perfect squares.
Perfect square trinomials are trinomials of the form a² ± 2ab + b², which can be expressed as squares of binomials.
These perfect square trinomials are a special case of a trinomial that results from the square of a binomial. ... So if you're given something like this and asked to factor it or if you see the words "perfect square trinomial" think about this definition right here.
a²x² + 2abx + b², where a and b are any integers.
Illustrated Math Dictionary - Definition of Perfect Square ... Perfect Square: A number made by squaring a whole number. 16 is a perfect square because 4² = 16.
In elementary algebra, a trinomial is a polynomial consisting of three terms or monomials. A trinomial equation is a polynomial equation involving three terms. An example is the equation studied by Johann Heinrich Lambert in the 18th century.
Best Answer: in a difference of squares the middle terms cancel when the two factors are multiplied, so there is no middle term: a^2 - b^2 = (a - b)(a + b). The definition of a perfect square ...
Algebra Help: Factoring Perfect Square Trinomials ... Factoring Perfect Square Trinomials . In this lesson, we will learn how to factor perfect square trinomials.
Mathematics > General Math ... Okay so you want to find the square of this binomial (8x + 2)^2. The square of it is ... your book is definitely not unique in calling it ...
perfect trinomial square [¦pər·fikt trī¦nō·mē·əl ′skwer] (mathematics) A trinomial that is the exact square of a binomial.
What is a perfect square trinomial? ChaCha Answer: x² + 10x + 25 is called a perfect square trinomial. It is the square of a binomial.
Definitions: A quadratic trinomial is an expression of the form ax² + bx + c, ...
Examples of Perfect Square Trinomials. Given below are some examples on perfect square trinomials. Example 1: Express 25x² − 60xy + 36y² as the square of a binomial. Solution: 25x² − 60xy + 36y² = (5x)² − 2(5x)(6y) + (6y)² = (5x − 6y)². (Here, a = 5x, b = 6y and the middle term is negative.)
A perfect square trinomial is the product of two identical binomials: when factoring a quadratic gives two identical factors, that quadratic is a perfect square trinomial.
What is an example of a perfect square trinomial? An example would be (8x + 2)^2; multiplied out, it is 64x^2 + 32x + 4.
Note that this is a perfect square trinomial in which a = 2x and b = 3y. In factored form, we have 4x² + 12xy + 9y² = (2x + 3y)². Factor the trinomial 16u² + 24uv + 9v² (CHECK YOURSELF 6). Recognizing the same pattern can simplify the process of factoring perfect square trinomials.
Acronym Finder: PTS stands for Perfect Trinomial Square (mathematics). This definition appears very rarely
Before considering the technique of completing the square, we must define a perfect square trinomial. Perfect Square Trinomial. What happens when you square a binomial?
The trinomial is a perfect square. What binomial's square equal this? 2. What's the degree of the trinomial ? This is a page from the dictionary MATH SPOKEN HERE!, published in 1995 by MATHEMATICAL CONCEPTS, inc., ISBN: 0-9623593-5-1. You ...
| http://mrwhatis.com/perfect-square-trinomial-definition.html | 13
56 | |This Chapter uses voltage/time, V/t, graphs to describe the characteristics of different types of signals. The Practical introduces the oscilloscope, a key instrument for measuring and displaying V/t graphs.|
|Introducing signals||Making waves|
|Sine waves||Other signals|
|Listening to waves||LINKS . . .|
In electronic circuits things happen. Voltage/time, V/t, graphs provide a useful method of describing the changes which take place.
The diagram below shows the V/t graph which represents a DC signal:
This is a horizontal line a constant distance above the X-axis. In many circuits, fixed DC levels are maintained along power supply rails, or as reference levels with which other signals can be compared.
Compare this graph with the V/t graphs for several types of alternating, or AC, signals:
As you can see, the voltage levels change with time and alternate between positive values (above the X-axis) and negative values (below the X-axis). Signals with repeated shapes are called waveforms and include sine waves, square waves, triangular waves and sawtooth waves. A distinguishing feature of alternating waves is that equal areas are enclosed above and below the X-axis.
A sine wave has the same shape as the graph of the sine function used in trigonometry. Sine waves are produced by rotating electrical machines such as dynamos and power station turbines and electrical energy is transmitted to the consumer in this form. In electronics, sine waves are among the most useful of all signals in testing circuits and analysing system performance.
Look at the sine wave in more detail:
The terms defined below are needed to describe sine waves and other waveforms precisely:
1. Period: T : The period is the time taken for one complete cycle of a repeating waveform. The period is often thought of as the time interval between peaks, but can be measured between any two corresponding points in successive cycles.
2. Frequency: f : This is the number of cycles completed per second. The measurement unit for frequency is the hertz, Hz. 1 Hz = 1 cycle per second. If you know the period, the frequency of the signal can be calculated from: f = 1 / T
Conversely, the period is given by: T = 1 / f
Signals you are likely to use vary in frequency from about 0.1 Hz, through values in kilohertz, kHz (thousands of cycles per second) to values in megahertz, MHz (millions of cycles per second).
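The two reciprocal formulas above translate directly into code; a minimal sketch with example values of our own choosing:

```python
def frequency(period_s):
    """f = 1 / T, in hertz when T is in seconds."""
    return 1.0 / period_s

def period(frequency_hz):
    """T = 1 / f, in seconds when f is in hertz."""
    return 1.0 / frequency_hz

print(frequency(0.02))    # a 20 ms period gives 50 Hz
print(period(1_000_000))  # a 1 MHz signal has a period of 1e-06 s (1 microsecond)
```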
3. Amplitude: In electronics, the amplitude, or height, of a sine wave is measured in three different ways. The peak amplitude, Vp , is measured from the X-axis, 0 V, to the top of a peak, or to the bottom of a trough. (In physics 'amplitude' usually refers to peak amplitude.) The peak-to-peak amplitude, Vpp , is measured between the maximum positive and negative values. In practical terms, this is often the easier measurement to make. Its value is exactly twice Vp .
Although peak and peak-to-peak values are easily determined, it is often more useful to know the root mean square, or rms amplitude of the wave, where: Vrms = Vp / √2 ≈ 0.707 × Vp
What is rms amplitude and why is it important?
|KEY POINT:||The rms amplitude is the DC voltage which will deliver the same average power as the AC signal.|
To understand this, think about two lamps connected to alternative power supplies:
The brightness of the lamp illuminated from the AC supply looks constant but the current flowing in the lamp is changing all the time and alternates in direction, flowing first one way and then the other. There is no current flowing at the instant that the AC signal crosses the X-axis. What you see is the average brightness produced by the AC signal.
The second lamp is illuminated from a DC supply and its brightness really is constant because the current flowing is always the same. It is obviously possible to adjust the voltage of the DC supply until the two lamps are equally bright. When this happens, the DC supply is providing the same average power as the AC supply. At this point, the DC voltage is equal to the Vrms value for the AC signal.
A bit of mathematics is needed to explain why the equivalent DC value is called the root mean square value. What is important at this stage is to remember that the AC signal and its rms equivalent provide the same average power.
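The claim that the rms value delivers the same average power can be checked numerically: sample one cycle of a sine wave, take the square root of the mean of the squared samples, and compare with 0.707 × Vp. A sketch (the peak value and sample count are arbitrary choices):

```python
import math

Vp = 10.0      # peak amplitude, volts (example value)
N = 100_000    # samples over one full cycle
samples = [Vp * math.sin(2 * math.pi * k / N) for k in range(N)]

v_rms = math.sqrt(sum(v * v for v in samples) / N)  # root of the mean of the squares

print(v_rms)              # ~7.071 V
print(Vp / math.sqrt(2))  # 7.071... V  -> Vrms = 0.707 * Vp for a sine wave
```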
4. Phase: It is sometimes useful to divide a sine wave into degrees, °, as follows:
Remember that sine waves are generated by rotating electrical machines. A complete 360° turn of the voltage generator corresponds to one cycle of the sine wave. Therefore 180° corresponds to a half turn, 90° to a quarter turn and so on. Using this method, any point on the sine wave graph can be identified by a particular number of degrees through the cycle.
If two sine waves have the same frequency and occur at the same time, they are said to be in phase:
On the other hand, if the two waves occur at different times, they are said to be out of phase. When this happens, the difference in phase can be measured in degrees, and is called the phase angle, φ. As you can see, the two waves in part B are a quarter cycle out of phase, so the phase angle φ = 90°.
It can be helpful in understanding what is meant by 'frequency' and 'amplitude' to compare the sounds produced when different waves are played through a loudspeaker.
Not all frequencies are audible. The hi-fi range is defined as from 20 Hz to 20 kHz, approximately the same as the range of frequencies which can normally be detected. As you get older, you will find it more and more difficult to hear higher frequencies. Experience suggests that, by the time you are able to afford a decent hi-fi system, you will probably be unable to fully appreciate its performance.
|The pitch of a musical note is the same as its frequency||The intensity or loudness of a musical note is the same as its amplitude|
Your ears are particularly sensitive to sounds in the middle range, from about 500 Hz to 2 kHz, corresponding with the range of frequencies found in human speech. Telephone systems have a poor high frequency performance but do work effectively in this middle range.
When you design an alarm system with an audible output, it is important to keep the frequency of the alarm sounds within this middle range.
The graphs below show waveforms of different frequency and amplitude:
These sine wave signals produce a 'pure' sounding tone. If the amplitude is increased, the sound is louder. If the frequency is increased, the pitch of the sound is higher.
Other shapes of signal generate sounds with the same fundamental pitch, but can sound different. Compare the sine wave sounds with square wave signals at 500 Hz and 1 kHz.
The square wave sound is harsher because the signal contains additional frequencies which are multiples of the fundamental frequency. These additional frequencies are called harmonics. Sounds from different musical instruments are distinguished by their harmonic content.
Sine waves can be mixed with DC signals, or with other sine waves to produce new waveforms. Here is one example of a complex waveform:
'Complex' doesn't mean difficult to understand. A waveform like this can be thought of as consisting of a DC component with a superimposed AC component. It is quite easy to separate these two components using a capacitor, as will be explained in Chapter 5.
More dramatic results are obtained by mixing a sine wave of a particular frequency with exact multiples of the same frequency, in other words, by adding harmonics to the fundamental frequency. The V/t graphs below show what happens when a sine wave is mixed with its 3rd harmonic (3 times the fundamental frequency) at reduced amplitude, and subsequently with its 5th, 7th and 9th harmonics:
As you can see, as more odd harmonics are added, the waveform begins to look more and more like a square wave.
This surprising result illustrates a general principle first formulated by the French mathematician Joseph Fourier, namely that any complex waveform can be built up from a pure sine wave plus particular harmonics of the fundamental frequency. Square waves, triangular waves and sawtooth waves can all be produced in this way.
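The build-up of a square wave from odd harmonics can be reproduced by summing sine terms at 1, 3, 5, ... times the fundamental, each scaled by the reciprocal of its harmonic number. The sketch below is a minimal illustration; the choice of fundamental frequency and the number of harmonics are arbitrary:

```python
import math

def square_wave_approx(t, f0, n_harmonics):
    """Partial Fourier series of a square wave: sum of odd harmonics of f0, each scaled by 1/n."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # 1, 3, 5, 7, ...
        total += math.sin(2 * math.pi * n * f0 * t) / n
    return 4.0 / math.pi * total  # scaling so the sum approaches amplitude 1

f0 = 500.0   # Hz (example value)
t = 0.5e-3   # a quarter of the way through the 2 ms cycle (middle of the positive half)
for n_harmonics in (1, 2, 4, 50):
    print(n_harmonics, square_wave_approx(t, f0, n_harmonics))
# With more odd harmonics the value at mid-cycle settles towards +1, as for an ideal square wave.
```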
This part of the Chapter outlines the other types of signal you are going to meet. Circuits which generate these signals are versatile building blocks and many practical examples are given later in Design Electronics.
1. Square waves: Like sine waves, square waves are described in terms of period, frequency and amplitude:
Peak amplitude, Vp , and peak-to-peak amplitude, Vpp , are measured as you might expect. However, the rms amplitude, Vrms , is greater than that of a sine wave. Remember that the rms amplitude is the DC voltage which will deliver the same power as the signal. If a square wave supply is connected across a lamp, the current flows first one way and then the other. The current switches direction but its magnitude remains the same. In other words, the square wave delivers its maximum power throughout the cycle so that Vrms is equal to Vp . (If this is confusing, don't worry, the rms amplitude of a square wave is not something you need to think about very often.)
Although a square wave may change very rapidly from its minimum to maximum voltage, this change cannot be instaneous. The rise time of the signal is defined as the time taken for the voltage to change from 10% to 90% of its maximum value. Rise times are usually very short, with durations measured in nanoseconds (1 ns = 10-9 s), or microseconds (1 Ás = 10-6 s), as indicated in the graph.
2. Pulse waveforms: Pulse waveforms look similar to square waves, except that all the action takes place above the X-axis. At the beginning of a pulse, the voltage changes suddenly from a LOW level, close to the X-axis, to a HIGH level, usually close to the power supply voltage:
Sometimes, the 'frequency' of a pulse waveform is referred to as its repetition rate. As you would expect, this means the number of cycles per second, measured in hertz, Hz.
The HIGH time of the pulse waveform is called the mark, while the LOW time is called the space. The mark and space do not need to be of equal duration. The mark space ratio is given by: mark space ratio = mark time / space time
A mark space ratio = 1.0 means that the HIGH and LOW times are equal, while a mark space ratio = 0.5 indicates that the HIGH time is half as long as the LOW time:
A mark space ratio of 3.0 corresponds to a longer HIGH time, in this case, three times as long as the space.
Another way of describing the same types of waveform uses the duty cycle, where: duty cycle = mark time / (mark time + space time) × 100%
When the duty cycle is less than 50%, the HIGH time is shorter than the LOW time, and so on.
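A short sketch of the two definitions above, using example mark and space times of our own choosing (in milliseconds):

```python
def mark_space_ratio(mark, space):
    """HIGH time divided by LOW time."""
    return mark / space

def duty_cycle_percent(mark, space):
    """HIGH time as a percentage of the whole period."""
    return 100.0 * mark / (mark + space)

print(mark_space_ratio(1.0, 1.0), duty_cycle_percent(1.0, 1.0))  # 1.0, 50.0 %
print(mark_space_ratio(1.0, 2.0), duty_cycle_percent(1.0, 2.0))  # 0.5, ~33.3 %
print(mark_space_ratio(3.0, 1.0), duty_cycle_percent(3.0, 1.0))  # 3.0, 75.0 %
```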
A subsystem which produces a continuous series of pulses is called an astable. Chapter ? describes pulse waveforms in more detail and explains how to build a variety of astable circuits. As you will discover, it is useful to be able to change the duration of the pulse to suit particular applications. Other pulse-producing subsystems include monostables, Chapter ?, and bistables, Chapter ?.
3. Ramps: A voltage ramp is a steadily increasing or decreasing voltage, as shown below:
The ramp rate is measured in units of volts per second, V/s. Such changes cannot continue indefinitely, but stop when the voltage reaches a saturation level, usually close to the power supply voltage. Ramp generator circuits are described in Chapter ?.
4. Triangular and sawtooth waves: These waveforms consist of alternate positive-going and negative-going ramps. In a triangular wave, the rate of voltage change is equal during the two parts of each cycle, while in a sawtooth wave, the rates of change are unequal (see graph at the beginning of the Chapter). Sawtooth generator circuits are an essential building block in oscilloscope and television systems.
5. Audio signals: As already mentioned, sound frequencies which can be detected by the human ear vary from a lower limit of around 20 Hz to an upper limit of about 20 kHz. A sound wave amplified and played through a loudspeaker gives a pure audio tone. Audio signals like speech or music consist of many different frequencies. Sometimes it is possible to see a dominant frequency in the V/t graph of a musical signal, but it is clear that other frequencies are present.
6. Noise: A noise signal consists of a mixture of frequencies with random amplitudes:
Noise can originate in various ways. For example, heat energy increases the random motion of electrons and results in the generation of thermal noise in all components, although some components are 'noisier' than others. Additional sources of noise include radio signals, which are detected and amplified by many circuits, not just by radio receivers. Interference is caused by the switching of mains appliances, and 'spikes' and 'glitches' are caused by rapid changes in current and voltage elsewhere in an electronic system.
Designers try to eliminate or reduce noise in most circuits, but special noise generators are used in electronic music synthesisers and for other musical effects.
Click on the icon to transfer to the WWW pages:
Heinrich Hertz: brief biography
Joseph Fourier: biography
Demonstrations in auditory perception from McGill University:
Fascinating information on the perception of speech:
| http://www.doctronics.co.uk/signals.htm | 13
113 | Induction cooking uses induction heating to directly heat a cooking vessel, as opposed to using heat transfer from electrical coils or burning gas as with a traditional cooking stove. For nearly all models of induction cooktop, a cooking vessel must be made of a ferromagnetic metal, or placed on an interface disk which enables non-induction cookware to be used on induction cooking surfaces.
In an induction cooker, a coil of copper wire is placed underneath the cooking pot. An alternating electric current flows through the coil, which produces an oscillating magnetic field. This field induces an electric current in the pot. Current flowing in the metal pot produces resistive heating which heats the food. While the current is large, it is produced by a low voltage.
An induction cooker is faster and more energy-efficient than a traditional electric cooking surface. It allows instant control of cooking energy similar to gas burners. Other cooking methods use flames or red-hot heating elements; induction heating heats only the pot. Because the surface of the cook top is heated only by contact with the vessel, the possibility of burn injury is significantly less than with other methods. The induction effect does not directly heat the air around the vessel, resulting in further energy efficiencies. Cooling air is blown through the electronics but emerges only a little warmer than ambient temperature.
The magnetic properties of a steel vessel concentrate the induced current in a thin layer near its surface, which makes the heating effect stronger. In non-magnetic materials like aluminum, the magnetic field penetrates too far, and the induced current encounters little resistance in the metal. At least one high-frequency "all metal" cooker is available, that works with lower efficiency on non-magnetic metal cookware.
An induction cooker transfers electrical energy by induction from a coil of wire into a metal vessel that must be ferromagnetic. The coil is mounted under the cooking surface, and a large alternating current is passed through it. The current creates a changing magnetic field. When an electrically conductive pot is brought close to the cooking surface, the magnetic field induces an electrical current, called an "eddy current", in the pot. The eddy current, flowing through the electrical resistance, produces heat; the pot gets hot and heats its contents by heat conduction.
The cooking vessel is made of stainless steel or iron. The increased magnetic permeability of the material decreases the skin depth, concentrating the current near the surface of the metal, and so the electrical resistance will be further increased. Some energy will be dissipated wastefully by the current flowing through the resistance of the coil. To reduce the skin effect and consequent heat generation in the coil, it is made from litz wire, which is a bundle of many smaller insulated wires in parallel. The coil has many turns, while the bottom of the pot effectively forms a single shorted turn. This forms a transformer that steps down the voltage and steps up the current. The resistance of the pot, as viewed from the primary coil, appears larger. In turn, most of the energy becomes heat in the high-resistance steel, while the driving coil stays cool.
The cooking surface is made of a glass-ceramic material which is a poor heat conductor, so only a little heat is lost through the bottom of the pot. In normal operation the cooking surface stays cool enough to touch without injury after the cooking vessel is removed.
Units may have one, two, three, four or five induction zones, but four (normally in a 30-inch-wide unit) is the most common in the US and Europe. Two coils are most common in Hong Kong and three are most common in Japan. Some have touch-sensitive controls. Some induction stoves have a memory setting, one per element, to control the time that heat is applied. At least one manufacturer makes a "zoneless" induction cooking surface with multiple induction coils. This allows up to five utensils to be used at once anywhere on the cooking surface, not just on pre-defined zones.
Small stand-alone portable induction cookers are relatively inexpensive, priced from around US$20 in some markets.
Cookware for an induction cooking surface will be generally the same as used on other stoves. Some cookware or packaging is marked with symbols to indicate compatibility with induction, gas, or electric heat. Induction cooking surfaces work well with any pans with a high ferrous metal content at the base. Cast iron pans and any black metal or iron pans will work on an induction cooking surface. Stainless steel pans will work on an induction cooking surface if the base of the pan is a magnetic grade of stainless steel. If a magnet sticks well to the sole of the pan, it will work on an induction cooking surface. An "all-metal" cooker will work with non-ferrous cookware, but available models are limited.
For frying, a pan with a base that is a good heat conductor is needed to spread the heat quickly and evenly. The sole of the pan will be either a steel plate pressed into the aluminum, or a layer of stainless steel over the aluminum. The high thermal conductivity of aluminum pans makes the temperature more uniform across the pan. Stainless frying pans with an aluminum base will not have the same temperature at their sides as an aluminum sided pan will have. Cast iron frying pans work well with induction cooking surfaces but the material is not as good a thermal conductor as aluminum.
When boiling water, the circulating water spreads the heat and prevents hot spots. For products such as sauces, it is important that at least the base of the pan incorporates a good heat conducting material to spread the heat evenly. For delicate products such as thick sauces, a pan with aluminum throughout is better, since the heat flows up the sides through the aluminum, allowing the cook to heat the sauce rapidly but evenly.
Aluminum or copper alone does not work on an induction stove because of the materials’ magnetic and electrical properties. Aluminum and copper cookware are more conductive than steel, and the skin depth in these materials is larger since they are non-magnetic. The current flows in a thicker layer in the metal, encounters less resistance and so produces less heat. The induction cooker will not work efficiently with such pots.
The heat that can be produced in a pot is a function of the surface resistance. A higher surface resistance produces more heat for similar currents. This is a “figure of merit” that can be used to rank the suitability of a material for induction heating. The surface resistance in a thick metal conductor is proportional to the resistivity divided by the skin depth. Where the thickness is less than the skin depth, the actual thickness can be used to calculate surface resistance. Some common materials are listed in this table.
|Material||Resistivity (μΩ·cm)||Relative permeability||Skin depth (cm)||Surface resistance (mΩ)||Relative to copper|
|Carbon steel 1010||9||200||0.004||2.25||56.25|
|Stainless steel 432||24.5||200||0.007||3.5||87.5|
|Stainless steel 304||29||1||0.112||0.26||6.5|
To get the same surface resistance as with carbon steel would require the metal to be thinner than is practical for a cooking vessel; a copper vessel bottom would be 1/56th the thickness of the carbon steel pot. Since the skin depth is inversely proportional to the square root of the frequency, this suggests that much higher frequencies (say, several megahertz) would be required to obtain equivalent heating in a copper pot as in an iron pot at 24 kHz. Such high frequencies are not feasible with inexpensive power semiconductors; in 1973 the silicon-controlled rectifiers used were limited to no more than 40 kHz. Even a thin layer of copper on the bottom of a steel cooking vessel will shield the steel from the magnetic field and make it unusable for an induction top. Some additional heat is created by hysteresis losses in the pot due to its ferromagnetic nature, but this creates less than ten percent of the total heat generated.
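The "figure of merit" argument can be made concrete: skin depth is δ = sqrt(ρ / (π f μ0 μr)) and the surface resistance of a thick conductor is ρ/δ, so a material's merit relative to copper scales as sqrt(ρrel × μrel). The sketch below uses round-number material constants of our own choosing, so its ratios differ slightly from the table above:

```python
import math

MU0 = 4 * math.pi * 1e-7  # H/m
F = 24_000                # Hz, the operating frequency mentioned above

materials = {  # name: (resistivity in ohm*m, relative permeability) -- approximate values
    "copper":        (1.7e-8, 1),
    "aluminium":     (2.7e-8, 1),
    "carbon steel":  (1.5e-7, 200),
    "stainless 304": (7.0e-7, 1),
}

def skin_depth(rho, mu_r, f=F):
    return math.sqrt(rho / (math.pi * f * MU0 * mu_r))

rs_copper = materials["copper"][0] / skin_depth(*materials["copper"])
for name, (rho, mu_r) in materials.items():
    rs = rho / skin_depth(rho, mu_r)  # surface resistance, ohms per square
    print(f"{name:15s} skin depth {skin_depth(rho, mu_r)*1e3:6.3f} mm, "
          f"surface resistance {rs/rs_copper:6.1f} x copper")
# Carbon steel comes out dozens of times better than copper or aluminium as an induction load.
```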
"All metal" models
New types of power semiconductors and low-loss coil designs have made an all-metal cooker possible, but the electronic components are relatively bulky.
First patents date from the early 1900s. Demonstration stoves were shown by the Frigidaire division of General Motors in the mid-1950s on a touring GM showcase in North America. The induction cooker was shown heating a pot of water with a newspaper placed between the stove and the pot, to demonstrate the convenience and safety. This unit, however, was never put into production.
Modern implementation in the USA dates from the early 1970s, with work done at the Research & Development Center of Westinghouse Electric Corporation at Churchill Borough, near Pittsburgh. That work was first put on public display at the 1971 National Association of Home Builders convention in Houston, Texas, as part of the Westinghouse Consumer Products Division display. The stand-alone single-burner range was named the Cool Top Induction Range. It used paralleled Delco Electronics transistors developed for automotive electronic ignition systems to drive the 25 kHz current.
Westinghouse decided to make a few hundred production units to develop the market. Those were named Cool Top 2 (CT2) Induction ranges. The development work was done at the same R&D location, by a team led by Bill Moreland and Terry Malarkey. The ranges were priced at $1,500, including a set of high quality cookware made of Quadraply, a laminate of stainless steel, carbon steel, aluminum and another layer of stainless steel (outside to inside).
Production took place in 1973 through to 1975 and stopped, coincidentally, with the sale of Westinghouse Consumer Products Division to White Consolidated Industries Inc.
CT2 had four burners of about 1,600 watts each. The range top was a Pyroceram ceramic sheet surrounded by a stainless-steel bezel, upon which four magnetic sliders adjusted four corresponding potentiometers set below. That design, using no through-holes, made the range proof against spills. The electronic section was made in four identical modules cooled by fans.
In each of the electronics modules, the 240V, 60 Hz domestic line power was converted to between 20V to 200V of continuously variable DC by a phase-controlled rectifier. That DC power was in turn converted to 27 kHz 30 A (peak) AC by two arrays of six paralleled Motorola automotive-ignition transistors in a half-bridge configuration driving a series-resonant LC oscillator, of which the inductor component was the induction-heating coil and its load, the cooking pan. The circuit design, largely by Ray Mackenzie, successfully dealt with certain bothersome overload problems.
Control electronics included functions such as protection against over-heated cook-pans and overloads. Provision was made to reduce radiated electrical and magnetic fields. There was also magnetic pan detection.
CT2 was UL Listed and received Federal Communications Commission (FCC) approval, both firsts. Numerous patents were also issued. CT2 won several awards, including Industrial Research Magazine's IR-100 1972 best-product award and a citation from the United States Steel Association. Raymond Baxter demonstrated the CT2 on the BBC series Tomorrow's World. He showed how the CT2 could cook through a slab of ice.
Sears Kenmore sold a free-standing oven/stove with four induction-cooking surfaces in the mid-1980s (Model Number 103.9647910). The unit also featured a self-cleaning oven, solid-state kitchen timer and capacitive-touch control buttons (advanced for its time). The units were more expensive than standard cooking surfaces.
In 2009 Panasonic developed an all-metal induction cooker that used a different coil design and a higher operating frequency to allow operation with non-ferrous metal cookware. However, the units operate with somewhat reduced coupling efficiency and so have reduced power compared to operation with ferrous cookware.
Induction equipment may be a built-in surface, part of a range, or a standalone surface unit. Built-in and rangetop units typically have multiple elements, the equivalent of separate burners on a gas-fueled range. Stand-alone induction modules are usually single-element, or sometimes have dual elements. All such elements share a basic design: an electromagnet sealed beneath a heat-resisting glass-ceramic sheet that is easily cleaned. The pot is placed on the ceramic glass surface and begins to heat up, along with its contents.
In Japan, some models of rice cookers are powered by induction. In Hong Kong, power companies list a number of models. Asian manufacturers have taken the lead in producing inexpensive single-induction-zone surfaces; efficient, low-waste-heat units are advantageous in densely populated cities with little living space per family, as many Asian cities are. Induction cookers are less frequently used in other parts of the world.
Induction ranges may be applicable in commercial restaurant kitchens. Electric cooking avoids the cost of natural gas piping and in some jurisdictions may allow simpler ventilation and fire suppression equipment to be installed. Drawbacks for commercial use include possible breakages of the glass cook-top, higher initial cost and the requirement for magnetic cookware.
This form of flameless cooking has certain advantages over conventional gas flame and electric cookers, as it provides rapid heating, improved thermal efficiency, and greater heat consistency, yet with precise control similar to gas. In situations in which a hotplate would typically be dangerous or illegal, an induction plate is ideal, as it creates no heat itself.
The high efficiency of power transfer into the cooking vessel makes heating food faster on an induction cooking surface than on other electric cooking surfaces. Because of the high efficiency, an induction element has heating performance comparable to a typical consumer-type gas element, even though the gas burner would have a much higher power input.
Induction cookers are safer to use than conventional cookers because there are no open flames. The surface below the cooking vessel is no hotter than the vessel; only the pan generates heat. The control system shuts down the element if a pot is not present or not large enough. Induction cookers are easy to clean because the cooking surface is flat and smooth, even though it may have several heating zones. Since the cooking surface is not directly heated, spilled food does not burn on the surface.
Since heat is being generated by an induced electric current, the unit can detect whether cookware is present (or whether its contents have boiled dry) by monitoring how much power is being absorbed. That allows functions such as keeping a pot at minimal boil or automatically turning an element off when cookware is removed.
Because the cook top is shallow compared to a gas-fired or electrical coil cooking surface, wheelchair access can be improved; the user's legs can be below the counter height and the user's arms can reach over the top.
Cookware must be compatible with induction heating; glass and ceramics are unusable, as are solid copper or solid aluminum cookware for most models of cooker. Cookware must have a flat bottom since the magnetic field drops rapidly with distance from the surface. (Special and costly wok-shaped units are available for use with round-bottom woks.) Induction disks are metal plates--much like a skillet with no sides--that heat up non-ferrous pots by contact, but these sacrifice much of the power and efficiency of direct use of induction in a compatible cooking vessel.
Manufacturers advise consumers that the glass ceramic top can be damaged by impact, although cooking surfaces are required to meet minimal product safety standards for impact. Aluminum foil can melt onto the top and cause permanent damage or cracking of the top. Damage by impact also relates to sliding pans across the cooking surface, which users are advised against. As with other electric ceramic cooking surfaces there may be a maximum pan size allowed by the manufacturer.
A small amount of noise is generated by an internal cooling fan. Audible noise (a hum or buzz) may be produced by cookware exposed to high magnetic fields, especially at high power if the cookware has loose parts; better-grade cookware, with welded-in cladding layers and solid rivetting, should not manifest such noises. Some users may detect a whistle or whine sound from the cookware, or from the power electronic devices. Some cooking techniques available when cooking over a flame are not applicable. Persons with implanted cardiac pacemakers or other electronic medical implants may be advised by their doctors to avoid proximity to induction cooking surfaces and other sources of magnetic fields. Radio receivers near the unit may pick up some electromagnetic interference.
An induction (or any electric) stove will not operate during a power outage. Older gas-stoves do not need electric power to operate; however, modern gas-stoves with electrical ignition require an external ignition source (e.g. matches) during power outages.
Efficiency and environmental impact
According to the U.S. Department of Energy, the efficiency of energy transfer for an induction cooker is 84%, versus 74% for a smooth-top non-induction electrical unit, for an approximate 12% saving in energy for the same amount of heat transfer.
Energy efficiency is the ratio between energy delivered to the food and that consumed by the cooker, considered from the "customer side" of the energy meter. Cooking with gas has an energy efficiency of about 40% at the customer's meter and can be raised only by using very special pots, so the DOE efficiency value will be used.
When comparing consumption of energies of different kinds, in this case natural gas and electricity, the method used by the US Environmental Protection Agency refers to source (also called primary) energies. They are the energies of the raw fuels that are consumed to produce the energies delivered on site. The conversion to source energies is done by multiplying site energies by appropriate source-site ratios. Unless there are good reasons to use custom source-site ratios (for example for non US residents or on-site solar), EPA states that "it is most equitable to employ national-level ratios". These ratios amount to 3.34 for electricity purchased from the grid, 1.0 for on-site solar, and 1.047 for natural gas. The natural gas figure is slightly greater than 1 and mainly accounts for distribution losses. The energy efficiencies for cooking given above (84% for induction and 40% for gas) are in terms of site energies at the customer's meters. The (US averaged) efficiencies recalculated relative to source fuels energies are hence 25% for induction cooking surfaces using grid electricity, 84% for induction cooking surfaces using on-Site Solar, and 38% for gas burners.
Source-site ratios are not yet formalized in Western Europe; a common consensus on unified European ratios should arise in view of the extension of the Energy Label to domestic water heaters. Unofficial figures for European source-site ratios are about 2.2 for electricity, 1.0 for on-site solar, and 1.02 for natural gas, giving overall source-energy efficiencies of 38% or 84% for induction cooking surfaces (depending on the electricity source) and 39% for gas burners.
These provisional figures need some adjustment because of the higher gas burner efficiency allowed in Europe by a less stringent limit on carbon monoxide emission at the burner. European and US standards differ in test conditions: the US ANSI Z21.1 standard allows a lower concentration of carbon monoxide (0.08%) than the European standard EN 30-1-1, which allows 0.2%. The minimum gas burner efficiency required in the EU by EN 30-2-1 is 52%, higher than the average 40% efficiency measured in the US by the DOE. The difference is mainly due to the weaker CO emission limit in the EU, which allows more efficient burners, but also to differences in how the efficiency measurements are performed.
Whenever local electricity generation emits less than about 435 grams of CO2 per kWh, the greenhouse-gas impact of an induction cooker will be lower than that of a gas cooker. This follows from the relative efficiencies (84% and 40%) of the two surfaces and from the standard 200 (±5) grams CO2/kWh emission factor for combustion of natural gas at its net (lower) calorific value.
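The break-even figure can be reproduced approximately from the efficiencies and the gas emission factor quoted above; the exact threshold depends on the precise inputs used, so treat this as an illustration rather than a definitive derivation:

gas_emission = 200.0        # g CO2 per kWh of natural gas burned (from the text)
gas_efficiency = 0.40       # site efficiency of a gas burner
induction_efficiency = 0.84

# Emissions per kWh actually delivered to the food by gas:
gas_per_useful_kwh = gas_emission / gas_efficiency        # 500 g CO2

# Induction is cleaner whenever its electricity emits less than:
breakeven = gas_per_useful_kwh * induction_efficiency     # 420 g CO2 per kWh
print(breakeven)  # ~420, close to the ~435 g/kWh threshold quoted in the text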
Gas cooking efficiencies may be lower if waste heat generation is taken into account. Especially in restaurants, gas cooking can significantly increase the ambient temperature in localized areas. Not only may extra cooling be required, but zoned venting may be needed to adequately condition hot areas without overcooling other areas. Costs must be considered case by case because of the many variables in temperature differences, facility layout or openness, and heat-generation schedule. Induction cooking using grid electricity may surpass gas efficiencies when waste heat and air comfort are quantified.
The market for induction stoves is dominated by German manufacturers, such as AEG, Bosch, Fissler, Miele and Siemens. The Spanish company Fagor, Italian firm Smeg and Sweden's Electrolux are also key players in the European market. Prices range from about £250 to £1,000 in the United Kingdom. In 2006, Stoves launched the UK's first domestic induction range cooker at a slightly lower cost than those imported.
The European induction cooking market for hotels, restaurants and other caterers is primarily satisfied by smaller specialist commercial induction catering equipment manufacturers such as Adventys of France, Control Induction and Target Catering Equipment of the UK and Scholl of Germany.
Taiwanese and Japanese electronics companies are the dominant players in induction cooking for East Asia. After aggressive promotion by Hong Kong utilities such as Power HK Ltd, many local brands such as UNIVERSAL, icMagIC, Zanussi, iLighting and German Pool have also emerged. These units offer high power ratings of more than 2,800 watts and multiple cooking zones, reach efficiencies as high as 90%, and can perform better than their gas counterparts while saving energy; their use for wok cooking is becoming popular locally. Some of these companies have also started marketing in the West. However, the product range sold in Western markets is a subset of that in their domestic market; some Japanese electronics manufacturers only sell domestically.
In the United States, as of early 2013 there are over five dozen brands of induction-cooking equipment available, including both built-in and countertop residential equipment and commercial-grade equipment. Even restricting the count to built-in residential units, there are over two dozen brands being sold; residential countertop units add another two dozen or more brands to the count.
The National Association of Home Builders in 2012 estimated that, in the United States, induction cooktops held only 4% of sales, compared to gas and other electric cooktops.
In April 2010, The New York Times reported that "In an independent survey last summer by the market research company Mintel of 2,000 Internet users who own appliances, only 5 percent of respondents said they had an induction range or cooktop. ... Still, 22 percent of the people Mintel surveyed in connection to their study last summer said their next range or cooktop would be induction."
- Llorente, S.; Monterde, F.; Burdio, J.M.; Acero, J. (2002). "A comparative study of resonant inverter topologies used in induction cookers". Retrieved 2009-05-20.
- DeDietrich "Piano" cooktop specifications, http://www.dedietrich.co.uk/documents/DTiM1000C_techspecs.pdf, retrieved 9 May 2012
- W. C. Moreland, The Induction Range: Its Performance and Its Development Problems, IEEE Transactions on Industry Applications, vol. TA-9, no. 1, January/February 1973, pages 81–86
- Fairchild Semiconductors (2000-07). "AN9012 Induction Heating System Topology Review". Retrieved 2009-05-20.
- Fujita, Atsushi; Sadakata, Hideki; Hirota, Izuo; Omori, Hideki; Nakaoka, Mutsuo (17-20 May 2009). "Latest developments of high-frequency series load resonant inverter type built-in cooktops for induction heated all metallic appliances". Power Electronics and Motion Control Conference, 2009. IPEMC '09. IEEE 6th International. pp. 2537–2544. doi:10.1109/IPEMC.2009.5157832. ISBN 978-1-4244-3557-9. Retrieved 28 March 2013.
- Tanuki Soup (9 October 2010). "Big news for fans of induction cooktops". Chow. Retrieved 28 March 2013.
- for example see UK Patent Application GB190612333, entitled "Improvements in or relating to Apparatus for the Electrical Production of Heat for Cooking and other purposes", applied for by Arthur F. Berry on 26 May 1906
- Kitchen of the Future has Glass-Dome Oven and Automatic Food Mixer, Popular Mechanics Apr 1956, page 88
- Induction Heat Cooking Apparatus
- Cooking vessel capacitive decoupling for induction cooking apparatus
- Induction heating coil assembly for heating cooking vessels
- Pan detector for induction heating cooking unit
- Archive, http://www.rdmag.com/RD100SearchResults.aspx?&strCompany=Westinghouse&Type=C, retrieved 22 August 2012
- "Induction Cookers". Heh.com. Retrieved 2011-02-21.
- Roger Fields, Restaurant Success by the Numbers: A Money-Guy's Guide to Opening the Next Hot Spot, Random House of Canada, 2007 ISBN 1-58008-663-2, pp. 144–145
- "Induction Cooking: Pros and Cons". Theinductionsite.com. 2010-11-03. Retrieved 2011-12-06.
- "So How Much Power Is What?". Theinductionsite.com. 2010-03-08. Retrieved 2011-12-06.
- Hans Bach, Dieter Krause, Low thermal expansion glass ceramics, Springer, 2005 ISBN 3-540-24111-6 page 77, lists IEC, UL, Canadian, Australian and other standards with impact resistance requirements
- "Technical support document for residential cooking products. Volume 2: Potential impact of alternative efficiency levels for residential cooking products. (see Table 1.7). U.S. Department of Energy, Office of Codes and Standards." (PDF). Retrieved 2011-12-06.
- Greg Sorensen; David Zabrowski (August 2009). "Improving Range-Top Efficiency with Specialized Vessels". Appliance Magazine. Retrieved 2010-08-07.
- Understanding Source and Site Energy : ENERGY STAR. Retrieved 18 November 2011
- Energy Star (March 2011). "Methodology for Incorporating Source Energy Use".
- Advantica Limited (2000-12). "A review of gas appliance CO emissions legislation, report R4162 prepared for Health and Safety Executive, UK". Retrieved 2012-01-06.
- BIO Intelligence Service for the European Commission (DG ENER) (2011-07). "Preparatory study for Eco-design: domestic and commercial hobs and grills, included when incorporated in cookers". Retrieved 2012-01-06.
- Kitchen Appliance Upgrades that Shine, http://www.nahb.org/generic.aspx?genericContentID=183725&fromGSA=1, retrieved 15 August 2012
- Is Induction Cooking Ready to Go Mainstream?, http://www.nytimes.com/2010/04/07/dining/07induction.html?pagewanted=all, retrieved 31 January 2013
- Technical Support Document for Residential Cooking Products
- Video demonstrating how an induction cook top works | http://en.wikipedia.org/wiki/Induction_cooker | 13 |
71 | I. Introduction: Backgrounds and Key concepts (8-24)
Introduction : How to succeed in this course.
Sensible Precalc Ch 1.A
What are Numbers?
Beginning Functions-Linear functions and key concepts.
Number Operations, equation, inequality properties. Types of Numbers
Visualizing: numbers- intervals.
Rational numbers and decimals. See Moodle for worksheet.
The Pythagorean theorem. [Over 30 proofs; many Java applet proofs]
Sqr(2) is not a rational #.
Solving linear inequalities
Applications of linear inequalities
Simplifying and Rationalizing
Visualizing variables and plane coordinate geometry.
Introduction to Winplot. Points, Animation..
More Algebra review.
Review Polynomials. (Factoring)
Introduction to Excel.
Linear and quadratic "Functions" and Visualization of data.
What's a function?
More on functions.
Linear Functions and Visualization of data.
Slopes and equations of lines.
Y and X intercepts.
Midpoints with coordinates.
Equations for circles.
Inequalities and absolute values.
MORE on functions!
Graphs and mapping figures.
Quadratic Functions and Visualization of data.
9-16: Quadratic Functions
Other function qualities.
Winplot:Demonstation: Lines as equations and "functions".
Exploring functions with Winplot:
Primary Descriptive features of functions. (Increasing/decreasing/max/min)
Overview of Core functions...elementary functions.
Review of Key Triangles.
Secant line slopes.
[Using Winplot to graph and find key function features, and to solve equations graphically for zeros?]
Start trig. Trigonometric functions for Right Triangles
Solving Right triangles.
Triangle trig: Inverse sine, cosine, tangent for acute triangles.
9-22: Law of Sines; sine for obtuse angles.
|9-23 More on Solving
More Inverse trig (sine obtuse).
Graph piecewise functions.
Visualize triangle trig and unit circle.
VI. Finish Triangle Trig - Trig function graphs (9-28)
More on Law of Sines.
Radian measure and circles in general.
9-29: Quiz #2
Start Law of cosines.
Trig functions for all angles - with radian measure.(sine and cosine)(tan)
More on law of cosines.
A visual proof for "The Law of Cosines"
Dynamic proof :The Law of Cosines
Applications of triangle trig
Begin Trig Identities
10-6: Quiz #3
10-7: Begin graphs of trig functions: sine and cosine.
Begin trig equations and review of inverse trig functions(Asin and Acos)
Simple use of identities: relating trig function values on unit circle.
Solving Simple Trig Equations
VIII. More trig identities (10-12): More on graphs of trig functions, identities and equations.
Graphs for tangent and secant. Graphically solving trig equations.
10-13: Quiz #4
IX. More trig identities, equations, and graphs!
More Trig functions and equations:
Double and half angles
Other Trig identities: Product to sum trig.
Applications of Sum to slopes and trig function graphs for sine and cosine..
Identities and Equations
Phase Shift - trig and linear compositions.
X. End of trig! Graphs and elementary functions
More on graphs and basic properties of trig functions.
.Phase Shift - trig and linear compositions. Continued!
graph A sin(BX+C)
Exponential and logs
Inverse trig functions.
More on inverse trig functions(Asin and Acos):
Graphs for inverse trig. (esp'lly Inverse tangent function)
More on inverse and trig functions.
graphs of combining trig functions.
Compound interest? What is e?
Applications of Exponential functions
Solving simple exponential equations.exponential functions and graphs.
Graphs of exponential functions.
More on Exponential Applications- compound interest and growth.Exponential graphs
exponents and Logarithmic functions.
What are elementary functions?
Logarithms: Introduction and definition.
Basic properties of logs... and applications and exponents-solving equations
Models using Exponential Functions
Continuously compounded interest: Pert.
11-10; 11-11: No Class
XIII. Begin Polynomial and Rational Functions (11-16): More logs and graphs of logs and exponentials, with graphs of trig functions
Logarithmic calculations in equations and computations. "Transforming equations."
More applications of logs/exponential functions.
log scales (simple)
11-24 and 11-25: Fall Break
XIV. More on rational functions (11-28)
Long division and
The remainder Theorem.
The Factor theorem
Roots and more on Polynomials.Functions The big picture on functions: Core functions and elementary functions
Symmetry [wrt origin....(axes).]
Quadratics and 1/x.
Translation, symmetry and scales for quadratics
Difference quotients-Slopes of secant lines-
roots and positive and negative values.
Roots of polynomials in general - inequalities.
XV. Pre-Calculus! (12-4): Inequalities. Linear.
Absolute value functions and inequalities
Intermediate value theorem.
The fundamental theorem of algebra
Composition & Inverse functions
Final comments on elementary functions- algebraic, logrithmic exponential, and trignometric.
Some of my "favorite functions."
A pre-calculus view of some calculus problems.
Review Session Sunday and Room BSS TBA
Tuesday, December 14 @ 10:20 am.
|Inventory of topics and materials
Worksheet on log scales
Music and log scales
Earthquake Magnitude and logs.
On-line java sliderule
More Slide rules
More applications of logs
Trisection of angles, trig and algebra!
Complex arithmetic and trig
Properties of roots and exponents.
More on Complex Numbers, trig and roots??
or in SC on line.
|Special Instructions & Interesting but Optional|
|Sensible Precalc Ch 1.A|
Purple Math on Converting between Decimals,
More on Similar triangles.
|Dynamic Geometry® Exploration SimilarTriangles|
|Ch 1.B.1: 1c, 2, 16|
|Law of Sines.|
proof for "The
Demonstrations of the laws of sines and cosines
|History of Pi|
|sin(A+B) proof illustrated.|
|Summary of trig identities|
|graph SinAX graph A sin(BX+C)|
angles, trig and algebra!
|History of the number e|
|How and Why a Slide
On-line java sliderule
|History of the Function concept| | http://users.humboldt.edu/flashman/Courses/m115AF10.html | 13 |
A closed figure which has three sides is known as a triangle. Here we will see some properties of a triangle. Each corner point of a triangle is known as a vertex; a triangle has three vertices.
The distance measured around a triangle is known as the perimeter of the triangle.
The sum of the interior angles of a triangle is always 180°.
Now we will see the triangular coordinate system.
For finding the area of a triangle when its vertex coordinates are given, we need to follow some steps:
Step 1: First we identify the vertices of the triangle, say F, G and H.
Step 2: For finding the area, we use the formula given below.
The formula is:
Area of triangle = ½ |Fx (Gy – Hy) + Gx (Hy – Fy) + Hx (Fy – Gy)|,
where Fx and Fy are the x- and y-coordinates of point F, and similarly for G and H. (Note the factor of ½ and the pairing of the coordinates; this is the standard "shoelace" formula.)
Step 3: On putting the values of all the coordinates into the formula, we get the area of the triangle.
Suppose we have a triangle with vertices F (3, 5), G (5, 6) and H (10, 6); using these coordinates we can find the area of the triangle.
For finding the area of the triangle we need to follow the above steps.
Here the coordinates are F = (3, 5), G = (5, 6), H = (10, 6);
We know the formula for finding the area of a triangle from its vertices:
Area of triangle = ½ |Fx (Gy – Hy) + Gx (Hy – Fy) + Hx (Fy – Gy)|,
Now put the values into the formula:
Area of triangle = ½ |3 (6 – 6) + 5 (6 – 5) + 10 (5 – 6)|,
Area of triangle = ½ |3 (0) + 5 (1) + 10 (-1)|,
Area of triangle = ½ |0 + 5 – 10|,
Area of triangle = 5/2;
So the area of the triangle is 2.5 square units.
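The same computation can be scripted as a quick check; a minimal sketch of the corrected formula, using the coordinates from the example above:

def triangle_area(F, G, H):
    # Shoelace formula: half the absolute value of the paired cross terms
    (fx, fy), (gx, gy), (hx, hy) = F, G, H
    return abs(fx * (gy - hy) + gx * (hy - fy) + hx * (fy - gy)) / 2

print(triangle_area((3, 5), (5, 6), (10, 6)))  # 2.5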
Heron's formula is used for calculating the area of a triangle from its three side lengths; it uses the semi-perimeter of the triangle. For sides p, q and r, the semi-perimeter is given by:
S = ½ (p + q + r)
The area of the triangle by Heron's formula is then:
A = √( S (S – p) (S – q) (S – r) )
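As a quick sanity check of Heron's formula, a short sketch for a 3-4-5 right triangle, whose area must be (3 × 4)/2 = 6:

from math import sqrt

def heron_area(p, q, r):
    # Area from three side lengths via Heron's formula
    s = (p + q + r) / 2          # semi-perimeter
    return sqrt(s * (s - p) * (s - q) * (s - r))

print(heron_area(3, 4, 5))  # 6.0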
A closed shape made of three line segments linked end to end is known as a triangle. It is a kind of polygon, with several properties; for example, a triangle has three corner points (vertices). | http://www.tutorcircle.com/triangular-coordinate-system-t9Ibp.html | 13
56 | Statistics - GMAT Math Study Guide
Table of Contents
- Mean (aka Average or Arithmetic Mean) - The sum of all of the numbers in a list divided by the number of items in that list.
For example, the mean of the numbers 2, 6, 13 is 7 since the sum of the numbers (2+6+13=21) divided by the number of terms (3) is 7.
- Median - The number that is located in the middle of a set of numbers when that set is ordered sequentially from the smallest to the largest.
For example, the median of the numbers 2, 6, 4, 8, 10 is 6 since the middle of the set of numbers when ordered sequentially (i.e., 2, 4, 6, 8, 10) is 6.
- Mode - The value that appears most within a set of numbers.
For example, the mode of the set composed of 2, 10, 15, 21, 10, 2, 10 is 10 since the number 10 appears most often (i.e., 3 times).
- Range - The length of the interval containing all the data points in a set (i.e., the distance from the smallest to largest data point).
For example, the range of the set 10, 15, 38, 150 is 150-10 = 140.
- Standard Deviation - A measure of how spread out or dispersed the data in a set are relative to the set's mean.
For example, a data set with a standard deviation of 10 is more spread out than a data set with a standard deviation of 5.
The formula for the mean of a list of numbers is the sum of all the numbers in the list divided by the number of numbers in the list:
A = S / N
A = average (or arithmetic mean)
N = the number of terms (e.g., the number of items or numbers being averaged)
S = the sum of the numbers in the set of interest (e.g., the sum of the numbers being averaged)
Consider the following example: a student reads 2 books per week for 2 weeks, 3 books per week for 2 weeks, and 1 book per week for 4 weeks. What is the average number of books read per week?
S = total number of books read = (2*2) + (3*2) + (1*4) = 14
N = the number of weeks = 2 + 2 + 4 = 8
A = 14/8 = 1.75 books per week
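The same weighted-average computation, as a brief sketch:

# (number of weeks, books read per week) for each stretch in the example
stretches = [(2, 2), (2, 3), (4, 1)]
total_books = sum(weeks * rate for weeks, rate in stretches)   # 14
total_weeks = sum(weeks for weeks, _ in stretches)             # 8
print(total_books / total_weeks)                               # 1.75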
The median is the middle number in a set that is ordered sequentially (i.e., from smallest to largest). The most common mistake people make in calculating the median is forgetting to order the numbers in a set sequentially from smallest to largest before finding the middle number.
For example, suppose the numbers are listed as 9, 3, 6, 2, 4. It would be a mistake to conclude that the median is 6 (the middle value as listed), since we must first order the numbers from smallest to largest.
Ordered Sequentially: 2, 3, 4, 6, 9; so the median is 4.
If there is an even number of elements, then the median is the average of the two elements in the middle. For example, for the ordered set 2, 4, 6, 8, the median is (4 + 6)/2 = 5.
The mode of a set of data is the number (or numbers) that appears most frequently in the set of data. For example, consider the set 1, 3, 4, 4, 7:
The mode is 4 since this number appears twice (more than any other number).
It is possible to have multiple modes but impossible for a set of data to have no mode. For example, in the set 2, 2, 5, 5, 9, both 2 and 5 are modes.
The range is the distance from the smallest to the largest number in a set. For example, for a set whose smallest value is 10 and largest value is 180:
Range = largest - smallest = 180 - 10 = 170
The standard deviation measures the dispersion (i.e., the extent to which the data are spread out) relative to the mean.
While it is extremely rare that you need the formula for the standard deviation, it is offered here in order to give a better understanding of what the standard deviation of a data set measures. For a data set x1, x2, ..., xN with mean m, the population standard deviation is the square root of the average squared deviation from the mean: sqrt( [(x1 - m)² + (x2 - m)² + ... + (xN - m)²] / N ).
(For students with a background in statistics, different notation can be used depending on whether one is calculating the population or sample standard deviation. For details on how to calculate the standard deviation and an example of finding the standard deviation of a set, please see the detailed standard deviation study guide.)
The most important part of understanding standard deviations is knowing that as the standard deviation increases, the dispersion of the data increases.
(Figure: of two graphed distributions, the more spread-out red curve has a much larger standard deviation than the narrower blue curve.)
The following is an example of how the standard deviation might be tested: suppose a set has a mean of 10 and a standard deviation of 1, and you are asked for the range of values that lie within 2.5 standard deviations of the mean.
Lower Boundary: Mean - 2.5(Standard Deviation) = 10 - 2.5(1) = 7.5
Upper Boundary: Mean + 2.5(Standard Deviation) = 10 + 2.5(1) = 12.5
Answer: 7.5 to 12.5
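All of the descriptive statistics above can be checked in a few lines of code; a sketch using Python's standard library on an arbitrary example set:

from statistics import mean, median, mode, pstdev

data = [2, 4, 4, 6, 9, 11]
print(mean(data))             # arithmetic mean: 6
print(median(data))           # middle of the ordered list (average of 4 and 6): 5.0
print(mode(data))             # most frequent value: 4
print(max(data) - min(data))  # range: 11 - 2 = 9
print(pstdev(data))           # population standard deviation, per the formula above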
Types of GMAT Problems
- Calculating the Mean With Large Values
Make an approximation as to what the average will be. For each number in the set of data, determine the difference from the guessed average. Sum the deviations and divide by the total number of elements in the data set to find the average difference. Add the average difference to the guessed average to determine the true average.
Example: Find the average of 111, 71, 98, 105, 92, 87. (The correct answer, worked out below, is 94.)
- Make an approximation as to what the average is.
Average ≅ 95
Note: There is nothing magical about choosing 95. If you choose 96 or 93 or another number, you would still arrive at the correct answer.
- Calculate the difference between 95 and each value in the set of data.
111-95 = 16
71-95 = -24
98-95 = 3
105-95 = 10
92-95 = -3
87-95 = -8
- Add all of the differences together and divide by the number of numbers in the set, 6
Sum of Differences: 16-24+3+10-3-8 = -6
Sum of Differences/Numbers: -6/6 = -1
- Add -1, the sum of differences/numbers, to 95, the initial estimate
95 + (-1) = 95-1 = 94
- In case you are struggling with the choice of 95, consider how this problem could be solved by choosing other numbers as an estimate of the mean.
Using an estimate of 96:
Sum of Differences: -12
Sum of Differences/6: -2
Mean: 96 + (-2) = 94
Using an estimate of 100:
Sum of Differences: -36
Sum of Differences/6: -6
Mean: 100 + (-6) = 94
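The assumed-mean shortcut is easy to express in code; a brief sketch using the numbers from this problem (any reasonable guess produces the same result):

data = [111, 71, 98, 105, 92, 87]
guess = 95                                         # any rough estimate works
avg_deviation = sum(x - guess for x in data) / len(data)
print(guess + avg_deviation)                       # 94.0, the true mean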
- Make an approximation as to what the average is. | http://www.platinumgmat.com/gmat_study_guide/statistics | 13 |
76 | When we study geometry and look at the circle, we say that a circle is a round plane figure whose boundary consists of all the points at an equal distance from a point called the center of the circle. When we look at the different parts of a circle, we use the terms center, radius, arc, diameter, sector and chord. We first talk about the radius of a circle.
If we draw a line from the center of the circle to its boundary (which we call the circumference of the circle), then the length of this line is called the radius of the circle. The radius has the same fixed length no matter where we draw it. Sectors, arcs, radii and diameters are all parts of a circle.
If we join two radii that extend in opposite directions from the center of the circle, we obtain the diameter of the circle, which is another part of a circle. Since two radii join to form a diameter, the diameter = radius * 2.
Now we talk about another term related to the circle, the chord. A chord is a line segment that starts at one point on the circumference of the circle and ends at another. An infinite number of chords can be drawn inside a circle. Moreover, the diameter is also a chord, since both of its endpoints lie on the circumference of the circle.
We also find that the diameter is the longest chord of the circle. If a line outside the circle touches the circle at only one point, it is called a tangent to the circle. The radius drawn from the center of the circle to the point of tangency is always perpendicular to the tangent.
A circle can be defined as a closed, round figure. Diameter is one of the terms that we use for a circle. The diameter of a circle is defined as a line that passes through the center and touches two points on the boundary of the circle; often the word diameter is used to refer to the line itself, and sometimes to its length. The diameter is twice the radius and is itself a chord. A chord is defined as any line segment that joins two points on the circle.
A tangent is a line that touches a curve at exactly one point. The word tangent comes from the Latin word 'tangere', which means 'to touch'. The tangent just touches the curve at a point; it does not cross the curve there.
Every geometric shape, such as a circle, triangle, rectangle or square, has a center, and a circle keeps the same appearance when rotated about its center because the length of every straight line from the center to its edge is the same. For example, if we have a circle whose radius is equal to 7 inches, then the length of the straight line from the center to each point of the circle equals 7 inches.
In geometry, as we know, we study different shapes, starting with the basic elements, viz., point, line and plane. Some shapes are rectilinear while others are curved; some are plane (two-dimensional) figures and others are solids (three-dimensional). In spite of all such classifications, we start geometry with these basic elements.
We are already familiar with circles and some terms associated with them. Let us recall that a circle is a closed curve formed by joining all the points which are equidistant from a fixed point O. This fixed point is the center of the circle.
In this session, we will learn about a new term associated with circles, i.e., the sector of a circle. | http://www.tutorcircle.com/parts-of-circles-t4klp.html | 13
54 | Science Fair Project Encyclopedia
In cosmology, the Big Bang is the scientific theory that describes the early development and shape of the universe. The central idea is that the theory of general relativity can be combined with the observations on the largest scales of galaxies receding from each other to extrapolate the conditions of the universe back or forward in time. A natural consequence of the Big Bang is that in the past the universe had a higher temperature and a higher density. The term "Big Bang" is used both in a narrow sense to refer to a point in time when the observed expansion of the universe (Hubble's law) began, and in a more general sense to refer to the prevailing cosmological paradigm explaining the origin and evolution of the universe.
The term "Big Bang" was coined in 1949 by Fred Hoyle during a BBC radio program, The Nature of Things; the text was published in 1950. Hoyle did not subscribe to the theory and intended to mock the concept.
One consequence of the Big Bang is that the conditions of today's universe are different from the conditions in the past or in the future. From this model, George Gamow in 1948 was able to predict that there should be evidence for a Big Bang in a phenomenon that would later be called the cosmic microwave background radiation (CMB). The CMB was discovered in the 1960s and served as a confirmation of the Big Bang theory over its chief rival, the steady state theory.
According to the Big Bang, 13.7 billion (13.7 × 10^9) years ago the universe was in an incredibly dense state with huge temperatures and pressures. There is no compelling physical model for the first 10^-33 seconds of the universe. Einstein's theory of gravity predicts a gravitational singularity where densities become infinite. To resolve this paradox, a theory of quantum gravity is needed. Understanding this period of the history of the universe is one of the greatest unsolved problems in physics.
History of the theory
In 1927, the Belgian Jesuit priest Georges Lemaître was the first to propose that the universe began with the "explosion" of a "primeval atom". Earlier, in 1918, the Strasbourg astronomer Carl Wilhelm Wirtz had measured a systematic redshift of certain "nebulae", and called this the K-correction; but he wasn't aware of the cosmological implications, nor that the supposed nebulae were actually galaxies outside our own Milky Way.
Albert Einstein's theory of general relativity, developed during this time, admitted no static solutions (that is to say, the universe had to be either expanding or shrinking), a result that he himself considered wrong, and which he tried to fix by adding a cosmological constant. Applying general relativity to cosmology was first done by Alexander Friedmann whose equations describe the Friedmann-Lemaître-Robertson-Walker universe.
In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Beginning in 1913, Vesto Slipher had determined that most spiral nebulae (what would later be determined to be galaxies) were receding from Earth; Hubble combined these measurements with distances determined by observing Cepheid variable stars in distant galaxies to discover that the galaxies are receding in every direction at speeds (relative to the Earth) directly proportional to their distance. This fact is now known as Hubble's law (see Edwin Hubble: Mariner of the Nebulae by Edward Christianson).
Given the cosmological principle, receding galaxies suggested two opposing possibilities. One, advocated and developed by George Gamow, was that the universe emerged from an extremely hot, dense state a finite time in the past, and has been expanding ever since. The other possibility was Fred Hoyle's steady state model in which new matter would be created as the galaxies moved away from each other. In this model, the universe is roughly the same at any point in time. For a number of years the support for these theories was evenly divided.
In the intervening years, the observational evidence supported the idea that the universe evolved from a hot dense state. Since the discovery of the cosmic microwave background in 1965 it has been regarded as the best theory of the origin and evolution of the cosmos. Before the late 1960s, many cosmologists thought the infinitely dense singularity found in Friedmann's cosmological model was a mathematical over-idealization, and that the universe was contracting before entering the hot dense state and starting to expand again. This is Richard Tolman's oscillating universe. In the sixties, Stephen Hawking and others demonstrated that this idea was unworkable, and the singularity is an essential feature of Einstein's gravity. This led the majority of cosmologists to accept the Big Bang, in which the universe we observe began a finite time ago.
Virtually all theoretical work in cosmology now involves extensions and refinements to the basic Big Bang theory. Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding what happened at the Big Bang, and reconciling observations with the basic theory.
Huge advances in Big Bang cosmology were made in the late 1990s and the early 21st century as a result of major advances in telescope technology in combination with large amounts of satellite data such as that from COBE, the Hubble space telescope and WMAP. These data have allowed cosmologists to calculate many of the parameters of the Big Bang to a new level of precision and led to the unexpected discovery that the expansion of the universe appears to be accelerating.
See also: Timeline of cosmology
Based on measurements of the expansion of the universe using Type Ia supernovae, measurements of the lumpiness of the cosmic microwave background, and measurements of the correlation function of galaxies, the universe has a measured age of 13.7 ± 0.2 billion years. The fact that these three independent measurements are consistent is considered strong evidence for the so-called concordance model that describes the detail nature of the contents of the universe.
The early universe was filled homogeneously and isotropically with a very high energy density. Approximately 10^-35 seconds after the Planck epoch, the universe expanded exponentially during a period called cosmic inflation. After inflation stopped, the material components of the universe were in the form of a quark-gluon plasma where the constituent particles were all moving relativistically. By an as yet unknown process, baryogenesis occurred producing the observed asymmetry between matter and antimatter. As the universe grew in size, the temperature dropped, leading to further symmetry breaking processes that manifested themselves as the known forces of physics, elementary particles, and later allowed for the formation of the universe's hydrogen and helium atoms in a process called Big Bang nucleosynthesis. As the universe cooled, matter gradually stopped moving relativistically and its rest mass energy density came to gravitationally dominate over radiation. After about 300,000 years the radiation decoupled from the atoms and continued through space largely unimpeded. This relic radiation is the cosmic microwave background.
Over time, the slightly denser regions of the nearly uniformly distributed matter gravitationally grew into even denser regions, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process are dependent on the amount and type of matter in the universe. The three possible types are known as cold dark matter, hot dark matter, and baryonic matter. The best measurements available (from WMAP) show that the dominant form of matter in the universe is in the form of cold dark matter. The other two types of matter make up less than 20% of the matter in the universe.
The universe today appears to be dominated by a mysterious form of energy known as dark energy. Approximately 70% of the total energy density of today's universe is in this form. This component of the universe's composition has the property of causing the expansion of the universe to deviate from a linear velocity-distance relationship by causing spacetime to expand faster than expected at very large distances. Dark energy takes the form of a cosmological constant term in Einstein's field equations of general relativity, but the details of its equation of state and relationship with the standard model of particle physics continue to be investigated both observationally and theoretically.
See also: Timeline of the Big Bang
As it stands today, the Big Bang is dependent on three assumptions: the universality of physical laws, the Cosmological Principle (the universe is homogeneous and isotropic on large scales), and the Copernican Principle (we do not occupy a privileged position in the universe).
When first developed, these ideas were simply taken as postulates, but today there are efforts underway to test each of them. The universality of physical laws has been tested to the level that the largest possible deviation of physical constants over the age of the universe is of order 10^-5. The isotropy of the universe that defines the Cosmological Principle has been tested to a level of 10^-5, and the universe has been measured to be homogeneous on the largest scales at the 10% level. There are efforts currently underway to test the Copernican Principle by means of looking at the interaction of clusters of galaxies and the CMB through the Sunyaev-Zeldovich effect to a level of 1% accuracy.
The Big Bang theory uses Weyl's postulate to unambiguously measure time at any point as the "time since the Planck epoch". Measurements in this system rely on conformal coordinates in which so-called comoving distances and conformal times remove the expansion of the universe from consideration of spacetime measurements. In such a coordinate system, objects moving with the cosmological flow are always the same comoving distance away and the horizon or limit of the universe is set by the conformal time.
The Big Bang is therefore not an explosion of matter moving outward to fill an empty universe; it is spacetime itself that is expanding. It is this expansion that causes the physical distance between any two fixed points in our universe to increase. Objects that are bound together (for example, by gravity) do not expand with spacetime's expansion because the physical laws that govern them are assumed to be uniform and independent of the metric expansion. Moreover, the expansion of the universe on today's local scales is so small that any dependence of physical laws on the expansion is unmeasurable by current techniques.
It is generally stated that there are three observational pillars that support the Big Bang theory of cosmology. These are the Hubble-type expansion seen in the redshifts of galaxies, the detailed measurements of the cosmic microwave background, and the abundance of light elements. Additionally, the observed correlation function of large scale structure in the universe fits well with standard Big Bang theory.
Hubble law expansion
Observations of distant galaxies and quasars show that these objects are redshifted, meaning that the light emitted from them has been proportionately shifted to longer wavelengths. This is seen by taking a spectrum of the objects and then matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the elements interacting with the radiation. From this analysis, a measured redshift can be determined which is explained by a recessional velocity corresponding to a Doppler shift for the radiation. When the recessional velocities are plotted against the distances to the objects, a linear relationship, known as the Hubble Law, is observed:
v = H0 D,
where v is the recessional velocity, D is the distance to the galaxy or other distant object, and H0 is Hubble's constant.
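As an illustration of how the law is used, here is a small sketch; the value of H0 (about 70 km/s per megaparsec) is an assumed representative figure, not one quoted in this article:

H0 = 70.0   # assumed Hubble constant, km/s per megaparsec

def recession_velocity(distance_mpc):
    # Hubble's law: v = H0 * D
    return H0 * distance_mpc   # km/s

print(recession_velocity(100))   # a galaxy 100 Mpc away recedes at about 7000 km/s

# The reciprocal 1/H0 (the "Hubble time") gives an order-of-magnitude age estimate:
km_per_mpc = 3.086e19
seconds_per_year = 3.156e7
print(km_per_mpc / H0 / seconds_per_year / 1e9)   # roughly 14 billion years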
Cosmic microwave background radiation
Main article: Cosmic microwave background radiation
One feature of the Big Bang theory was the prediction of the cosmic microwave background radiation or CMB. As the early universe cooled off due to the expansion, the universe's temperature would fall below 3000 K. Above this temperature, electrons and protons are separate, making the universe opaque to light. Below 3000 K, atoms form, allowing light to pass freely through the gas of the universe. This is known as photon decoupling.
The radiation from this epoch travels unimpeded for the remainder of the lifetime of the universe, becoming redshifted because of the Hubble expansion. This redshifts the uniformly distributed blackbody spectrum from about 3000 K down to about 3 K. The radiation is observed at every point in the universe and comes from all directions of space.
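The size of that redshift follows directly from the two temperatures, since a blackbody temperature scales as 1/(1+z); a short sketch using the 2.726 K value quoted below for COBE:

T_decoupling = 3000.0   # K, temperature at which atoms form (from the text)
T_today = 2.726         # K, measured temperature of the CMB today

z = T_decoupling / T_today - 1   # blackbody temperature scales as 1/(1+z)
print(round(z))                  # about 1100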
In 1964, Arno Penzias and Robert Wilson, while conducting a series of diagnostic observations using a new microwave receiver owned by Bell Laboratories, discovered the cosmic background radiation. Their discovery provided substantial confirmation of the general CMB predictions, and pitched the balance of opinion in favor of the Big Bang hypothesis. Penzias and Wilson were awarded the Nobel Prize for their discovery.
In 1989, NASA launched the Cosmic Background Explorer satellite (COBE), and the initial findings, released in 1990, were consistent with the Big Bang theory's predictions regarding CMB, finding a local residual temperature of 2.726 K and determining that the CMB was isotropic to an accuracy of 10^-5. During the 1990s, CMB data was studied further to see if small anisotropies predicted by the Big Bang theory would be observed. They were found in 2000 by the Boomerang experiment.
In early 2003 the results of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite were analyzed, giving the most accurate cosmological values to date. This satellite also disproved several specific inflationary models, but the results were consistent with the inflation theory in general.
Abundance of primordial elements
Main article: Big Bang nucleosynthesis
Using the Big Bang model it is possible to calculate the concentration of helium-4, helium-3, deuterium and lithium-7 in the universe. All the abundances depend on a single parameter, the ratio of photons to baryons. The predicted abundances are about 25 percent for helium-4, a deuterium-to-hydrogen ratio of about 10^-3, a helium-3-to-hydrogen ratio of about 10^-4, and a lithium-7-to-hydrogen abundance of about 10^-9.
Measurements of primordial abundances for all four isotopes are consistent with a unique value of that parameter, and the fact that the measured abundances are in the same range as the predicted ones is considered strong evidence for the Big Bang. There is no obvious reason outside of the Big Bang that, for example, the universe should have more helium than deuterium or more deuterium than helium-3.
Galactic evolution and quasar distribution
The details of the distribution of galaxies and quasars are both constraints and confirmations of current theory. The finite age of the universe at earlier times means that galaxy evolution is closely tied to the cosmology of the universe. The types and distribution of galaxies appears to change markedly over time, evolving by means of the Boltzmann Equation. Observations reveal a time-dependent relationship of the galaxy and quasar distributions, star formation histories, and the type and size of the largest-scale structures in the universe (superclusters). These observations are in statistical agreement with simulations. They are well explained by the Big Bang theory and help constrain model parameters.
Historically, a number of problems have arisen within the Big Bang theory. Some of them are today mainly of historical interest, and have been avoided either through modifications to the theory or as the result of better observations. Other issues, such as the cuspy halo problem and the dwarf galaxy problem of cold dark matter, are not considered to be fatal as they can be addressed through refinements of the theory. Some detractors of the Big Bang cite these problems as ad hoc modifications and addenda to the theory. Most often attacked are the parts of standard cosmology that include dark matter, dark energy, and cosmic inflation. These are strongly suggested by observations of the cosmic microwave background, large scale structure and type IA supernovae, but remain at the frontiers of inquiry in physics. There is not yet a consensus on the particle physics origin of dark matter, dark energy and inflation. While their gravitational effects are understood observationally and theoretically, they have not yet been incorporated into the standard model of particle physics in an accepted way.
There are a small number of proponents of non-standard cosmologies who believe that there was no Big Bang at all. While some aspects of standard cosmology are inadequately explained in the standard model, most physicists accept that the close agreement between Big Bang theory and observation have firmly established all the basic parts of the theory.
What follows is a short list of standard Big Bang "problems" and puzzles:
The horizon problem
The horizon problem results from the premise that information cannot travel faster than light, and hence two regions of space which are separated by a greater distance than the speed of light multiplied by the age of the universe cannot be in causal contact. The observed isotropy of the cosmic microwave background (CMB) is problematic in this regard, because the horizon size at that time corresponds to a size that is about 2 degrees on the sky. If the universe has had the same expansion history since the Planck epoch, there is no mechanism to allow for these regions to have the same temperature.
This apparent inconsistency is resolved by inflationary theory in which a homogeneous and isotropic scalar energy field dominates the universe at a time 10^-35 seconds after the Planck epoch. During inflation, the universe undergoes exponential expansion, and regions in causal contact expand past each other's horizons. Heisenberg's uncertainty principle predicts that there would be quantum thermal fluctuations during the inflationary phase, which would be magnified to cosmic scale. These fluctuations serve as the seeds of all current structure in the universe. After inflation, the universe expands by means of a Hubble Law, and regions that were out of causal contact come back into the horizon. This explains the observed isotropy of the CMB. Inflation predicted that the primordial fluctuations are nearly scale invariant and Gaussian which has been accurately confirmed by measurements of the CMB.
The flatness problem is an observational problem that results from considerations of the geometry associated with the Friedmann-Lemaître-Robertson-Walker metric. In general, the universe can have three different kinds of geometries: hyperbolic geometry, Euclidean geometry, or elliptic geometry. Each of these geometries is tied directly to the critical density of the universe: hyperbolic corresponds to less than the critical density, elliptic to greater than the critical density, and Euclidean to exactly the critical density. The universe is required to have been within one part in 10^15 of the critical density in its earliest stages. Any greater deviation would have caused either a Heat Death or a Big Crunch, and the universe would not exist as it does today.
The resolution to this problem is again offered by inflationary theory. During the inflationary period, spacetime expanded to such an extent that any residual curvature associated with it would have been completely smoothed out to a high degree of precision. Thus, the universe is driven to be flat by inflation.
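For a sense of scale, the critical density mentioned above can be computed from the Hubble constant via the standard relation rho_c = 3H^2 / (8*pi*G); the sketch below assumes an H0 of about 70 km/s/Mpc, a value not taken from this article:

import math

G = 6.674e-11                   # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22     # assumed Hubble constant, converted from km/s/Mpc to 1/s

rho_critical = 3 * H0**2 / (8 * math.pi * G)
print(rho_critical)             # about 9e-27 kg/m^3, a few hydrogen atoms per cubic meter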
The magnetic monopole problem was an objection that was raised in the late 1970s. Grand unification theories predicted point defects in space that would manifest as magnetic monopoles, and the density of these monopoles was much higher than what could be accounted for. This problem is also resolvable by the addition of cosmic inflation which removes all point defects from the observable universe in the same way that the geometry is driven to flatness.
During the 1970s and 1980s various observations (notably of galactic rotation curves) showed that there was not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is non-baryonic dark matter. In addition, assuming that the universe was mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe is far less lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter was initially controversial, it is now a widely accepted part of standard cosmology due to observations in the anisotropies in the CMB, galaxy cluster velocity dispersions, large scale structure distributions, gravitational lensing studies, and x-ray measurements from galaxy clusters. Dark matter particles have only been detected through their gravitational signatures, and have not yet been observed in laboratories. However, there are many particle physics candidates for dark matter, and several projects to detect them are underway.
In the 1990s, detailed measurements of the mass density of the universe revealed a value that was 30% that of the critical density. For the universe to be flat, as is indicated by measurements of the cosmic microwave background, this would have meant that fully 70% of the energy density of the universe was left unaccounted for. Measurements of Type Ia supernovae reveal that the universe is undergoing a non-linear acceleration of the Hubble Law expansion of the universe. General relativity requires that this additional 70% be made up by an energy component with large negative pressure. The nature of the so-called dark energy remains one of the great mysteries of the Big Bang. Possible candidates include a scalar cosmological constant and quintessence. Observations to help understand this are ongoing.
Globular cluster age
A set of observations made in the mid-1990s involving the ages of globular clusters appeared to be inconsistent with the Big Bang. Computer simulations that matched the observations of the stellar populations of globular clusters suggested that they were about 15 billion years old, which conflicted with the 13.7 billion year age of the universe. The issue was generally resolved in the late 1990s when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. There still remain some questions as to how accurately the ages of the clusters are measured, but it is clear that these objects are some of the oldest in the universe.
The future according to the Big Bang theory
In the past, before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe is above the critical density, then the universe would reach a maximum size and begin to collapse in a Big Crunch. In this scenario, the universe would become denser and hotter again, ending with a state that was similar to that in which it started. Alternatively, if the mass density in the universe were equal to or below the critical density, the expansion would slow down, but never stop. New star formation would cease as the universe grows less dense. The average temperature of the Universe would asymptotically approach absolute zero. Black holes would evaporate. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario also known as heat death. Moreover, if proton decay exists, then hydrogen, the predominant form of baryonic matter in the universe today, would disappear, leaving only radiation.
Modern observations of accelerated expansion have led cosmologists to the Lambda-CDM model of the universe. This model contains dark energy, in the form of a cosmological constant. This energy causes more and more of the presently visible universe to pass beyond our horizon and out of contact with us. It is not known what will happen after this. The cosmological constant theory suggests that only gravitationally bound systems, such as galaxies, would remain together, and they too would be subject to heat death, as the universe cools and expands. Other so-called phantom energy theories suggest that ultimately galaxy clusters and eventually galaxies themselves will be torn apart by the ever increasing expansion in a so-called Big Rip.
See also Ultimate fate of the universe.
Speculative physics beyond the Big Bang
There remains the possibility that the Big Bang theory will be developed further in the future; in particular, we might learn something about inflation or whatever came immediately before the Big Bang. It might be the case that there are parts of the universe well beyond what can be observed in principle. In the case of inflation this is required: exponential expansion has pushed large regions of space beyond our observable horizon. It may be possible to deduce what happened when we better understand physics at very high energy scales. Speculations about this tend to involve theories of quantum gravity.
Some proposals are:
- chaotic inflation
- brane cosmology models, including the ekpyrotic model in which the Big Bang is the result of a collision between branes
- oscillatory universe which holds that the early universe's hot, dense state matches on to a contracting universe similar to ours. This yields a universe with an infinite number of big bangs and big crunches. The cyclic extension of the ekpyrotic model is a modern version of such a scenario.
- models including the Hartle-Hawking boundary condition in which the whole of space-time is finite
Some of these scenarios are qualitatively compatible with one another. Each involves untested hypotheses.
Philosophical and religious interpretations
Philosophically, there are a number of interpretations of the Big Bang theory that are entirely speculative or extra-scientific. Some of these ideas purport to explain the cause of the Big Bang itself (first cause), and have been criticized by some naturalist philosophers as being modern creation myths. Some people believe that the Big Bang theory lends support to traditional views of creation, for example as given in Genesis, while others believe that all Big Bang theories are inconsistent with such views.
The Big Bang as a scientific theory is not associated with any religion. While certain fundamentalist interpretations of religions conflict with the history of the universe as put forth by the Big Bang, there are also more liberal interpretations that do not.
The following is a list of various religious interpretations of the Big Bang theory:
- A number of Christian apologists, and the Roman Catholic Church in particular, have accepted the Big Bang as a description of the origin of the universe, interpreting it to allow for a philosophical first cause.
- Students of Kabbalah, deism and other non-anthropomorphic faiths concord with the Big Bang theory, notably the theory of "divine retraction" (Tzimtzum), as explained by Jewish Scholar Moses Maimonides. Similarly, Pandeists, who believe that an initially sentient God designed and then transformed himself into the non-sentient universe, often identify the Big Bang as the moment of transformation.
- Some modern Islamic scholars believe that the Qur'an parallels the Big Bang in its account of creation, described as follows: "the heavens and the earth were joined together as one unit, before We clove them asunder" (21:30). The Qur'an also appears to describe an expanding universe: "The heavens, We have built them with power. And verily, We are expanding it" (51:47).
- Certain theistic branches of Hinduism, such as the Vaishnava-traditions, conceive of a theory of creation with similarities to the theory of the Big Bang. The Hindu-mythos, narrated for example in the third book of the Bhagavata Purana (primarily, chapters 10 and 26), describes a primordial state which bursts forth as the Great Vishnu glances over it, transforming into the active state of the sum-total of matter ("prakriti").
- Buddhism has a concept of a universe that has no creation event per se. The Big Bang, however, is not seen to be in conflict with this since there are ways to get an eternal universe within the paradigm. A number of popular Zen philosophers were intrigued, in particular, by the concept of the oscillating universe.
- The future according to Big Bang theory
- Cosmology, astrophysics and astronomy
- A Brief History of Time
- Magnitude order
- Primordial black hole
- Stellar population
- Theoretical astrophysics
- History of astronomy
- Supermassive black hole
- Physics topics
- Cosmic microwave background radiation
- Timeline of cosmic microwave background astronomy
- Blackbody spectrum
- Cosmic variance
- Integrated Sachs Wolfe effect
- Spherical harmonics
- Sachs-Wolfe effect
- Observational experiments
- Hubble Space Telescope
- Cosmic Background Explorer (COBE)
- Far Ultraviolet Spectroscopic Explorer (FUSE)
- Gamma-ray Large Area Space Telescope
- Wilkinson Microwave Anisotropy Probe (WMAP)
- Atomic chemical elements
- List of astronomical topics
- List of famous experiments
- List of time periods
- Timeline of the Universe
- Big Bang used in other Fiction
External links and references
Big Bang overviews
- LaRocco, Chris and Blair Rothstein, "THE BIG BANG: It sure was BIG!!".
- Open Directory Project: Cosmology
- PBS.org, "From the Big Bang to the End of the Universe. The Mysteries of Deep Space Timeline"
- "Welcome to the History of the Universe". Penny Press Ltd.
- Shestople, Paul, "Big Bang Primer".
- Wright, Edward L., "Brief History of the Universe".
Beyond the Big Bang
- Cambridge University Cosmology, "The Hot Big Bang Model".
- Smithsonian Institution, "UNIVERSE! - The Big Bang and what came before".
- Whitehouse, David, "Before the Big Bang". BBC News. April 10, 2001.
- D'Agnese, Joseph, "The last Big Bang man left standing, physicist Ralph Alpher devised Big Bang Theory of universe". Discover, July, 1999.
- Felder, Gary, "The Expanding Universe".
- Links to sample text and reviews: Big Bang by Simon Singh
- John C Mather and John Boslough 1996, The very first light : the true inside story of the scientific journey back to the dawn of the universe. ISBN 0-465-01575-1 p.300: LeMaitre, Annals of the Scientific Society of Brussels 47A (1927):41 - GRT implies universe had to be expanding. But Einstein brushed him off in the same year. LeMaitre's note was translated in Monthly Notice of the Royal Astronomical Society (1931):483-490.
- See also LeMaitre, Nature 128(1931) suppl.:704. with a reference to the primeval atom.
- See review article by Ralph Alpher and Robert Herman Physics Today Aug 1988 pp24-34 which references
- Alpher 1948 Phys Rev D 74,1737
- Alpher and Herman 1948 Phys Rev D 74,1577
- Alpher Herman and Gamow 1948 Nature 162,774
Most scientific papers about cosmology are initially released as preprints on arxiv.org. They are generally quite technical, but sometimes have introductions in plain English. The most relevant archives, which cover experiment and theory, are the astrophysics archive, where papers closely grounded in observations are released, and the general relativity and quantum cosmology archive, which covers more speculative ground. Papers of interest to cosmologists also frequently appear on the high energy phenomenology and high energy theory archives.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/The_Big_Bang | 13
72 | Teacher professional development and classroom resources across the curriculum
Previously in our discussion, we have seen how the tones generated by different instruments are really mixtures of some fundamental vibration, or oscillation, and whole number multiples of that frequency, called overtones. The various combinations of fundamental tones and overtones are what give instruments their characteristic sounds. This understanding began with the Pythagorean observation that strings with commensurable lengths sound harmonious when plucked together. We've progressed from understanding the relations of string lengths to understanding how waves work and how the frequencies of waves are what we perceive as pitch. We've also seen how we can express simple sine and cosine waves as periodic functions of time via a connection to trigonometry. In essence, we have learned that musical tones are complicated mixtures of waves, and we now know how to express simple waves mathematically. We are now ready to use our mathematical tools to tackle complicated waves, such as the tones that real instruments make. To do this, we need some concepts and tools from an area of study that, when it began, had nothing to do with music, but rather heat: Fourier analysis.
Joseph Fourier was an associate of Napoleon, accompanying the great general on his conquest of Egypt. In return for his loyalty, Fourier was made governor of southern Egypt, where he became obsessed with the properties of heat. He studied heat flow and, in particular, the temporal and spatial variation in temperature on the earth. He realized that the rotation of the earth about its axis meant that its surface was heated in some uneven, but periodic way. In reconciling the different cycles involved in the heating of our planet, Fourier hit upon the idea that combinations of cycles could be used to describe all kinds of phenomena.
Fourier said that any function can be represented mathematically as a combination of basic periodic functions, sine waves and cosine waves. To create any complicated function, one need only add together basic waves of differing frequency, amplitude, and phase. In music, this means that we can theoretically make any tone of any timbre if we know which waves to use and in which relative amounts to use them. It's not unlike making a meal from a recipe—you need a list of ingredients, you need to know how much of each ingredient to use, and you need to know how and in what order to combine them.
The ingredients used in Fourier analysis are simply sine and cosine waves. Of course, these simple waves can come in different frequencies. For sounds that we consider pleasing and musical, the sine waves will mostly come in frequencies that are whole-number multiples of a fundamental frequency. For sounds that are "noisy," such as white noise, the sine-wave ingredient frequencies can be anything.
NOTE: In the following discussion, we'll be using the shorthand terms "sin" and "cos" to represent "sine" and "cosine," respectively.
To begin, let's look at a simple example, sin t:
Now, consider a modified sine function, sin 2t:
Combining these two functions gives us a new waveform, f(t) = sin t + sin 2t.
This waveform is comprised of equal parts sin t and sin 2t. It has features of both but is a new waveform. We don't have to combine the two simple waves in equal parts, however. Let's look at what happens when we use only "half as much" sin 2t:
Just as we find when cooking, using different proportions of the same ingredients yields a different result. This waveform is different than the one we obtained previously, illustrating the effect that altering the coefficient of a function can have on the graph, or wave. The coefficient corresponds to the amplitude of a wave, and, in our combined function, essentially determines how much each sine term contributes to the final waveform.
Now let's see what happens when one of the terms is offset in phase.
This produces yet another waveform, illustrating the effect of each component wave's phase. Notice that the graphs of sin (t + π/2) and cos t are identical. This shows us the natural phase relation between sine and cosine functions. Now that we've seen how simple sine waves can be combined to create somewhat more complex waves, let's see how to make a more complicated wave, such as a sawtooth wave.
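Before moving on to the sawtooth, here is a minimal Python sketch (an added illustration, not part of the original lesson) of the combinations just described; NumPy is assumed, and the 0.5 coefficient and the π/2 phase offset are simply the values discussed above.

    import numpy as np

    t = np.linspace(0, 4 * np.pi, 1000)             # a few periods of sin t

    wave_equal = np.sin(t) + np.sin(2 * t)          # equal parts sin t and sin 2t
    wave_half = np.sin(t) + 0.5 * np.sin(2 * t)     # only "half as much" sin 2t

    # The phase relation noted above: sin(t + pi/2) traces the same curve as cos t.
    print(np.allclose(np.sin(t + np.pi / 2), np.cos(t)))   # prints True

Plotting wave_equal and wave_half with any graphing tool reproduces the two combined waveforms compared above.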
First, let's just look at the sawtooth waveform.
Notice that the graph has a series of "ramps" that indicate that the function increases at some constant rate, then instantaneously drops to its minimum value as soon as it reaches its maximum value. Each of the ramps looks like the function y = x, which we can express as f(t) = t, given that we have been talking about values relative to time. So, this sawtooth wave can be made by some sort of function that periodically looks like f(t) = t. It has a period of 2π, so we can say that this function is f(t) = t for –π to π. According to Fourier, even a function such as this can be written as the sum of sines and cosines.
To see this, let's start with a sine wave of period 2π, a period equivalent to that of the sawtooth wave above.
Now, let's subtract another sine wave of twice the original frequency.
The equation that represents the function we've built so far is:
f(t) = 2sin t – sin 2t
Let's add a third sine wave of three times the original frequency.
With the addition of the third term, our Fourier expansion is now:
2 sin t – sin 2t + 2/3 sin 3t
At this point we are just guessing which frequencies and amplitudes, or coefficients, to use. Fourier's great contribution was in establishing a general method, using the techniques of integral calculus, to find both the coefficients, and by extension, the component frequencies of the expansion of any function. This, as we shall soon see, has given mathematicians a greater range of manipulative capabilities with functions that are difficult to deal with in their standard form. Fourier's specific method is beyond our scope in this text, but the idea that certain functions can be represented as specific mixtures of sine and cosine waves, is an important one.
Returning to our sawtooth exercise, we can see that as we add more terms, the resultant wave begins to take on the sawtooth shape.
Four terms: f(t) = 2 sin t – sin 2t + 2/3 sin 3t – 1/2 sin 4t
Five terms: f(t) = 2 sin t – sin 2t + 2/3 sin 3t – 1/2 sin 4t + 2/5 sin 5t
Six terms: f(t) = 2 sin t – sin 2t + 2/3 sin 3t – 1/2 sin 4t + 2/5 sin 5t – 1/3 sin 6t
Seven terms: f(t) = 2 sin t – sin 2t + 2/3 sin 3t – 1/2 sin 4t + 2/5 sin 5t – 1/3 sin 6t + 2/7 sin 7t
As you can see, the sum of the sine series is starting to look like a sawtooth wave. Making it look exactly like a sawtooth, however, would require an infinite number of terms. To suggest an infinite sum, we often use the "dots" convention, as in this equation:
f(t) = 2 sin t – sin 2t + 2/3 sin 3t – … + bn sin nt + …
The dots indicate that the established pattern goes on and on. However, there is a more precise way to represent this sum (or more confusing, depending on your point of view!). This is called "summation notation": f(t) = Σ (n = 1 to ∞) bn sin nt.
This representation encodes the fact that the index "n" starts at 1 and keeps on going, and that for every index n there is a coefficient bn that is the "weight" on the mode sin nt (which has angular frequency n and period 2π/n). So, the bn's are the amplitudes of the component frequencies, and in the case of the sawtooth wave, we can express them by the formula bn = 2(–1)^(n+1)/n. We find this by using Fourier's technique for finding expansion coefficients (i.e., by computing an integral). The details of this, although outside the scope of this text, can be found in most standard calculus textbooks.
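As an added aside (not part of the original text), readers comfortable with Python can let a computer algebra system carry out that integral; this sketch assumes the SymPy library and the sawtooth f(t) = t on (–π, π):

    import sympy as sp

    t = sp.Symbol('t', real=True)
    n = sp.Symbol('n', integer=True, positive=True)

    # Fourier sine coefficient of f(t) = t on (-pi, pi): b_n = (1/pi) * integral of t*sin(n*t) dt
    b_n = sp.integrate(t * sp.sin(n * t), (t, -sp.pi, sp.pi)) / sp.pi
    print(sp.simplify(b_n))   # an expression equivalent to 2*(-1)**(n + 1)/n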
The final Fourier expansion of the sawtooth wave is then: f(t) = 2 sin t – sin 2t + 2/3 sin 3t – 1/2 sin 4t + …, with the general coefficient bn = 2(–1)^(n+1)/n.
In the Fourier series for this sawtooth wave, note that there are no cosine terms. That's because all of the coefficients that would correspond to cosines are zero. In general, a Fourier series expansion is composed of contributions from sine terms, sin nt (with amplitudes bn), cosine terms, cos nt (with amplitudes an), and a constant offset, or bias, a0. So, in summation notation the general formula for a Fourier expansion of a function, f(t), is: f(t) = a0 + Σ (n = 1 to ∞) [an cos nt + bn sin nt].
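A short numerical sketch (added here as an illustration) of the partial sums built above, using the sawtooth coefficients bn = 2(–1)^(n+1)/n; Python with NumPy is assumed:

    import numpy as np

    def sawtooth_partial_sum(t, n_terms):
        """Sum the first n_terms of the Fourier series of f(t) = t on (-pi, pi)."""
        total = np.zeros_like(t, dtype=float)
        for n in range(1, n_terms + 1):
            b_n = 2 * (-1) ** (n + 1) / n        # coefficient of sin(nt)
            total += b_n * np.sin(n * t)
        return total

    # Watch the approximation close in on the true value f(1.0) = 1.0:
    t0 = np.array([1.0])
    for n_terms in (1, 2, 3, 7, 25, 100):
        print(n_terms, "terms:", round(float(sawtooth_partial_sum(t0, n_terms)[0]), 4))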
Notice in the progression that we constructed earlier that as the number of component waves increases, the overall waveform increasingly approaches the look of the ideal sawtooth. Each additional term has a higher frequency than the preceding term and, thus, provides more detail than the term before it. We can get as close as we want to the form of the ideal sawtooth by adding as many high-frequency components as we choose. This is analogous to a sculptor roughing out a general shape and then refining details after multiple passes.
Being able to take any function and express it in terms of these fundamental pieces is an extremely useful tool. In mathematics, functions that may otherwise seem impenetrable may give up their secrets when transformed into a Fourier series. In the realm of music, Fourier analysis gives musicians and sound engineers extraordinary control over sound. They can choose to augment or attenuate specific frequencies in order to make their instruments sound perfect. Also, with today's synthesizers, musicians can build up fantastic sounds from scratch by playing with different combinations of sines and cosines.
As we have seen, Fourier analysis can be used to represent a sound, or any signal, in the frequency domain. This view of a wave in terms of the specific mixture of fundamental frequencies that are present is often called a signal's spectrum. Analyzing the spectra of different signals can yield some surprising information about the source of the signals. For example, by looking at the light from stars and identifying the presence or absence of specific frequencies, astronomers can make extremely detailed predictions about the chemical composition of the visible layers of the star. In audio engineering, technicians can monitor the frequencies present in a sound and then amplify or attenuate specific frequency bands in order to control the makeup and quality of the output sound.
Each sine or cosine term in a Fourier expansion represents a specific frequency component. We can graph these frequencies in a histogram in which each band represents a range of frequencies. The height of each band corresponds to the amplitude of the contribution of those frequencies to the overall signal. This visual representation of sound may be familiar to you if you've ever used a graphic equalizer.
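To make the idea of a spectrum concrete, here is a small Python/NumPy sketch (an added illustration, not part of the original text) that builds a signal from two known ingredients and then reads the recipe back out with a fast Fourier transform; the 50 Hz and 150 Hz components and the one-second, 1000-sample recording are arbitrary choices.

    import numpy as np

    fs = 1000                                  # samples per second
    t = np.arange(0, 1, 1 / fs)                # one second of time samples
    signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)

    amplitudes = np.abs(np.fft.rfft(signal)) * 2 / len(signal)   # amplitude per frequency bin
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)               # the bin frequencies in Hz

    for f, a in zip(freqs, amplitudes):
        if a > 0.05:                           # keep only the strong components
            print(f, "Hz  amplitude", round(a, 2))

The printout recovers the 50 Hz and 150 Hz components with roughly their original amplitudes, which is essentially what a graphic-equalizer display shows in real time.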
Using the "sliders" of a graphic equalizer, one can adjust the amplitude of the contribution of each frequency range to the overall sound. This makes it possible to change the "color" of the sound coming out of the system. Boosting low frequencies increases the bass tones and "richness" but can make the sound "muddy." Boosting higher frequencies improves the clarity but can make the sound seem "thin." The more sliders you have, the more precisely you can sculpt the sound produced.
Taking a natural sound and breaking it up into its component frequencies may seem like a daunting task. Computers are quite good at it, but they are by no means the only way of accomplishing the feat. In fact, the human ear does something like this to help us distinguish one kind of sound from another.
The basilar membrane in your ear is formed in such a way that sounds of different frequencies cause different areas to vibrate, more or less going from low to high as you progress from one end of the membrane to the other. Tiny hairs on this membrane, corresponding roughly to frequency bands, "pick up" the relative amplitudes of the components of the tones you hear and relay this information to the brain. The auditory processing part of your brain translates the information into what we perceive as tones. Our ears and brains naturally do a Fourier analysis of all incoming sounds!
In addition to helping us to distinguish the sounds of music, Fourier analysis has broad application in many other fields, as well. Its signal-processing capabilities are of use to scientists studying earthquakes, electronics, wireless communication, and a whole host of other applications. Any field that involves looking at or using signals to convey information, which covers a pretty broad swath of modern endeavors in science and business, uses Fourier analysis in some way or another.
Up until this point, we have been concerned with simple, one-dimensional waves, such as those evident in a cross-section of the ripples on a pond. However, a more realistic, complete analysis would have to involve the vibrations of the entire surface of the water—in three dimensions. In the realm of sound, we're now moving from the vibration of a string to a musical surface—such as a drum! | http://www.learner.org/courses/mathilluminated/units/10/textbook/05.php | 13 |
78 | Healthy forests are important to us for many reasons. Forest scientists must collect lots of data to keep track of what is happening in our forests. In this module, students learn about the area and distribution of forests in the United States and why it is important to measure and monitor forests. One important measurement is DBH (Diameter at Breast Height), a measurement of tree trunk diameter. Scientists buy special DBH measuring tapes from forestry supply companies that are calibrated to use the tree trunk's circumference to determine its diameter through the relationship C = πD. Students learn about this relationship, then create their own DBH tapes and test them out on tree cookies. In another activity, students learn about stand density and then, pretending their classroom is a forest of people-trees, they apply this by measuring the stand density of their class "forest." The following activity is adapted from Biology in a Box.
Grade Level: 3rd – 6th grade
Subject Matter: Life Science
Three feet of plastic flagging tape (sold in rolls at most hardware and home improvement stores, one roll of 200 feet is about $2)
Tree cookies (if not available, anything round will work, or take your students outside and try on real trees!)
Biomass: mass of living material per unit area
DBH (Diameter at Breast Height): measurement of tree trunk diameter
Dendrologists: people who study trees and woody plants
Crown: all of the branches of a tree
C=πD: circumference equals pi times diameter
Stand density: the number of trees per unit area.
What to Do
1. Using data from the FAO (Food and Agriculture Organization, part of the United Nations), ask students to calculate what percent of the United States’ land is forested. It is fun to have students guess what the percentage may be before they calculate it. Then the class can work together to find the percent, taking the area forested (in hectares), dividing it by the total area of the U.S., then multiplying by 100%. Is it more or less than they thought?
2. Next, ask students to think about what parts of the United States are forested, then shade in a map with their best guess. They can start by thinking about their own home state, places that they may have traveled or where they have family, or places that they have heard about. Feel free to ask some leading questions, such as: do you think there are a lot of forests in the Southwest? (This area has a lot of desert). What about the Midwest? (This area was once covered with prairie, but is now covered in farmland). What about up North? (This area is known for its beautiful boreal forests.) What about the southeast? (Forestry is big industry here). Also, students can take into consideration that earlier they calculated that the total area covered was about one third of the country.
3. Once the answer map is shown, the class may discuss how well their guesses match up to the real distribution of forests throughout the country. Are they surprised by any of it? Does anyone want to tell about a forest that they may have visited somewhere on that map? This map is especially interesting because it also shows the distribution of biomass, or mass of living material per unit area. The places colored in the darkest have the greatest density of biomass, the densest and most productive forests. Where are those areas? (Note that they are areas that get a lot of moisture and rain, due to their positions near coastlines or mountain ranges which trap rainclouds. In the Pacific Northwest there is actually rainforest!).
4. Now that we have an idea of how much forest is in the U.S., ask students why is it important to measure and monitor forests? Forests are important natural resources for our economy. Timber and pulp harvesting is a big industry in our country. We need to make sure we manage this resource so that it continues to provide materials we need for future generations. Forests are also important for recreation. As a society we value forests for their beauty and the opportunity to go out and play in them – hike, camp, even just drive through and admire the scenery. We also need to monitor forests for the sake of fire management. For years our society suppressed forest fires believing this was good for both the forest and for our property near them. But this led to a huge problem, as years of dead timber and brush piled up on the forest floors. When fires finally broke out, they were hard to control. Now foresters understand that occasional small fires are an important part of maintaining a healthy forest. And in areas where forest fires can’t be controlled, brush and dead trees are removed or turned into mulch mechanically. Monitoring forests helps us to predict what areas are at risk for bad fires.
There are also several ecological reasons why we’re interested in monitoring and measuring forests. One is for wildlife habitat, protecting the biodiversity of all kinds of wildlife, insects and migrating birds that depend on forests, not to mention the diversity of plants, mushrooms, and other life found there! Another is that forests are important to air quality. Trees with large leaf-surface areas absorb nitrous oxides, sulphur dioxide, carbon monoxide and ground-level ozone, which contribute to air pollution. Also, on the other hand, our air pollution damages our forests, and it’s important to try to understand and lessen this damage. And finally, trees are important to the global carbon cycle, and therefore affect our predictions of global climate change. Deforestation has been a major contributor to global warming. If you look at the famous Keeling Curve depiction of the concentration of carbon dioxide in the atmosphere over the years, you see that the concentration of carbon dioxide is rising steadily. But if you look a little closer, you see the impact of plants on this “curve” -- its jaggedness is affected by the seasons in the northern hemisphere, as in the summer plants take up lots of CO2, and in the fall and winter many drop their leaves and die back, releasing the CO2 back into the atmosphere. This data was taken from an observatory in the mountains of Hawaii.
There are many people studying forests for many different reasons like the ones above, and they go by many names. Some are called dendrologists, people who study trees and woody plants. Others may be forest ecologists, who are particularly interested in the relationships between species in a forest or in the effects of management or pollution on the ecosystem. Tree physiologists are interested in how trees work on the inside, perhaps their health or how they work as an organism.
5. So, how do we measure a tree? What kinds of data can we collect? We see that trees come in all shapes and sizes, so how can we describe with numbers and data how they are different? One simple method is to break them down into their parts – trunk and crown (all of the branches). You can think of the trunk as being like a cylinder, and the crown like a big ellipsoid on top – not often spherical!
Foresters have special ways for measuring tall trees with a clinometer, a special tool that uses principles of trigonometry. We won’t go into that here. But based on that, foresters can measure the entire height of the tree, the height of the trunk from the base to the crown, and the height of the crown. They also take width measurements, the width of the crown can tell us how much competition the tree has from neighbors (packed close together, the tree can’t spread out its branches as much to capture sunlight). Then there is the diameter of the tree trunk, the standard for which is Diameter at Breast Height, or DBH. Foresters prefer not to cut down or bore holes in trees if they don’t have to, so they measure DBH by wrapping a tape around the circumference of the trunk. Circumference isn’t the same as diameter, yet you can use one measurement to get the other. Ask your students, how is this possible?
6. As students practice using the equation C=πD (circumference equals pi times diameter), have them pay special attention to the last question: “For every 1 inch increase in diameter, the circumference increases ~3.14 inches.” Then ask, is there a way to invent something that, when you wrap it around the circumference of a tree or something else round, automatically gives you the diameter? At this point distribute the flagging tape, markers and rulers. Have students measure and mark off increments of 3.14 inches as illustrated. Pi is actually an irrational number, meaning that the digits following the decimal point never end and never repeat. It starts out 3.14159…, but here we will approximate each increment as about 3 and 1/8 inches. It helps to walk around the class with an already-made tape and compare yours to theirs to make sure they are on the right track. DBH tapes, as they are called, are commercially available to foresters as a tool they use in the field, but they work exactly the same way as the ones your students make (a short calculation sketch for this step and the next follows the list).
7. After they’ve played with their DBH tapes a bit, let them consider that this is how to measure and compare individual trees, but what is a way we can compare two forests that look very different? One such measure is stand density, or the number of trees per unit area. Students can practice calculating it from the examples given in the slideshow. Then, it’s time to calculate the “stand density” of their class. Have student volunteers measure the dimensions of the classroom to find the area of their “stand,” then have a volunteer count the number of people in the room. Use these numbers to calculate your class’s “stand density.”
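For teachers who want to check the arithmetic in steps 6 and 7 ahead of time, here is a minimal Python sketch; the tape length, classroom dimensions, and head count are placeholder values to be replaced with your own measurements.

    import math

    # Step 6: a DBH tape works because C = pi * D, so D = C / pi.
    def diameter_from_circumference(c):
        return c / math.pi

    print(round(diameter_from_circumference(31.4), 1))   # a 31.4-inch trunk is about 10 inches across

    # Tick marks for a 36-inch tape: each whole inch of diameter is about 3.14 inches of tape.
    marks = [round(d * math.pi, 2) for d in range(1, int(36 / math.pi) + 1)]
    print(marks)

    # Step 7: stand density is simply a count divided by an area.
    classroom_area_m2 = 8 * 10        # example: an 8 m x 10 m room
    people_trees = 25                 # example head count
    print(people_trees / classroom_area_m2, "people-trees per square meter")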
All of these activities were designed for indoors, but would be twice as fun if students have the opportunity to go outside on school grounds to test out what they’ve learned on real trees. An additional fun activity would be for students to collect and tabulate DBH data on all the trees on their school grounds and perhaps use that data to practice finding the mean, median and mode.
Other Biology in a Box activities
In Biology by Numbers, learn about the ways math can solve biological problems. Produced by the National Institute for Mathematical and Biological Synthesis (NIMBioS). | http://www.sciencefriday.com/blogs/08/03/2011/measuring-a-forest.html?series=1&interest=&audience=1&author= | 13 |
53 | Darwinian Evolution Needs Randomness
Charles Darwin suggested that life developed, and new forms of life arose, through what he called descent with modification. He led the way in offering a mechanistic explanation of descent as a natural phenomenon, a process that follows natural laws. His approach motivated later attempts to explain even the actual origin of life on a mechanistic basis.
To account for descent, Darwin strove to explain how the vast complexity of life could have arisen from some simple life form. If such an explanation could be achieved, there was hope that one could also discover how that simple living form could have arisen in a natural way from inert matter. The entire existence of life could then be accounted for as a purely natural phenomenon, not requiring supernatural intervention. Darwin's agenda, thus, called for him to find a natural process to account for descent.
Recognizing the need for a physical mechanism to account for descent, Darwin hit on the idea that many heritable variations normally exist in a population, and many new ones appear all the time. Some of these variations can affect the fitness of the organism in its environment. Organisms that are unfit will perish - only those that are fit will survive. His reading of Malthus led him to conjecture that animal populations produce far more offspring than could possibly survive, and they hover constantly on the edge of misery and starvation. Under these circumstances, Darwin saw living organisms experiencing a fierce struggle for survival from which only the fittest would emerge.
He was mistaken in this conjecture, however, because plants and animals do not hug the brink of disaster. Population size is not controlled by starvation, disease, or predation. In most animal populations mass starvation and disease are rare; they occur only as the result of extraordinary catastrophes, such as droughts or epidemics. Moreover, predators do not overexploit prey populations. Many animal colonies are known to adjust their birth rates to match the available resources. Plants also are known to adjust their seed production to the space and resources allotted to them.
Darwin suggested natural selection, acting over a long time on the variations in a population under conditions of a struggle for existence, could transform the population into a new species. He admittedly did not know the source of the variation, but he knew it was there. In the sixth edition of The Origin, Darwin was hesitant about labeling the variations as random, even though he had earlier referred to them as having occurred by “chance.” He explained that he used the word “chance” only to indicate that their causes are unknown.
He did not refrain from speculating on their causes, however. He felt there were definite causes for the variations. They could be caused by environmental conditions or they could come from the use or disuse of organs. But he did not want to call them random.
In the first third of the 20th century Darwin's theory was overtaken by new discoveries in biology. By the end of the 1930s the theory was in disarray. There were unanswered riddles and the theory was in serious need of repair and updating. In 1941, at a meeting of the Geological Society of America, a suggestion was made that geneticists join with morphologists, taxonomists, and paleontologists to try to synthesize, from the latest findings in these disciplines, a modernized and consistent version of Darwin's theory. Specialists in these fields responded to the call, and over the next few years, they developed a revised theory of evolution. They called it the modern synthetic theory of evolution. The theory gradually became known as the neo-Darwinian theory of evolution, and its framers and their followers became known as neo-Darwinians. Their agenda called for a theory that could explain the development of life in a natural way. If they could account for the development of all the present complexity of life from some sufficiently simple first organism, the way would be prepared for a theory of a fully natural account of the actual origin of life.
Neo-Darwinian theory rejected Darwin's suggestion of the environmental induction of heritable variation, and even more emphatically rejected the inheritance of acquired characteristics. Hereditary elements, known as genes, had by now been discovered, and, although their molecular structure was still unknown, the neo-Darwinians had accepted the separation of the somatic and the germ cells as suggested half a century earlier by Weismann. It seemed clear to them that neither environmental influences nor acquired characteristics could affect the germ cells, and that heritable variation could stem only from changes in the germ cells.
Unwilling to accept environmental influence as a cause of variation and unable to find a mechanism that could directly produce changes needed for descent, the neo-Darwinians rescued randomness from the rubbish heap to which Darwin had relegated it, and assigned it to function as the source of the variations. Some variations are detrimental to the organism, but others may be beneficial. The neo-Darwinians hold that a heritable variation of the latter kind, even if rare, will spread by natural selection, and will eventually take over the population.
The neo-Darwinians thus built their theory on random variation, culled and directed by natural selection. They identified the heritable variations required by the theory with the mutations discovered and named by De Vries in the early 20th century. A decade after the establishment of the neo-Darwinian theory, Watson and Crick identified the heritable variation of the theory with random errors in DNA replication.
If the neo-Darwinian agenda had worked out, there would be no place for a Creator in the origin of life except to establish the laws by which the evolution had taken place. Even that position would not be an honorable one if the appearance of man were not inevitable, as Gould believes it is not.
The Motif Of Neo-Darwinism Is Incompatible With Randomness
Random variation, however, turns out to be inadequate to account for evolution, and this inadequacy calls for a reexamination of neo-Darwinian theory. There is no evidence that random variation can play a role in major evolutionary advances as postulated by the theory. Indeed, there is evidence to the contrary - that randomness does not, and even cannot, play such a role.
The vast majority of random changes in the genome that have any effect on the phenotype are detrimental to the organism. The mammalian genome is a sequence of about 4 billion nucleotides, and a single error in DNA replication could change one of these nucleotides into another. There are an enormous number of different sequences those nucleotides can assume, only a very small fraction of which can result in a viable phenotype, and only a smaller fraction still will have a positive selective value, yielding an improvement over an existing population.
Since DNA was discovered to hold the code of life, conventional wisdom has held that errors in copying DNA are the source of the random variation called for by the neo-Darwinian theory. These errors occur in prokaryotes with a probability of between 10⁻¹⁰ and 10⁻⁸ per nucleotide per replication. In eukaryotes the rate is even lower. The error rate in eukaryotes is between 10⁻¹¹ and 10⁻⁹ per nucleotide per replication. These error rates are low because a special proofreading mechanism in each cell checks and corrects the replication. The rates are just below the level of intolerability to genetic damage. Some think the system may even operate with the accuracy it does so as to maintain this level. The error rates could not be much larger if a species is to survive.
Because of these low mutation rates, no more than one specific mutation of this sort can be expected in a population of 100,000 in 100,000 generations. Two specific mutations would require 10¹⁵ generations. Since there is not enough time for 10¹⁵ generations, a specific double mutation will almost never occur. The probability of a double mutation occurring, for example, in a population of 100,000 animals within 100,000 generations is only 10⁻¹⁰. This is to be compared with the probability of a single specific mutation occurring in the same population within the same time, which is 1/e, or about 0.37. Thus with higher organisms, such as mammals, evolution can make a specific change in the genome of no more than one nucleotide (or base pair) at a time. Moreover, even if such a mutation is beneficial, it is highly unlikely to be retained by natural selection to the point where it dominates the population. To raise this probability to 1/e one would need about 1,000 times as many generations, or about 10⁸ generations.
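To make the arithmetic behind these figures easier to follow, here is a small Python sketch of the Poisson-style calculation using the round numbers quoted above (a rate of 10⁻¹⁰ per replication, a population of 100,000, and 100,000 generations); it is an added illustration of the quoted numbers, not part of the original essay, and it uses one simple reading of a "double mutation" that reproduces the quoted figure.

    import math

    rate = 1e-10                    # specific point mutation per nucleotide per replication
    population = 1e5
    generations = 1e5
    replications = population * generations          # 1e10 opportunities in total

    mean_single = rate * replications                # expected copies of one specific mutation = 1
    p_exactly_one = mean_single * math.exp(-mean_single)
    print(round(p_exactly_one, 3))                   # Poisson P(X = 1) = 1/e, about 0.368

    # Simple reading of a specific "double mutation": both changes in the same replication.
    mean_double = rate ** 2 * replications
    print(mean_double)                               # 1e-10, matching the figure quoted above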
If evolution is to proceed through random nucleotide substitutions, then an adaptive improvement in the genome must be possible at any stage by a change of just one base pair. A long chain of such steps requires that there be long sequences of such changes, one after the other, each leading to an adaptive improvement of the organism. Moreover, because of the low probabilities involved, such evolution requires that there be not one, but many, long potential sequences of this kind. Indeed, the number is so large that many of these paths, or portions of them, would have been observed in the many genetic experiments performed since genetics became a science.
Whenever probability calculations have been made, they have shown that random point mutations cannot account for major evolutionary change. Calculations on the evolution of the horse, as induced from the fossil record, bear this out. All calculations based on neo-Darwinian theory show that major evolutionary events are highly improbable. Events that a theory predicts to be very improbable cannot be said to be accounted for by that theory. If a theory can account for data only by declaring them highly improbable, then the theory must be rejected. This is a fundamental principle in the mathematics of hypothesis testing.
More than 30 years ago I predicted that contradictions to neo-Darwinian theory would emerge when probability calculations could be made of so-called convergences. Such calculations can now be made. From the assumed convergence of the lysozyme enzymes common to ruminating cows and ruminating langur monkeys, one can set an upper limit of 10⁻⁵⁴ on the probability that they evolved independently through random point mutations. It is less than the probability of your winning the New York State Lottery seven weeks in a row. Most people would consider such an event impossible.
A striking identity has recently been found between a gene in Drosophila and a corresponding gene in many vertebrates that plays a role in the development of the eye. The same gene has been found to control eye development in insects and in vertebrates, including humans. These genes are 94% identical between Drosophila and humans. The conventional wisdom until now has been that the eye is an extraordinary example of convergence in that it has developed independently as many as three or four dozen times. This new finding makes convergence in eye evolution look so improbable that, even without making any probability calculations, one author suggests that: "the traditional view that the vertebrate eye and the compound eye of insects evolved independently has to be reconsidered."
The neo-Darwinians claim they can account for the development of life from some very simple form. They must, therefore, account for the buildup of the information found today in the genomes of mammals and birds, fish and reptiles, all the invertebrates, and all the plants that inhabit the earth. This information is supposed to have been built up from such a simple organism as a single cell. If this information was built up, as the neo-Darwinians claim, by long series of small steps of random errors in DNA replication directed by natural selection, then each of these steps had to add, on average, a small amount of information. Information had to be generated little by little to accumulate to the large amount that resides in living organisms today.
It is easy to believe that single-nucleotide substitutions are random, but they are not known to have added any information to the genome. They are known to have produced minor changes, even some with selective advantage in special cases. Geneticists have been studying mutations in the laboratory for nearly a century, and on the molecular level for a third of a century. But they have not found a single mutation that adds information to the genome! Yet these are the mutations the neo-Darwinian theory calls for to produce major evolutionary advances.
No Mutations Are Known That Add Information
Random mutations do occur, but they do not add information to the genome. Some of these mutations may, under special circumstances, even have selective value and benefit the organism. But because they do not add information, they cannot represent the typical mutations required by neo-Darwinian theory. Most of the mutations in a chain of cumulative selection must be ones that add information. If evolutionary theory is to account for the increase of information in living things from a very simple primitive form of life to the complexity we find today, then each of its component steps must, on the average, add a little information. If the theory is to account for the general buildup of information in life, then this information must be new - not just new to the genome of that organism alone, but new to the "global genome" of the entire biosphere.
I cannot prove that there are no mutations that add information. I cannot even exhibit all known mutations to show that none of them adds information. Indeed, in principle, there could be such a mutation, but it would be improbable. The best I can do is to exhibit some well-known mutations that have been said to demonstrate neo-Darwinian evolution and show that they do not add information. I shall pick two examples of mutations. The first is perhaps the most well known among nonbiologists. The second, although not well known outside professional circles, is, I think, the most dramatic of the mutations that appear to lead to evolution.
My first example is a single nucleotide substitution in the DNA of a bacterium giving it immunity to streptomycin. Most cases of acquisition of drug resistance in bacteria involve the transfer of whole genes from other microorganisms that already have the resistance. Although the acquisition of a new gene does add information to the bacterial genome, the information is not new to the biosphere. It has already existed in other organisms. This is not the case, however, when a single copying error in the DNA leads to drug resistance. Let us take a look at how a random mutation leads to the evolution of streptomycin resistance.
Bacteria normally sensitive to streptomycin can undergo a random change of one nucleotide, granting them resistance to the drug. Is this an example of new information added to the genome? Can this mutation be a prototype of the mutations called for by the neo-Darwinian theory? Can it be of the kind that produce a long sequence of evolutionary steps that together add large amounts of information to the genome? Do we have here an example of neo-Darwinian evolution in action? Does this example contradict the theoretical consideration outlined above?
The mechanism of how streptomycin stops bacterial growth has been known for some time. Mycin molecules attach to a matching site on the bacterial ribosome, preventing the correct assembly of amino acids into protein. The mycin molecule fits into the matching site on the ribosome like a key fitting into a lock, as shown schematically in Figure 1 . Because protein is incorrectly made, the cell cannot grow or replicate. Mammalian ribosomes do not have the matching site for the mycin molecule so, while the drug affects the bacterium, it does not affect its mammalian host. That is why mycin drugs are useful antibiotics.
A point mutation in the right place grants the bacterium resistance by losing information. Figure 2 shows schematically how a change in the matching site on the ribosome can prevent the mycin molecule from fitting onto the ribosome and interfering with its operation. The change makes the bacterium resistant to the drug.
As you can see from Figure 2, the change could be in any one of several places on the matching site to make the bacterium resistant. Any one of several changes in the attachment site on the ribosomal protein is enough to spoil its match with the mycin. That means that a change in any one of several DNA nucleotides in the corresponding gene can grant resistance. Indeed, several different mutations in bacteria have been found to result in streptomycin resistance. We see then that the mutation reduces the specificity of the ribosome protein, and that is a loss of genetic information. This loss of information leads to a loss of sensitivity to the drug, resulting in resistance. Since the information loss is in the gene, the effect is heritable, and a whole strain of resistant bacteria can arise from the mutant bacterium.
Although such a mutation has high selective value in the presence of streptomycin, the mutation decreases rather than increases the genetic information. It therefore cannot be typical of mutations that neo-Darwinian theory requires for macroevolution. The steps required by the neo-Darwinian theory must, on average, add information. Even though the bacterium acquires resistance to the drug, it acquires resistance not by gaining information, but by losing it. Rather than say that the bacterium gained resistance, we would more correctly say that it lost sensitivity. In genetic-information content, the mutation is a loss rather than a gain. The loss of information is, moreover, manifest in the mutant's loss of viability. In the absence of streptomycin the mutants are less viable than the wild type.
My second example is the most striking illustration I know of random mutations granting selective value to an organism. It is actually a series of three mutations in the soil bacterium Aerobacter aerogenes. Experiments with these bacteria have shown what seemed to some investigators to be an example of the basic processes of evolution -- namely, the evolution of new enzymes. Bacteria grown in culture have shown they can learn to live and grow on new substances that they originally could not use. Several experiments of this kind have been reported.
The experimenters tried to see if the bacteria could evolve an enzyme that would metabolize a nonnatural sugar, similar to their natural nutrients, but on which their repertoire of enzymes would not work. They put the bacteria under strong selection pressure. They denied them their normal pentose-sugar nutrients, ribitol or D-arabitol, and tried them instead on several artificial pentoses. When they gave them xylitol, they found that, although the wild-type bacteria could not grow, mutants appeared that could. The experimenters extracted these mutants, denoted them as X1, and grew cultures from them. They found that the X1 strain grew on xylitol, but its rate of growth was only one ninth that of the wild type on ribitol.
They isolated the X1 strain and continued it on xylitol. A new mutant appeared within the culture that could grow even faster on xylitol. The experimenters extracted the second mutant, cultured it, and named the resulting strain X2. The growth rate of X2 on xylitol was nearly 2.5 times that of X1, but still less than the rate of the wild type on ribitol.
They isolated the X2 strain from the X1 and continued it on xylitol. A third mutant appeared that grew still faster on xylitol than did the X2. They extracted the new mutant, cultured it, and called the resulting strain X3. They found that the X3 grew on xylitol about twice as fast as did X2; but its rate of growth on xylitol was still not much more than half that of the wild type on ribitol. The three mutations were all found to be single-nucleotide substitutions, and there is every indication that they were random.
These experiments show that bacteria can sometimes find other ways of getting what they need when their normal nutrients are denied them. Moreover, they did it through random single-nucleotide changes. These experiments surely looked like neo-Darwinian evolution in action. The experiments appeared to show bacteria evolving through a series of three small steps. Can this short series of steps be part of a potentially long chain of steps leading to cumulative selection? Can these three steps, performed in a few months under artificial selection, serve as a model for long series of millions of steps over geological times under natural selection that might lead to macroevolution? Could these steps show the sort of evolution that primitive bacteria might have undergone? Could this be how bacteria developed their enzymes for the first time?
If we examine these experiments in detail, we see no new information entering the genome. Each of the three mutations made a gene less specific and lost information. Therefore, none of them can serve as a prototype for the small steps that neo-Darwinian theory says lead to macroevolution. None of the above experiments show the evolution of enzymes -- enzymes cannot be built by mutations such as these.
The wild type of Aerobacter aerogenes normally feeds on ribitol. The cell takes in ribitol from the outside and breaks it down in a series of steps, using a special enzyme for each step. The first of these enzymes is ribitol dehydrogenase (RDH).
Ribitol is a pentose sugar normally found in the soil. Xylitol, on the other hand, is a pentose sugar not found in nature, but its structure is similar to ribitol. Xylitol and ribitol are made up of the same atoms in almost the same arrangement. The difference between them is slight, yet the cell's RDH enzyme is specific to ribitol and discriminates against xylitol as well as other substrates. Figure 3 shows the structures of ribitol, xylitol, and another nonnatural sugar residue, L-arabitol (which I shall get to shortly). The figure does not show the three-dimensional arrangement of the atoms, but it does give an idea of how small the differences are between these sugars. Because the two sugars are so much alike, the same RDH enzyme that works on ribitol works also on xylitol, but with less activity. RDH hydrolyzes xylitol to make the same product it makes from ribitol. After this one step, all other steps in the metabolism of ribitol and xylitol are identical and are effected by the same enzymes. But, because the RDH is highly specific to ribitol, it works but poorly on xylitol.
The genes responsible for the metabolism of ribitol are turned ON only when ribitol is present. The parts of this control system relevant to our present discussion are shown in Figure 4. The gene, denoted in the figure by Y and which encodes RDH, is normally repressed, and therefore the RDH enzyme is not normally made in the cell. The presence of ribitol will induce gene Y to turn ON, leading to the synthesis of RDH. Moreover, molecules cannot easily enter a cell unless they are brought in by a special permease enzyme in the cell wall. The cell is selective about what it brings in from the outside. The permease enzyme is also not normally found in the cell until it is synthesized by the gene denoted by Z in the figure. Gene Z is normally repressed, and therefore the permease is not present. The presence of ribitol will induce gene Z to turn ON to transcribe the permease.
In summary, there are three problems that prevent the wild-type cell from using xylitol. They are:
1. Although RDH has a small activity on xylitol, that activity is much lower than it is on ribitol.
2. Since ribitol is absent, gene Y will not turn ON and RDH will not be synthesized.
3. There is no permease enzyme to bring xylitol into the cell.
The X1 mutant partially overcame the above problems through a point mutation in the gene that regulates the synthesis of RDH. This regulatory gene encodes the protein that represses RDH transcription. The mutation, whose point of effect is shown labeled (1) in Figure 4, did not change the RDH molecule itself. What it did was to disable the repressor protein. As a result, there was no repressive control and RDH was synthesized constitutively. The gene transcribed RDH without having to be induced, and it did so at its maximum rate. RDH was made in such abundance that, in spite of its low activity on xylitol, it converted enough xylitol to allow the cell to function. The cell could function because:
1. Since the mutation disabled the repression of the gene transcription, the cell synthesized the RDH constitutively.
2. Unrepressed, the gene synthesized the RDH enzyme at its maximum rate. The large amount of RDH that was made helped compensate for its low activity on xylitol.
3. Although the cell has no transport system to admit xylitol, a small amount does enter by diffusion.
The X1 strain did not have a perfect solution to the three problems. It therefore grew much more slowly on xylitol than does the wild type on ribitol. Nevertheless, X1 can grow on xylitol alone, and the wild type cannot. But the benefit of the mutation came through a loss of information. Note that the mutation that destroyed the activity of the repressor could have been one of several mutations. It did not have to be a particular one: it was not specific.
The second step in the chain of three single-nucleotide substitutions converted the X1 strain into X2. This mutation changed the enzyme itself and raised its activity on xylitol. The point of effect of this mutation is shown as (2) in Figure 4. Because of the higher activity of the enzyme, the growth rate of X2 on xylitol was about 2.5 times that of X1. Because the mutation made the enzyme more active on xylitol, one might think the enzyme became more specific, and that genetic information is increased. But it turns out that the mutation leading to the X2 strain is just another example of a mutation making the enzyme less specific. Brian Hartley and his group at Imperial College in London studied this enzyme. They compared its activity in the X2 mutant with that of the wild-type enzyme, and they measured the activity of the two enzymes on ribitol, xylitol, and L-arabitol, another unnatural substrate. They found that, compared to the wild type, the mutant enzyme was less active on ribitol, more active on xylitol, and more active on L-arabitol.
Figure 5 presents the Hartley group's results in graphic form, showing a comparison of the reaction rates of the two forms of the enzyme for the three substrates. The enzyme in the wild type and the X1 strain is denoted in the figure by A; the enzyme in the X2 strain is denoted by B. The vertical scale shows the relative rate of catalysis. Note that the mutation transforming the X1 strain into X2 broadened the range of substrates that the enzyme could catalyze. The enzyme of the wild type (A) has a reaction-rate curve higher and narrower than that of the mutated type (B). An enzyme that is very specific would show a high and sharp plot of reaction rate. One that is less specific would be lower and broader. Figure 5 shows that the reaction rate of B is less specific than that of A. The new enzyme could accept a wider range of molecules as substrates than the old one; thus the mutation made the enzyme less specific, not more, and reduced the information in the genome.
One might have thought that if a mutation causes an enzyme's activity to increase on a particular substrate, it must be because the enzyme has become more specific to that substrate; but here we see that this is not necessarily true. If an enzyme were really to become more specific to one particular substrate, it should become not only more active on that substrate, but it should become less active on all other substrates.
Figure 5 shows the wild-type enzyme's activity (A) with a high sharp plot, indicating high specificity, and the mutant enzyme's activity (B) with a lower and broader plot, indicating a lower specificity.
The specificity of an enzyme to its substrate is no less important to the cell than its level of activity. An enzyme that accepts any molecule as its substrate can be harmful. For an enzyme to be useful to the cell, it must limit its activity to its proper substrate.
The mutated enzyme of X2 also turns out to be less stable than that of the wild type. Typically, when an enzyme loses information, its function is degraded. The mutation leading to the X2 strain is a point mutation and is indeed an example of a small random change; it is an example of microevolution, but it cannot be typical of a step in macroevolution. The typical step must gain some information. The steps of macroevolution must, on average, add information to the genome.
The third mutation, which converted strain X2 into X3, improved the cell's ability to metabolize xylitol. This mutation was a single-nucleotide substitution in a regulatory gene controlling the synthesis of a permease enzyme for D-arabitol. The point of effect of this mutation is shown as (3) in Figure 6. Although the permease normally functions to transport D-arabitol into the cell, it turns out also to be able to transport xylitol. Normally the transport enzyme is not synthesized unless its gene is induced by the presence of D-arabitol. In the absence of D-arabitol a repressor protein keeps the gene OFF. So, even though the D-arabitol transport enzyme can work on xylitol, it is not normally present in the cell in the absence of D-arabitol. The mutation leading to strain X3 disabled the gene encoding a repressor protein that normally represses the gene encoding the permease. As a result the permease was synthesized constitutively. It was synthesized at the maximum rate and in large amounts without regulation. There was no need for its induction by D-arabitol.
Xylitol then gets a free ride into the cell on the transport enzyme intended for D-arabitol. Much more xylitol could then enter the X3 cell than could enter the X1 or X2 cell. Therefore, X3 could grow on xylitol better than X2 could. As with the X1 and X2 mutations before it, the mutation leading to X3 also reduced the specificity of an enzyme and therefore caused a loss of information.
There Are Nonrandom Mutations That Can Lead To Evolution
I have indicated that it is highly improbable for random variation in the form of DNA replication errors to lead to any sizable amount of information getting into the global genome of life. I have also declared that I know of no example of mutations that have been alleged to add such information. In each case, even though the mutation is beneficial to the organism under special circumstances, I have shown that it actually leads to a loss of information from the genome. The important feature of these examples is that even though they can offer the organism a selective advantage, they cannot serve as prototypes of the mutations called for by the neo-Darwinian theory.
But there is a large and ever-growing body of evidence that heritable changes do occur in living organisms that adapt them to their environments, and that these changes do not stem from simple errors in DNA replication. In the last decade and a half, we have seen mounting evidence on the molecular level of large genomic changes that confer selective advantage and occur just when they are needed. These observations have so far been made only on bacteria. Directed mutations are also known to occur in plants, and there is evidence that they occur in animals as well. These mutations are not random, but are apparently induced by the environment. They are not merely errors in DNA replication, but seem to be mutations of a totally different kind.
In 1982 Barry Hall reported on an experiment in which he prepared a strain of E. coli bacteria lacking the beta-galactosidase gene lacZ, which normally hydrolyzes lactose. When these bacteria grew and multiplied on another nutrient, but in the presence of lactose, they gained the ability to metabolize lactose, an ability that proved to be heritable. The gained ability was found to be due to the presence of a new gene. The new gene encodes a new enzyme that can perform the function of the beta-galactosidase, enabling the mutant bacteria to metabolize lactose. The gene was present all the time, but in a dormant state. It was turned ON by two mutations that occur in the presence of lactose and do not appear in its absence. Hall declared that the "normal function" of this gene is unknown, and he called it a "cryptic" gene.
Neither of these two mutations alone gives the bacterium any advantage, so there could not have been any selection for them separately. For the cryptic gene to become active, both mutations have to occur. In the absence of lactose, these two mutations are independent. They can occur together only by chance, and will do so with a probability of only about 10⁻¹⁸ per replication. If they occur at random and independently, the expected waiting time for one of these double mutations to occur in Hall's population would be about 100,000 years. But in the presence of lactose, he detected about 40 of them in just a few days! One can conclude that the lactose in the environment was inducing these mutations.
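As a rough illustration of the contrast the author is drawing, here is a small Python sketch; the text gives only the 10⁻¹⁸ per-replication probability, so the population size and the number of generations in "a few days" below are stated assumptions, not figures from Hall's paper.

    p_double = 1e-18          # per-replication probability quoted in the text
    cells = 1e9               # assumed number of cells in the culture
    generations = 50          # assumed generations over a few days

    expected_under_randomness = p_double * cells * generations
    print(expected_under_randomness)   # about 5e-8 expected double mutants, versus roughly 40 observed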
In 1981 Ann Reynolds and her coworkers reported experimenting with E. coli, which cannot normally metabolize salicin because they lack an enzyme to catalyze the first step. They found that a mutation involving a DNA rearrangement occurs in the presence of salicin and turns ON what they called a "cryptic" gene, permitting the bacterium to metabolize salicin.
Hall later also grew bacteria in the presence of salicin. He found that Reynolds's cryptic gene encodes an enzyme that can catalyze the first step in the metabolic pathway of salicin. The cryptic gene is normally repressed by a regulatory gene, and it will become active if one of a few mutations occurs in the regulatory gene.
He used a strain of bacteria whose cryptic gene did not work. The cryptic gene did not work because there was an insertion sequence (IS), known as IS103, sitting upstream of the cryptic gene. The IS keeps the gene OFF because it shifts the coding frame and garbles transcription to mRNA.
For a bacterium of Hall's strain to metabolize salicin, two mutations had to occur. First, the sequence IS103 had to be precisely deleted. Then, a base substitution had to occur that would make the regulatory gene stop repressing the cryptic gene. He tried to measure the spontaneous rate of the precise deletion of IS103, and found it too low to measure. He could do no more than assign an upper limit of 2 × 10⁻¹² to its probability. In the absence of salicin, the probability that the necessary two mutations would occur in a particular cell in a single replication is less than 10⁻¹⁹. That they occurred many orders of magnitude more frequently in the presence of salicin indicates that salicin may be inducing the mutations.
Cairns and his coworkers described another experiment with bacteria. They used a Lac- strain of bacteria, so called because the bacteria had a defective lacZ gene. The strain could not synthesize beta-galactosidase, and therefore could not metabolize lactose. Cairns's team fed lactose to these bacteria and looked for the appearance of mutants that could metabolize it. They found such mutants, and from the statistics of their appearance they concluded that the mutations appeared only in the presence of the lactose. They wrote:
The cells may have mechanisms for choosing which mutations will occur... Bacteria apparently have an extensive armory of such 'cryptic' genes that can be called upon for the metabolism of unusual substrates. The mechanism of activation varies... E. coli turns out to have a cryptic gene that it can call upon to hydrolyze lactose if the usual gene for this purpose has been deleted. The activation of (this cryptic gene) requires at least two mutations... That such events ever occur seems almost unbelievable.
When these experiments were first reported, they were met with skepticism. Their results brought into question the status of the principle of the independence of mutations from the environment. There were many attempts to explain the phenomena as resulting from the same kind of random mutations called for by the neo-Darwinian theory. Although some have suggested that Cairns's results indicate the failure of the principle, others have offered explanations that leave the principle intact. Further experiments, however, have dispelled these notions. Recent studies have shown that, in the presence of lactose, adaptive mutations activating a dormant gene encoding an enzyme that will hydrolyze lactose in E. coli are different from mutations that occur in the absence of lactose.
In addition to these recent observations of nonrandom mutations in prokaryotes, there has long been evidence of nonrandom variation in eukaryotes, including plants and animals. Seventy-five years ago Victor Jollos experimented with Paramecium aurelia, and found an environmentally-induced variation that was heritable. When the environmental stimulus was removed, the variation persisted in subsequent generations. An interesting feature of this work is that the original state of the organism returned after 40 generations without the environmental stimulus.
Many other examples can be given of environmentally-induced variations in plants and animals observed over the past hundred years. Taken together, the evidence indicates that nonrandom variation, induced by the environment, may play an important role in evolution. These varied examples share a common feature: namely, that an adaptive variation can appear in large numbers in a population when it is needed. When adaptive nonrandom variation does occur, it is far more frequent than chance would allow, and it appears in a large fraction of the population.
On the one hand, the kind of evolution envisioned by Darwin and the neo-Darwinians cannot occur. Genetic information cannot be built up by random variation even with the directive force of natural selection. Yet we see that plants and animals can change when stimulated by environmental changes. We have even seen molecular details of such changes occurring in bacteria in response to environmental stimuli. Permit me to speculate on how the phenomena of environmentally-directed mutations observed in bacteria might be generalized and extended to multicelled organisms.
Living organisms respond to their environment on several levels. As Jacob and Monod have shown, the genetic control system senses the presence of an enzyme's substrate and turns ON the gene that encodes the enzyme. The cell's control system turns genes ON or OFF as they are needed, but makes no heritable change in the genome. This kind of control permits the organism to operate efficiently through specific short-term changes in the environment.
I have suggested that a straightforward extension of such controls, making their results heritable, can lead to changes in the long-term -- on an evolutionary time scale. Moreover, if the control is in the development process, even a small change of the "right" kind can lead to a large adaptive change in the phenotype. The "right" kind of change is unlikely to occur by chance. If the changes are random, the probability of a "right" change occurring is a function of the fraction of "right" changes among all possible ones. The number of "wrong" changes is so much greater than the number of "right" ones that a "right" change is unlikely to occur by chance even in large populations and over immense periods of time. But, if the genome had the built-in ability for an adaptive change to be triggered by an environmental cue, then chance would not be a factor -- the right adaptive change would be elicited when it is needed.
What physical mechanism can produce environmentally-induced heritable variations? How could large amounts of genetic information be generated quickly? Environmental influence on the variation does not have to imply that it generates a substantial amount of information in the DNA. Indeed, if the environment merely triggers the genome to switch between n potential states, it generates no more than log₂ n bits of information. I suggest that most of the information necessary for organisms to adapt to their environment is already present in the organism, and no mysterious or ill-defined mechanism need be invoked to account for the generation of the few bits of information needed to switch.
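To make the log₂ n bound concrete, here is a small sketch (the values of n are arbitrary examples):

import math
for n in (2, 4, 1024):
    print(n, math.log(n, 2))     # choosing among n preset states conveys at most 1, 2, or 10 bits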
As has been shown with the bacteria, the environmental trigger induces a change to an adaptive phenotype with no apparent role left to chance. There may, however, be some role here for chance. The environment may trigger not one, but a set, of genetic switches leading to a set of potential phenotypic outcomes. The trigger may be to one switch in some individuals, and to different switches in others. The resulting phenotype would then vary from one individual to another. Natural selection may then favor one of these types. The switch, triggered by the environment, may be seen as a coarse adjustment to adaptivity, and natural selection among the outcomes as a subsequent fine adjustment.
As with many biological phenomena, there is, I suggest, more than one mechanism that can lead to nonrandom adaptive variation. Nonrandom variations in the genome induced by the environment can be divided into two types, which I shall call Type I and Type II. Type I produces a change in the DNA base-pair sequence, and Type II produces only a change of state of the genome, but leaves its nucleotide sequence unchanged.
An environmental trigger that could produce a change in the genome was suggested about a decade ago by Wanner. Any change in the genome can be called a mutation, and all mutations have been conventionally assumed to be random and unrelated to the environment. Although many mutations apparently are random in the sense used here, there are many that are not. Single-base substitutions resulting from uncorrected random errors in DNA replication are indeed random. Their effects on the phenotype are independent of environmental influences, and they occur without regard for the organism's needs. They are simply errors.
As opposed to errors in the working of the complex cellular mechanism, there are mutations that are not errors, but that are directed by that cellular mechanism. They are called up when they are needed, and they are executed under tight cellular control. Genetic rearrangements, including insertions, deletions, amplifications, and inversions, have been observed to be under strict cellular control. An insertion sequence sometimes enters a gene and prevents it from transcribing its protein. The cell can also remove the sequence with perfect precision, allowing the gene to return to working order. Until recently, genetic rearrangements were thought to have no known functions and, therefore, were considered to be random. The experiments of Hall, Cairns, and others, however, have shown that through insertions, deletions, and point mutations, a bacterium can turn ON a cryptic gene, permitting the cell to metabolize a carbon source that was otherwise denied it.
A transposon is a large mobile genetic element that can transpose itself from one place to another in the genome. It carries within itself genes encoding some of the enzymes needed for its transposition, and the cell provides several others. Some transposons serve their host bacterium by carrying genes granting resistance to antibiotics. The jumping of a transposon is one of several kinds of genetic rearrangements. Deletion, amplification, and inversion are others.
Three adaptive deletions have been found in prokaryotes that are triggered by environmental cues. Two of them are found in the cyanobacterium Anabaena. If these cells are deprived of a nitrogen source, a DNA sequence is deleted from the genome. Deletions of sections 11 kilobases (kb) long and 55 kb long have been found. These deletions lead to the cell becoming dormant, presumably in an effort to conserve nitrogen. An adaptive deletion of 42 kb has also been found to occur in the bacterium Bacillus subtilis. Environmentally-induced DNA changes have also been reported in plants. The mechanisms of these inductions, however, are as yet unknown.
Many important genetic rearrangements are known to be nonrandom. Some are nonrandom in time, and all are nonrandom in their position on the genome. Some occur when they are needed, and seem to be environmentally induced (as in the bacteria in Cairns's and in Hall's experiments). And some, which are always needed and therefore do not require environmental induction, occur at random times in just the right place on the genome where they can be effective (as in Salmonella and in Trypanosoma). These nonrandom mutations are executed with precision and are under elaborate cellular control. Although genetic rearrangements have until now been thought of as random, to dismiss them as nothing more than chance events in an attempt to preserve the neo-Darwinian theory would be to ignore what nature is telling us. Various forms of genetic rearrangement help make up the set of Type I environmentally-induced adaptive variations.
Type II variations are different. They are environmentally-induced changes in only the state of the genome and they do not change the DNA sequence. Genetic states that dictate the current metabolic activity of the cell through enzyme synthesis are usually not heritable. A gene that is normally OFF needs a signal to turn it ON. It will be ON only as long as the turn-ON signal is present. Take that signal away, and the gene turns OFF. Similarly with a gene that is normally ON. The state of a gene or an operon depends on the presence of inducing or repressing signals, and is usually not heritable.
But some altered genetic states are known to be heritable. The most outstanding examples of heritable genetic states are the programmed changes that occur in embryonic development. As cells differentiate, genes are selectively turned ON and OFF, and their ON/OFF states are passed from mother to daughter cell: the state is heritable. Not every method of turning genes ON and OFF lends itself to being heritable. How cells during development pass on their genetic state to daughter cells is not yet well understood.
One way the cell has of establishing a genetic state and making it heritable is to attach a methyl group to one of the carbon atoms of the cytosine bases in the DNA. Methylation serves to keep the gene OFF by preventing regulatory proteins from attaching. When the cell wants to turn ON selected genes, it first has to remove the methyl groups, and then apply the regulatory protein. Methylation has been suggested as one of the ways in which the organism might control gene activity during development. The pattern of methylation is made heritable through an enzyme that acts during DNA replication. The enzyme copies the methylation pattern from the template strand of DNA onto the daughter strand as it is being constructed.
Another way of making a genetic state heritable is to have the gene (or operon) turn ON and OFF with a locking trigger. A trigger that turns ON an operon will lock it ON, if the operon's activity itself leads to the synthesis of a control protein that keeps it ON. Once such an operon is turned ON, it will remain ON even after the trigger is removed. Such a state can be heritable.
So far I have indicated only how an environmental cue can enter an exposed cell and cause a heritable effect on its genome. How could an environmental cue produce a heritable change in a plant or animal? Having the environment cause a heritable effect on an exposed cell is one thing. But creating a heritable effect on a multicelled organism, where the reproductive cells are isolated from the somatic cells, seems to be something entirely different.
Environmentally-induced changes seem more difficult to achieve in plants or animals than in single cells. For one thing, in multicelled organisms the environmental cue has to penetrate to the reproductive cells, or gametes. Second, the environmental stimuli that have been seen to trigger bacteria are relatively simple, while those that would trigger plants or animals would have to be complex. Can complex environmental cues get into the organism? We know they can because animals, for example, adjust their birth rate to match available resources. Plants are known to adjust their seed production to the available space. We know that complex stimuli can enter through the sense organs and be processed by the brain to produce appropriate physiological or psychological states such as stress. These states stimulate the production of hormones, which travel throughout the bloodstream to reach their targets in any part of the body. The target sites of some of these hormones could well be the reproductive cells, and there they could make a heritable change. These suggestions must remain speculative for the present until more is known of how environmental cues act. But herein lies a possible mechanism for observed evolution.
Nonrandom Evolution And Torah Hashkafa
I have here described the important role neo-Darwinian theory assigns to random variation. I have also noted that random mutations are unable to play that role, and that all the evidence points to the absence of a major role for randomness in evolution. I have suggested, instead, that any significant evolutionary change that occurs does so principally through nonrandom variations induced by environmental cues. A role has been left open for randomness and for natural selection, but this role is much attenuated from that assigned to randomness by neo-Darwinian theory. I suggest that changes in the environment drive the evolution of living organisms by triggering a switch to alternative genetic programs. The organisms have the built-in capability to adapt to a wide variety of environments.
How does this suggestion fit with Torah hashkafa? Is there room for such evolution, or for any evolution at all, within Torah? It turns out that the suggestion made here is derivable from Talmudic sources. Rabbi David Luria (RaDaL) indeed made such a derivation in his commentary to the Midrash Pirkei D'Rebbi Eliezer. From Talmudic and Midrashic sources he derived the necessity of animals to evolve. As Rabbi Luria interpreted the Midrash, there were 365 basic species (minim) of beasts created, and the same number of birds. All the others were derived from these. As each basic species moved into a different environment and found itself a new niche, it changed. The changes were dictated by the conditions under which it lived, including the food it ate. Rabbi Luria's conclusion is very much like the suggestion presented here.
The basic species, according to Rabbi Luria, can transform into new species as they are influenced by the conditions under which they live. The Sages of the Talmud (Tanaim, Amoraim) as well as the medieval commentaries (Rishonim) were aware of the phenomenon of domestic animals changing their characteristics in a heritable way when they become feral and vice versa. There is discussion in the Talmud, and in subsequent rabbinical literature, on whether the domestic ox (shor) and the wild ox (shor habar) are of the same or different species. One opinion is that the wild ox was originally a domestic ox that became feral, and therefore they are the same species (min). Another opinion is that the two animals belong to two entirely different species (minim).
It is well known that feral animals change on domestication. Darwin noted that domestic cattle undergo changes when they become feral. Some of the pigs given to the Maoris by DeSurville in 1769 and by Captain Cook in 1773 became feral. They were observed to have become very wild, cunning, and speedy. They were very different from the domestic pigs from which they descended. These wild pigs were indistinguishable from wild pigs elsewhere. It has also been observed that whenever wild pigs become domesticated the tusks of the boars become very much reduced, they lose their bristles, and the young are no longer striped. These changes are hereditary but the animals gradually revert back if they again become feral. Moreover, the changes from domestic to feral are always the same. They are not random.
Lurian evolution is the change of an organism under environmental influence. It is not the neo-Darwinian kind, but rather the kind I have just described. Conventional wisdom in biology states that characteristics acquired by an organism cannot be inherited. The environment may change an animal, but these changes have conventionally been held not to be heritable.
According to the central dogma of molecular genetics, the environment cannot cause an organized change in the genome. There is no way an outside influence can alter the genetic program in such a way as to make the organism adapt to that influence. There would have to be some way of reversing the genetic coding, and that seems difficult to do. One biologist has recently speculated how such an influence might be possible, but no one else seems to accept his speculation. The heritable variations appearing in living creatures are conventionally thought to be random in the sense that their effects on the phenotype are independent of the environment.
If living organisms had within their genes not just one development program, but several alternative programs that could be called up by a cue from the environment, then we could see how the environment can influence the genes. The environment can change the structure of the genes, or it can change their state. It can turn some genes ON and others OFF in such a way that the new state is heritable.
The randomness of the variations claimed by neo-Darwinian theory, and which is essential to it, stands in major contradiction to Torah hashkafa. The neo-Darwinians need the randomness to arrive at a "natural" explanation for the development of life from a simple beginning. Had it worked, they would have reduced the development of life to a simple natural law. Much like the way the law of gravity accounts for a falling rock, so neo-Darwinian theory would have accounted for the development of life, from a simple unicellular organism to the great complexity of life we see today. Had they done that, they would have made the appearance of man a mere chance event.
Had the neo-Darwinians succeeded in establishing their case, the Torah believer would have had two choices: (1) he could simply reject the theory, believing that no matter how good the logic seems to be, there must be something wrong with it; or (2) he could engage in apologetics to show how the Creator and creation could be accommodated by smuggling Divine control into the randomness.
The first of these choices has been the one favored over the centuries. It is a robust choice, and has proved its merit as one scientific theory after another has fallen by the wayside. The second choice would be unsatisfying to any but the most committed accommodationist. If the theory works well, then a creation explanation is superfluous. But the Torah believer is not faced with this dilemma. I have shown that the neo-Darwinians have not successfully established their case, and randomness therefore cannot play the important role they have assigned it.
Evolutionists will not easily accept my solution. The suggestion that living organisms have within themselves the ability to be switched ON or OFF by a cue from the environment may be satisfactory as an explanation of special cases of evolution, but evolutionists will not accept it as the explanation of evolution in general. They will ask me to explain, on scientific rather than supernatural grounds, how such a built-in capability arose. How could such a general capability have arisen in the development of life from a simple beginning? How could my suggestion fit the neo-Darwinian agenda?
I reject the neo-Darwinian agenda. I do not attempt to explain on a scientific basis how the built-in capabilities arose. I do not attempt to explain the development of life from a simple beginning; nor do I attempt to explain the buildup of complexity from a simple beginning to the forms of life we find today. Such development and buildup have never been observed, and there is thus no imperative for a theory to account for them. The inability of my hypothesis to account for the spontaneous origin and development of life, events that have never been observed, is not a valid criticism of it. If the hypothesis can account for observed adaptations, it will be doing all that should be asked of it.
To read more about these topics check out the book Not By Chance
| http://rbsp.info/rbs/RbS/CLONE/VGS/spetner_evol1.html | 13
56 | Chapter 15 Classes and objects
15.1 User-defined types
In mathematical notation, points are often written in parentheses with a comma separating the coordinates. For example, (0,0) represents the origin, and (x,y) represents the point x units to the right and y units up from the origin.
There are several ways we might represent points in Python: we could store the coordinates separately in two variables, x and y; we could store them as elements of a list or tuple; or we could create a new type to represent points as objects.
Creating a new type is (a little) more complicated than the other options, but it has advantages that will be apparent soon.
class Point(object):
    """Represents a point in 2-D space."""
Defining a class named Point creates a class object.
>>> print Point
<class '__main__.Point'>
The class object is like a factory for creating objects. To create a Point, you call Point as if it were a function.
>>> blank = Point()
>>> print blank
<__main__.Point instance at 0xb7e9d3ac>
You can assign values to an instance using dot notation:
>>> blank.x = 3.0
>>> blank.y = 4.0
This syntax is similar to the syntax for selecting a variable from a module, such as math.pi or string.whitespace. In this case, though, we are assigning values to named elements of an object. These elements are called attributes.
As a noun, “AT-trib-ute” is pronounced with emphasis on the first syllable, as opposed to “a-TRIB-ute,” which is a verb.
The following diagram shows the result of these assignments. A state diagram that shows an object and its attributes is called an object diagram; see Figure 15.1.
The variable blank refers to a Point object, which contains two attributes. Each attribute refers to a floating-point number.
You can read the value of an attribute using the same syntax:
>>> print blank.y
4.0
>>> x = blank.x
>>> print x
3.0
The expression blank.x means, “Go to the object blank refers to and get the value of x.” In this case, we assign that value to a variable named x. There is no conflict between the variable x and the attribute x.
You can use dot notation as part of any expression. For example:
>>> print '(%g, %g)' % (blank.x, blank.y)
(3.0, 4.0)
>>> distance = math.sqrt(blank.x**2 + blank.y**2)
>>> print distance
5.0
def print_point(p):
    print '(%g, %g)' % (p.x, p.y)
>>> print_point(blank)
(3.0, 4.0)
Write a function called
Sometimes it is obvious what the attributes of an object should be, but other times you have to make decisions. For example, imagine you are designing a class to represent rectangles. What attributes would you use to specify the location and size of a rectangle? You can ignore angle; to keep things simple, assume that the rectangle is either vertical or horizontal.
There are at least two possibilities: you could specify one corner of the rectangle (or the center), the width, and the height; or you could specify two opposing corners.
Here is the class definition:
class Rectangle(object):
    """Represents a rectangle.

    attributes: width, height, corner.
    """
The docstring lists the attributes: width and height are numbers; corner is a Point object that specifies the lower-left corner.
To represent a rectangle, you have to instantiate a Rectangle object and assign values to the attributes:
box = Rectangle()
box.width = 100.0
box.height = 200.0
box.corner = Point()
box.corner.x = 0.0
box.corner.y = 0.0
The expression box.corner.x means, “Go to the object box refers to and select the attribute named corner; then go to that object and select the attribute named x.”
Figure 15.2 shows the state of this object. An object that is an attribute of another object is embedded.
15.4 Instances as return values
Functions can return instances. For example, find_center takes a Rectangle as an argument and returns a Point that contains the coordinates of the center of the Rectangle:
def find_center(rect):
    p = Point()
    p.x = rect.corner.x + rect.width/2.0
    p.y = rect.corner.y + rect.height/2.0
    return p
Here is an example that passes box as an argument and assigns the resulting Point to center:
>>> center = find_center(box)
>>> print_point(center)
(50.0, 100.0)
15.5 Objects are mutable
You can change the state of an object by making an assignment to one of its attributes. For example, to change the size of a rectangle without changing its position, you can modify the values of width and height:
box.width = box.width + 50
box.height = box.height + 100
You can also write functions that modify objects. For example,
def grow_rectangle(rect, dwidth, dheight):
    rect.width += dwidth
    rect.height += dheight
Here is an example that demonstrates the effect:
>>> print box.width
100.0
>>> print box.height
200.0
>>> grow_rectangle(box, 50, 100)
>>> print box.width
150.0
>>> print box.height
300.0
Inside the function, rect is an alias for box, so if the function modifies rect, box changes.
Write a function named
Aliasing can make a program difficult to read because changes in one place might have unexpected effects in another place. It is hard to keep track of all the variables that might refer to a given object.
Copying an object is often an alternative to aliasing. The copy module contains a function called copy that can duplicate any object:
>>> p1 = Point()
>>> p1.x = 3.0
>>> p1.y = 4.0
>>> import copy
>>> p2 = copy.copy(p1)
p1 and p2 contain the same data, but they are not the same Point.
>>> print_point(p1)
(3.0, 4.0)
>>> print_point(p2)
(3.0, 4.0)
>>> p1 is p2
False
>>> p1 == p2
False
The is operator indicates that p1 and p2 are not the same object, which is what we expected. But you might have expected == to yield True because these points contain the same data. In that case, you will be disappointed to learn that for instances, the default behavior of the == operator is the same as the is operator; it checks object identity, not object equivalence. This behavior can be changed—we’ll see how later.
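As a minimal sketch of one common way to change that default (an illustration only, not necessarily the approach the book takes later): defining __eq__ makes == compare the attribute values rather than object identity.

class Point(object):
    """Represents a point in 2-D space."""
    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
    def __ne__(self, other):             # Python 2 does not derive != from ==
        return not self.__eq__(other)

p1 = Point(); p1.x = 3.0; p1.y = 4.0
p2 = Point(); p2.x = 3.0; p2.y = 4.0
print(p1 == p2, p1 is p2)                # True False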
If you use copy.copy to duplicate a Rectangle, you will find that it copies the Rectangle object but not the embedded Point:

>>> box2 = copy.copy(box)
>>> box2 is box
False
>>> box2.corner is box.corner
True
Figure 15.3 shows what the object diagram looks like. This operation is called a shallow copy because it copies the object and any references it contains, but not the embedded objects.
For most applications, this is not what you want. In this example, both Rectangles share the same embedded corner Point, so modifying the corner through one of them would also change the other.
Fortunately, the copy module contains a method named deepcopy that copies not only the object but also the objects it refers to, and the objects they refer to, and so on. You will not be surprised to learn that this operation is called a deep copy.
>>> box3 = copy.deepcopy(box)
>>> box3 is box
False
>>> box3.corner is box.corner
False
box3 and box are completely separate objects.
Write a version of
If you try to access an attribute that doesn't exist, you get an AttributeError:

>>> p = Point()
>>> print p.z
AttributeError: Point instance has no attribute 'z'
If you are not sure what type an object is, you can ask:

>>> type(p)
<type '__main__.Point'>
If you are not sure whether an object has a particular attribute, you can use the built-in function hasattr:

>>> hasattr(p, 'x')
True
>>> hasattr(p, 'z')
False
The first argument can be any object; the second argument is a string that contains the name of the attribute.
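A short, hedged aside (standard Python, though the text only shows hasattr here): getattr with a default value, or catching AttributeError, are two other ways to cope with attributes that may be missing.

class Point(object):
    """Represents a point in 2-D space."""

p = Point()
p.x = 3.0

z = getattr(p, 'z', 0.0)             # 0.0, because p has no attribute z
try:
    z = p.z
except AttributeError:               # the same situation handled with an exception
    z = 0.0
print(hasattr(p, 'x'), hasattr(p, 'z'), z)    # True False 0.0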
Swampy (see Chapter 4) provides a module named World, which defines a user-defined type also called World. You can import it like this:
from swampy.World import World
Or, depending on how you installed Swampy, like this:
from World import World
The following code creates a World object and calls the mainloop method, which waits for the user.
world = World()
world.mainloop()
A window should appear with a title bar and an empty square.
We will use this window to draw Points, Rectangles and other shapes. Add the following lines before calling mainloop:
canvas = world.ca(width=500, height=500, background='white')
bbox = [[-150,-100], [150, 100]]
canvas.rectangle(bbox, outline='black', width=2, fill='green4')
You should see a green rectangle with a black outline. The first line creates a Canvas, which appears in the window as a white square. The Canvas object provides methods like rectangle for drawing various shapes.
bbox is a list of lists that represents the “bounding box” of the rectangle. The first pair of coordinates is the lower-left corner of the rectangle; the second pair is the upper-right corner.
You can draw a circle like this:
canvas.circle([-25,0], 70, outline=None, fill='red')
The first parameter is the coordinate pair for the center of the circle; the second parameter is the radius.
If you add this line to the program, the result should resemble the national flag of Bangladesh (see http://en.wikipedia.org/wiki/Gallery_of_sovereign-state_flags).
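Putting the pieces from this section together (assuming Swampy is installed and imports as shown earlier; drawing the circle after the rectangle so it appears on top), the whole flag can be drawn in one short script:

from swampy.World import World

world = World()
canvas = world.ca(width=500, height=500, background='white')
bbox = [[-150, -100], [150, 100]]
canvas.rectangle(bbox, outline='black', width=2, fill='green4')
canvas.circle([-25, 0], 70, outline=None, fill='red')
world.mainloop()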
I have written a small program that lists the available colors; you can download it from http://thinkpython.com/code/color_list.py.
| http://www.greenteapress.com/thinkpython/html/thinkpython016.html | 13
55 | Ray tracing (graphics)
In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as in still images and film and television visual effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration).
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in raytracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
Detailed description of ray tracing computer algorithm and its genesis
What happens in nature
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength colour in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
Ray casting algorithm
The first ray tracing algorithm used for rendering was presented by Arthur Appel in 1968. This algorithm has since been termed "ray casting". The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
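A minimal, self-contained sketch of the idea, under simplifying assumptions of my own (one sphere, an orthographic camera, Lambert shading, and the ray-casting assumption that the light is never blocked); it renders a crude ASCII image rather than real pixels:

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# One sphere, radius 1, centred 3 units in front of the image plane.
center, radius = (0.0, 0.0, 3.0), 1.0
light_dir = normalize((-1.0, 1.0, -1.0))         # direction towards the light
shades = '.:-=+*#%@'

rows = []
for j in range(20):
    row = ''
    for i in range(40):
        x, y = (i - 20) / 10.0, (10 - j) / 5.0   # pixel mapped to the image plane
        origin, d = (x, y, 0.0), (0.0, 0.0, 1.0) # one eye ray per pixel
        v = tuple(o - c for o, c in zip(origin, center))
        b = dot(v, d)
        disc = b * b - (dot(v, v) - radius * radius)
        if disc < 0:
            row += ' '                           # ray misses every object: background
        else:
            t = -b - math.sqrt(disc)             # nearest intersection along the ray
            hit = tuple(o + t * dc for o, dc in zip(origin, d))
            normal = normalize(tuple(h - c for h, c in zip(hit, center)))
            brightness = max(0.0, dot(normal, light_dir))   # assume the light is visible
            row += shades[int(brightness * (len(shades) - 1))]
    rows.append(row)
print('\n'.join(rows))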
Recursive ray tracing algorithm
The next important research breakthrough came from Turner Whitted in 1979. Previous algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Whitted continued the process. When a ray hits a surface, it can generate up to three new types of rays: reflection, refraction, and shadow. A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray could be entering or exiting a material. A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it. This recursive ray tracing added more realism to ray traced images.
Advantages over other rendering methods
Ray tracing's popularity stems from its basis in a realistic simulation of lighting over other rendering methods (such as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. Relatively simple to implement yet yielding impressive visual results, ray tracing often represents a first foray into graphics programming. The computational independence of each ray makes ray tracing amenable to parallelization.
A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.
Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented. Implementing the rendering equation gives true photorealism, as the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required.
The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays, but include additional techniques (photon mapping, path tracing), give far more accurate simulation of real-world lighting.
It is also possible to approximate the equation using ray casting in a different way than what is traditionally considered to be "ray tracing". For performance, rays can be clustered according to their direction, with rasterization hardware and depth peeling used to efficiently sum the rays.
Reversed direction of traversal of scene by the rays
The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards ray tracing to mean shooting rays from the lights and gathering the results. Therefore it is clearer to distinguish eye-based versus light-based ray tracing.
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.
Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points. The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.
To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions.
First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
As a demonstration of the principles involved in raytracing, let us consider how one would find the intersection between a ray and a sphere. In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is

$\lVert \mathbf{x} - \mathbf{c} \rVert^2 = r^2.$

Any point $\mathbf{x}$ on a ray starting from point $\mathbf{s}$ with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as

$\mathbf{x} = \mathbf{s} + t\,\mathbf{d},$

where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ (e.g. the position of a light source) and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:

$\lVert \mathbf{s} + t\,\mathbf{d} - \mathbf{c} \rVert^2 = r^2.$

Let $\mathbf{v} \equiv \mathbf{s} - \mathbf{c}$ for simplicity; then

$\lVert \mathbf{v} \rVert^2 + t^2 \lVert \mathbf{d} \rVert^2 + 2t\,(\mathbf{v} \cdot \mathbf{d}) = r^2.$

Knowing that $\mathbf{d}$ is a unit vector allows us this minor simplification:

$t^2 + 2t\,(\mathbf{v} \cdot \mathbf{d}) + \lVert \mathbf{v} \rVert^2 - r^2 = 0.$

This quadratic equation has solutions

$t = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^2 - (\lVert \mathbf{v} \rVert^2 - r^2)}.$

The two values of $t$ found by solving this equation are the ones such that $\mathbf{s} + t\,\mathbf{d}$ are the points where the ray intersects the sphere.

Any value of $t$ which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from $\mathbf{s}$ with opposite direction).

If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least a positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object on our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.

The normal to the sphere is simply

$\mathbf{n} = \dfrac{\mathbf{y} - \mathbf{c}}{\lVert \mathbf{y} - \mathbf{c} \rVert},$

where $\mathbf{y} = \mathbf{s} + t\,\mathbf{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf{d}$ with respect to $\mathbf{n}$, that is

$\mathbf{r} = \mathbf{d} - 2\,(\mathbf{n} \cdot \mathbf{d})\,\mathbf{n}.$

Thus the reflected ray has equation

$\mathbf{x} = \mathbf{y} + u\,\mathbf{r}.$
Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.
This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of raytracing, but this demonstrates an example of the algorithms used.
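A hedged translation of the algebra above into code (the helper names are mine; the formulas are the ones just derived):

import math

def dot(a, b):      return sum(x * y for x, y in zip(a, b))
def sub(a, b):      return tuple(x - y for x, y in zip(a, b))
def add(a, b):      return tuple(x + y for x, y in zip(a, b))
def scale(k, a):    return tuple(k * x for x in a)
def normalize(a):   return scale(1.0 / math.sqrt(dot(a, a)), a)

def intersect_sphere(s, d, c, r):
    """Smallest non-negative t with |s + t*d - c| = r, or None (d must be a unit vector)."""
    v = sub(s, c)
    b = dot(v, d)
    disc = b * b - (dot(v, v) - r * r)
    if disc < 0:
        return None                        # discriminant negative: the ray misses
    root = math.sqrt(disc)
    for t in (-b - root, -b + root):       # nearer solution first
        if t >= 0:
            return t
    return None                            # both intersections lie behind the origin

def reflect(d, n):
    """Reflection of d with respect to the unit normal n: d - 2(n.d)n."""
    return sub(d, scale(2.0 * dot(n, d), n))

# Example: a ray from the origin towards a unit sphere centred at (0, 0, 5).
s, d = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
c, r = (0.0, 0.0, 5.0), 1.0
t = intersect_sphere(s, d, c, r)           # 4.0
y = add(s, scale(t, d))                    # intersection point (0, 0, 4)
n = normalize(sub(y, c))                   # surface normal (0, 0, -1)
print(t, y, reflect(d, n))                 # the reflected direction is (0, 0, -1)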
Adaptive depth control
Adaptive depth control means that we stop generating reflected/transmitted rays when the computed intensity falls below a certain threshold. You must always set a certain maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this, the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.
Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 * 0.5 = 0.25, the third: 0.25 * 0.5 = 0.125, the fourth: 0.125 * 0.5 = 0.0625, the fifth: 0.0625 * 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as 1/D², which would also decrease the intensity contribution.
For a transmitted ray we could do something similar, but in that case the distance traveled through the object would cause an even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.
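A small, hedged sketch of that bookkeeping (grayscale intensities and a stub scene invented for the example; the weight-threshold logic is the only point being illustrated):

CUTOFF = 0.05        # stop recursing once further bounces can add less than 5 percent
MAX_DEPTH = 15       # absolute limit, as in the Hall & Greenberg figure quoted above

class MirrorWorld(object):
    """Stub scene: every ray hits a surface with 50% reflectivity (illustration only)."""
    def intersect(self, ray):
        return {'local': 0.2, 'kr': 0.5, 'reflected': ray}

def trace(ray, scene, weight=1.0, depth=0):
    hit = scene.intersect(ray)
    if hit is None:
        return 0.0
    intensity = hit['local']                         # direct illumination at the hit point
    kr = hit['kr']
    if depth < MAX_DEPTH and weight * kr > CUTOFF:   # adaptive depth control
        intensity += kr * trace(hit['reflected'], scene, weight * kr, depth + 1)
    return intensity

print(trace('any ray', MirrorWorld()))               # the recursion stops after a few bounces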
We enclose groups of objects in sets of hierarchical bounding volumes and first test for intersection with the bounding volume, and then only if there is an intersection, against the objects enclosed by the volume.
Bounding volumes should be easy to test for intersection, for example a sphere or box (slab). The best bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin then a sphere will enclose mainly empty space and a box is much better. Boxes are also easier for hierarchical bounding volumes.
Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and we would have a binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this.
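A hedged sketch of such a traversal (axis-aligned boxes, a slab test, and a hand-built two-leaf tree; the structure and names are mine, not taken from any particular renderer):

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does the ray enter the axis-aligned box at some t >= 0?"""
    tmin, tmax = float('-inf'), float('inf')
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return False               # parallel to this slab and outside it
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmax >= max(tmin, 0.0)

class Node(object):
    def __init__(self, box_min, box_max, children=(), objects=()):
        self.box_min, self.box_max = box_min, box_max
        self.children, self.objects = children, objects

def candidates(node, origin, direction):
    """Collect the leaf objects whose bounding volumes the ray enters."""
    if not ray_hits_box(origin, direction, node.box_min, node.box_max):
        return []                          # prune this whole subtree
    found = list(node.objects)
    for child in node.children:
        found += candidates(child, origin, direction)
    return found

# Tiny hand-built hierarchy with two leaves:
leaf_a = Node((0, 0, 0), (1, 1, 1), objects=['sphere A'])
leaf_b = Node((4, 4, 4), (5, 5, 5), objects=['sphere B'])
root = Node((0, 0, 0), (5, 5, 5), children=(leaf_a, leaf_b))
print(candidates(root, (-1.0, 0.5, 0.5), (1.0, 0.0, 0.0)))   # ['sphere A']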
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
- Subtrees should contain objects that are near each other and the further down the tree the closer should be the objects.
- The volume of each node should be minimal.
- The sum of the volumes of all bounding volumes should be minimal.
- Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove more potential objects than one farther down the tree.
- The time spent constructing the hierarchy should be much less than the time saved by using it.
In real time
The first implementation of a "real-time" ray-tracer was credited at the 2005 SIGGRAPH computer graphics conference as the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray-tracer is the first known implementation of a parallel network distributed ray-tracing system that achieved several frames per second in rendering performance. This performance was attained by means of the highly optimized yet platform independent LIBRT ray-tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines over a commodity network. BRL-CAD's ray-tracer, including REMRT/RT tools, continue to be available and developed today as Open source software.
Since then, there have been considerable efforts and research towards implementing ray tracing in real time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.
The OpenRT project includes a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterisation based approach for interactive 3D graphics. Ray tracing hardware, such as the experimental Ray Processing Unit developed at the Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be rendered via ray tracing without intensive resource usage.
On June 12, 2008 Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14-29 frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.
At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion. Nvidia has shipped over 350,000,000 OptiX capable GPUs as of April 2013. OptiX-based renderers are used in Adobe AfterEffects, Bunkspeed Shot, Autodesk Maya, 3ds max, and many other renderers.
Imagination Technologies offers a free API called OpenRL which accelerates tail recursive ray tracing-based rendering algorithms and, together with their proprietary ray tracing hardware, works with Autodesk Maya to provide what 3D World calls "real-time raytracing to the everyday artist".
- Beam tracing
- Cone tracing
- Distributed ray tracing
- Global illumination
- List of ray tracing software
- Parallel computing
- Specular reflection
- Appel A. (1968) Some techniques for shading machine rendering of solids. AFIPS Conference Proc. 32 pp.37-45
- Whitted T. (1979) An improved illumination model for shaded display. Proceedings of the 6th annual conference on Computer graphics and interactive techniques
- Tomas Nikodym (June 2010). "Ray Tracing Algorithm For Interactive Applications". Czech Technical University, FEE.
- A. Chalmers, T. Davis, and E. Reinhard. Practical parallel rendering, ISBN 1-56881-179-9. AK Peters, Ltd., 2002.
- GPU Gems 2, Chapter 38. High-Quality Global Illumination Rendering Using Rasterization, Addison-Wesley
- Eric P. Lafortune and Yves D. Willems (December 1993). "Bi-Directional Path Tracing". Proceedings of Compugraphics '93: 145–153.
- Péter Dornbach. "Implementation of bidirectional ray tracing algorithm". Retrieved 2008-06-11.
- Global Illumination using Photon Maps
- Photon Mapping - Zack Waters
- See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp 86–98.
- "About BRL-CAD". Retrieved 2009-07-28.
- Piero Foscari. "The Realtime Raytracing Realm". ACM Transactions on Graphics. Retrieved 2007-09-17.
- Mark Ward (March 16, 2007). "Rays light up life-like graphics". BBC News. Retrieved 2007-09-17.
- Theo Valich (June 12, 2008). "Intel converts ET: Quake Wars to ray tracing". TG Daily. Retrieved 2008-06-16.
- Nvidia (October 18, 2009). "Nvidia OptiX". Nvidia. Retrieved 2009-11-06.
- "3DWorld: Hardware review: Caustic Series2 R2500 ray-tracing accelerator card". Retrieved 2013-04-23.3D World, April 2013
- What is ray tracing ?
- Ray Tracing and Gaming - Quake 4: Ray Traced Project
- Ray tracing and Gaming - One Year Later
- Interactive Ray Tracing: The replacement of rasterization?
- A series of tutorials on implementing a raytracer using C++
- Tutorial on implementing a raytracer in PHP
- The Compleat Angler (1978) | http://en.wikipedia.org/wiki/Ray_tracing_(graphics) | 13 |
82 | On some bicycles, there is only one gear and the gear ratio is fixed. Many contemporary bicycles have multiple gears and thus multiple gear ratios. A shifting mechanism allows selection of the appropriate gear ratio for efficiency or comfort under the prevailing circumstances: for example, it may be comfortable to use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. Different gear ratios and gear ranges are appropriate for different people and styles of cycling.
A cyclist's legs produce power optimally within a narrow pedalling speed range, or cadence. Gearing is optimized to use this narrow range as best as possible. As in other types of transmissions, the gear ratio is closely related to the mechanical advantage of the drivetrain of the bicycle. On single-speed bicycles and multi-speed bicycles using derailleur gears, the gear ratio depends on the ratio of the number of teeth on the chainring to the number of teeth on the rear sprocket (cog). For bicycles equipped with hub gears, the gear ratio also depends on the internal planetary gears within the hub. For a shaft-driven bicycle the gear ratio depends on the bevel gears used at each end of the shaft.
For a bicycle to travel at the same speed, using a lower gear (larger mechanical advantage) requires the rider to pedal at a faster cadence, but with less force. Conversely, a higher gear (smaller mechanical advantage) provides a higher speed for a given cadence, but requires the rider to exert greater force. Different cyclists may have different preferences for cadence and pedaling force. Prolonged exertion of too much force in too high a gear at too low a cadence can increase the chance of knee damage; cadence above 100 rpm becomes less effective after short bursts, as during a sprint.
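As a hedged worked example of that trade-off (the gear and cadence values below are arbitrary but typical): speed is simply the distance rolled out per crank revolution (the "development" defined in the next section) times the pedalling rate.

development_m = 5.2      # metres travelled per crank revolution in a medium gear
cadence_rpm = 90         # pedalling rate
speed_kmh = development_m * cadence_rpm * 60 / 1000.0
print(speed_kmh)         # about 28 km/h (roughly 17.5 mph); a lower gear at the same
                         # cadence gives less speed but requires less pedal force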
Measuring gear ratios
There are at least four different methods for measuring gear ratios: gear inches, metres of development (roll-out), gain ratio, and quoting the number of teeth on the front and rear sprockets respectively. The first three methods result in each possible gear ratio being represented by a single number which allows the gearing of any bicycles to be compared; the numbers produced by different methods are not comparable, but for each method the larger the number the higher the gear. The fourth method uses two numbers and is only useful in comparing bicycles with the same drive wheel diameter (in the case of road bikes, this is almost universally 622 mm, as defined by the 700c standard).
Front/rear measurement only considers the sizes of a chainring and a rear sprocket. Gear inches and metres of development also take the size of the rear wheel into account. Gain ratio goes further and also takes the length of a pedal crankarm into account.
Gear inches and metres of development are closely related: to convert from gear inches to metres of development, multiply by 0.08 (more exactly 0.0798, or precisely 0.0254π, since one inch is 0.0254 m and development is π times the drive-wheel diameter).
The methods of calculation which follow assume that any hub gear is in direct drive. Multiplication by a further factor is needed to allow for any other selected hub gear ratio (many online gear calculators have these factors built in for various popular hub gears).
- Gear inches = Diameter of drive wheel in inches × (number of teeth in front chainring / number of teeth in rear sprocket). Normally rounded to nearest whole number.
- Metres of development = Circumference of drive wheel in metres × (number of teeth in front chainring / number of teeth in rear sprocket).
- Gain ratio = (Radius of drive wheel / length of pedal crank) × (number of teeth in front chainring / number of teeth in rear sprocket). Measure radius and length in same units.
- Both metres of development and gain ratios are normally rounded to one decimal place.
- Gear inches corresponds to the diameter (in inches) of the main wheel of an old-fashioned penny-farthing bicycle with equivalent gearing. Metres of development corresponds to the distance (in metres) traveled by the bicycle for one rotation of the pedals. Gain ratio is the ratio between the distance travelled by the bicycle and the distance travelled by a pedal, and is a pure number, independent of any units of measurement.
- Front/rear gear measurement uses two numbers (e.g. 53/19) where the first is the number of teeth in the front chainring and the second is the number of teeth in the rear sprocket. Without doing some arithmetic, it is not immediately obvious that 53/19 and 39/14 represent effectively the same gear ratio.
The following table provides some comparison of the various methods of measuring gears (the particular numbers are for bicycles with 170 mm cranks, 700C wheels, and 25mm tyres). Speeds for several cadences in revolutions per minute are also given. On each row the relative values for gear inches, metres of development, gain ratio, and speed are more or less correct, while the front/rear values are the nearest approximation which can be made using typical chainring and cogset sizes. Note that bicycles intended for racing may have a lowest gear of around 45 gear inches (or 35 if fitted with a compact crankset).
|Gear|Gear inches|Metres of development|Gain ratio|Front/rear|Speed at 60 rpm|Speed at 80 rpm|Speed at 100 rpm|Speed at 120 rpm|
|Medium|70|5.6|5.2|53/19 or 39/14|12.5 mph (20 km/h)|16.6 mph (26.7 km/h)|21 mph (33.6 km/h)|25 mph (40 km/h)|
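The three single-number measures can be computed directly from the formulas above. The sketch below is illustrative only: the 668 mm effective wheel diameter and 170 mm crank length are assumed example values, so the printed numbers will differ slightly from the rounded table entries.

```python
# Illustrative calculator for the three gear measures defined above.
# Assumed example values: ~668 mm effective wheel diameter (700C with 25 mm
# tyres is often close to this) and 170 mm cranks; real wheels vary.
import math

WHEEL_DIAMETER_M = 0.668
CRANK_LENGTH_M = 0.170

def gear_inches(chainring, sprocket, wheel_m=WHEEL_DIAMETER_M):
    return (wheel_m / 0.0254) * chainring / sprocket

def metres_of_development(chainring, sprocket, wheel_m=WHEEL_DIAMETER_M):
    return math.pi * wheel_m * chainring / sprocket

def gain_ratio(chainring, sprocket, wheel_m=WHEEL_DIAMETER_M, crank_m=CRANK_LENGTH_M):
    return (wheel_m / 2 / crank_m) * chainring / sprocket

for front, rear in [(53, 19), (39, 14)]:       # the two "Medium" equivalents
    print(front, rear,
          round(gear_inches(front, rear)),
          round(metres_of_development(front, rear), 1),
          round(gain_ratio(front, rear), 1))
```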
Single speed bicycles
A single-speed bicycle is a type of bicycle with a single gear ratio. These bicycles are without derailleur gears, hub gearing or other methods for varying the gear ratio of the bicycle. Adult single-speed bicycles typically have a gear ratio of between 55 and 75 gear inches, depending on the rider and the anticipated usage.
There are many types of modern single speed bicycles; BMX bicycles, some bicycles designed for (younger) children, cruiser type bicycles, classic commuter bicycles, unicycles, bicycles designed for track racing, fixed-gear road bicycles, and fixed-gear mountain bicycles.
The fixed-gear single-speed bicycle is the most basic type of bicycle. A fixed-gear bike does not have a freewheel mechanism to allow coasting.
General considerations
The gearing supplied by the manufacturer on a new bicycle is selected to be useful to the majority of people. Some cyclists choose to fine-tune the gearing to better suit their strength, level of fitness, and expected usage. When buying from specialist cycle shops, it may be less expensive to get the gears altered before delivery rather than at some later date. Modern crankset chainrings can be swapped out, as can cogsets.
While long steep hills and/or heavy loads may indicate a need for lower gearing, this can result in a very low speed. Balancing a bicycle becomes more difficult at lower speeds. For example, a bottom gear around 16 gear inches gives an effective speed of perhaps 3 miles/hour (5 km/hour) or less, at which point it might be quicker to walk.
Relative gearing
As far as a cyclist's legs are concerned, when changing gears, the relative difference between two gears is more important than the absolute difference between gears. This relative change, from a lower gear to a higher gear, is normally expressed as a percentage, and is independent of what system is used to measure the gears. Cycling tends to feel more comfortable if nearly all gear changes have more or less the same percentage difference. For example, a change from a 13-tooth sprocket to a 15-tooth sprocket (15.4%) feels very similar to a change from a 20-tooth sprocket to a 23-tooth sprocket (15%), even though the latter has a larger absolute difference.
To achieve such consistent relative differences the absolute gear ratios should be in logarithmic progression; most off-the-shelf cogsets do this with small absolute differences between the smaller sprockets and increasingly larger absolute differences as the sprockets get larger. Because sprockets must have a (relatively small) whole number of teeth it is impossible to achieve a perfect progression; for example the seven derailleur sprockets 14-16-18-21-24-28-32 have an average step size of around 15% but with actual steps varying between 12.5% and 16.7%. The epicyclic gears used within hub gears have more scope for varying the number of teeth than do derailleur sprockets, so it may be possible to get much closer to the ideal of consistent relative differences, e.g. the Rohloff Speedhub offers 14 speeds with an average relative difference of 13.6% and individual variations of around 0.1%.
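As a quick check of the numbers quoted above, a few lines of Python reproduce the step sizes of the 14-16-18-21-24-28-32 cogset:

```python
# Relative (percentage) steps between adjacent sprockets of the cogset above.
cogset = [14, 16, 18, 21, 24, 28, 32]

steps = [(big / small - 1) * 100 for small, big in zip(cogset, cogset[1:])]
print([round(s, 1) for s in steps])        # [14.3, 12.5, 16.7, 14.3, 16.7, 14.3]
print(round(sum(steps) / len(steps), 1))   # average step, close to 15%
```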
Racing cyclists often have gears with a small relative difference of around 7% to 10%; this allows fine adjustment of gear ratios to suit the conditions and maintain a consistent pedalling speed. Mountain bikes and hybrid bikes often have gears with a moderate relative difference of around 15%; this allows for a much larger gear range while having an acceptable step between gears. 3-speed hub gears may have a relative difference of some 33% to 37%; such big steps require a very substantial change in pedalling speed and often feel excessive. A step of 7% corresponds to a 1-tooth change from a 14-tooth sprocket to a 15-tooth sprocket, while a step of 15% corresponds to a 2-tooth change from a 13-tooth sprocket to a 15-tooth sprocket.
By contrast, car engines deliver power over a much larger range of speeds than cyclists' legs do, so relative differences of 30% or more are common for car gearboxes.
Usable gears
On a bicycle with only one gear change mechanism (e.g. rear hub only or rear derailleur only), the number of possible gear ratios is the same as the number of usable gear ratios, which is also the same as the number of distinct gear ratios.
On a bicycle with more than one gear change mechanism (e.g. front and rear derailleur), these three numbers can be quite different, depending on the relative gearing steps of the various mechanisms. The number of gears for such a derailleur equipped bike is often stated simplistically, particularly in advertising, and this may be misleading.
Consider a derailleur-equipped bicycle with 3 chainrings and an 8-sprocket cogset:
- the number of possible gear ratios is 24 (=3×8, this is the number usually quoted in advertisements);
- the number of usable gear ratios is 22;
- the number of distinct gear ratios is typically 16 to 18.
The combination of 3 chainrings and an 8-sprocket cogset does not result in 24 usable gear ratios. Instead it provides 3 overlapping ranges of 7, 8, and 7 gear ratios. The outer ranges only have 7 ratios rather than 8 because the extreme combinations (largest chainring to largest rear sprocket, smallest chainring to smallest rear sprocket) result in a very diagonal chain alignment which is inefficient and causes excessive chain wear. Due to the overlap, there will usually be some duplicates or near-duplicates, so that there might only be 16 or 18 distinct gear ratios. It may not be feasible to use these distinct ratios in strict low-high sequence anyway due to the complicated shifting patterns involved (e.g. simultaneous double or triple shift on the rear derailleur and a single shift on the front derailleur). In the worst case there could be only 10 distinct gear ratios, if the percentage step between chainrings is the same as the percentage step between sprockets. However, if the most popular ratio is duplicated then it may be feasible to extend the life of the gear set by using different versions of this popular ratio.
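The possible/usable/distinct counts can be illustrated with a short sketch. The chainring and cogset sizes below are assumed examples, and the 5% threshold used to merge near-duplicate ratios is an arbitrary choice, so the distinct count is only indicative.

```python
# Possible, usable and (roughly) distinct gear ratios for an assumed
# 3-chainring, 8-sprocket setup.
chainrings = [28, 38, 48]
cogset = [11, 13, 15, 18, 21, 24, 28, 32]

possible = [(c, s) for c in chainrings for s in cogset]

def cross_chained(c, s):
    # the two extreme combinations: big-big and small-small
    return (c == max(chainrings) and s == max(cogset)) or \
           (c == min(chainrings) and s == min(cogset))

usable = [(c, s) for c, s in possible if not cross_chained(c, s)]

ratios = sorted(c / s for c, s in usable)
distinct = [ratios[0]]
for r in ratios[1:]:
    if r / distinct[-1] > 1.05:            # merge steps smaller than ~5%
        distinct.append(r)

print(len(possible), len(usable), len(distinct))   # 24, 22, 17 for these sizes
```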
Gearing range
The gearing range indicates the difference between bottom gear and top gear, and provides some measure of the range of conditions (high speed versus steep hills) with which the gears can cope; the strength, experience, and fitness level of the cyclist are also significant. A range of 300% or 3:1 means that for the same pedalling speed a cyclist could travel 3 times as fast in top gear as in bottom gear (assuming sufficient strength, etc.). Conversely, for the same pedalling effort, a cyclist could climb a much steeper hill in bottom gear than in top gear.
The overlapping ranges with derailleur gears mean that 24 or 27 speed derailleur gears may only have the same total gear range as a (much more expensive) Rohloff 14-speed hub gear. Internal hub geared bikes typically have a more restricted gear range than comparable derailleur-equipped bikes, and have fewer ratios within that range.
The approximate gear ranges which follow are merely indicative of typical gearing setups, and will vary somewhat from bicycle to bicycle.
- 180% 3-speed hub gears
- 250% 5-speed hub gears
- 300% 7-speed hub gears
- 307% 8-speed hub gears
- 327% typical 1 chainring derailleur setup (1x10, 11-36)
- 350% NuVinci continuously variable transmission
- 409% 11-speed hub gears
- 420% extreme 1 chainring derailleur setup (1x11, SRAM XX1)
- 428% typical 2 chainring derailleur setup (2x10, 50-34 x 11-32)
- 526% typical 3 chainring derailleur setup (3x10); Rohloff Speedhub 14-speed hub gear
- 636% 18-speed bottom bracket gearbox
- 698% touring 3 chainring derailleur setup (3x10)
Gear ranges of almost 700% can be achieved on derailleur setups, though this may result in some rather large steps between gears or some awkward gear changes. However, through the careful choice of chainrings and rear cogsets, e.g. 3 chainrings 48-34-20 and a 10-speed cassette 11-32, one can achieve an extremely wide range of gears that are still well spaced. This sort of setup has proven useful on a multitude of bicycles such as cargo bikes, touring bikes and tandems. Even higher gear ranges can be achieved by using a 2-speed bottom bracket hub gear in conjunction with suitable derailleurs.
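The range figures above are simply the ratio of the highest gear to the lowest, expressed as a percentage; for example, the 48-34-20 / 11-32 touring setup mentioned in the previous paragraph works out as follows.

```python
# Gear range of the 48-34-20 triple with an 11-32 cassette, as a percentage.
chainrings = [20, 34, 48]
smallest_sprocket, largest_sprocket = 11, 32

top = max(chainrings) / smallest_sprocket      # 48/11
bottom = min(chainrings) / largest_sprocket    # 20/32
print(round(100 * top / bottom))               # roughly 698 (%)
```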
Types of gear change mechanisms
There are two main types of gear change mechanisms, known as derailleurs and hub gears. These two systems have both advantages and disadvantages relative to each other, and which type is preferable depends very much on the particular circumstances. There are a few other relatively uncommon types of gear change mechanism which are briefly mentioned near the end of this section. Derailleur mechanisms can only be used with chain drive transmissions, so bicycles with belt drive or shaft drive transmissions must either be single speed or use hub gears.
External (derailleur)
External gearing is so called because all the sprockets involved are readily visible. There may be up to 3 chainrings attached to the crankset and pedals, and typically between 5 and 11 sprockets making up the cogset attached to the rear wheel. Modern front and rear derailleurs typically consist of a moveable chain-guide that is operated remotely by a Bowden cable attached to a shifter mounted on the down tube, handlebar stem, or handlebar. A shifter may be a single lever, or a pair of levers, or a twist grip; some shifters may be incorporated with brake levers into a single unit. When a rider operates the shifter while pedalling, the change in cable tension moves the chain-guide from side to side, "derailing" the chain onto different sprockets. The rear derailleur also has spring-mounted jockey wheels which take up any slack in the chain.
Most hybrid, touring, mountain, and racing bicycles are equipped with both front and rear derailleurs. There are a few gear ratios which have a straight chain path, but most of the gear ratios will have the chain running at an angle. The use of two derailleurs generally results in some duplicate or near duplicate gear ratios, so that the number of distinct gear ratios is typically around two-thirds of the number of advertised gear ratios. The more common configurations have specific names which are usually related to the relative step sizes between the front chainrings and the rear cogset.
Crossover gearing
This style is commonly found on mountain, hybrid, and touring bicycles with three chainrings. The relative step on the chainrings (say 25% to 35%) is typically around twice the relative step on the cogset (say 15%), e.g. chainrings 28-38-48 and cogset 12-14-16-18-21-24-28.
Advantages of this arrangement include:
- A wide range of gears may be available suitable for touring and for off-road riding.
- There is seldom any need to change both front and rear derailleurs simultaneously so it is generally more suitable for casual or inexperienced cyclists.
One disadvantage is that the overlapping gear ranges result in a lot of duplication or near-duplication of gear ratios.
Multi-range gearing
This style is commonly found on racing bicycles with two chainrings. The relative step on the chainrings (say 35%) is typically around three or four times the relative step on the cogset (say 8% or 10%), e.g. chainrings 39-53 and close-range cogsets 12-13-14-15-16-17-19-21 or 12-13-15-17-19-21-23-25. This arrangement provides much more scope for adjusting the gear ratio to maintain a constant pedalling speed, but any change of chainring must be accompanied by a simultaneous change of 3 or 4 sprockets on the cogset if the goal is to switch to the next higher or lower gear ratio.
Alpine gearing
This term has no generally accepted meaning. Originally it referred to a gearing arrangement which had one especially low gear (for climbing Alpine passes); this low gear often had a larger than average jump to the next lowest gear. In the 1960s the term was used by salespeople to refer to then current 10-speed bicycles (2 chainrings, 5-sprocket cogset), without any regard to its original meaning. The nearest current equivalent to the original meaning can be found in the Shimano Megarange cogsets, where most of the sprockets have roughly a 15% relative difference, except for the largest sprocket which has roughly a 30% difference; this provides a much lower gear than normal at the cost of a large gearing jump.
Half-step gearing
There are two chainrings whose relative difference (say 10%) is about half the relative step on the cogset (say 20%). This was used in the mid-20th century when front derailleurs could only handle a small step between chainrings and when rear cogsets only had a small number of sprockets, e.g. chainrings 44-48 and cogset 14-17-20-24-28. The effect is to provide two interlaced gear ranges without any duplication. However to step sequentially through the gear ratios requires a simultaneous front and rear shift on every other gear change. This style is not available off the shelf.
Half-step plus granny gearing
There are three chainrings with half-step differences between the larger two and multi-range differences between the smaller two, e.g. chainrings 24-42-46 and cogset 12-14-16-18-21-24-28-32-36. This general arrangement is suitable for touring with most gear changes being made using the rear derailleur and occasional fine tuning using the two large chainrings. The small chainring (granny gear) is a bailout for handling steeper hills, but it requires some anticipation in order to use it effectively.
Internal (hub)
Internal gearing is so called because all the gears involved are hidden within a wheel hub. Hub gears work using internal planetary, or epicyclic, gearing which alters the speed of the hub casing and wheel relative to the speed of the drive sprocket. They have just a single chainring and a single rear sprocket, almost always with a straight chain path between the two. Hub gears are available with between 2 and 14 speeds; weight and price tend to increase with the number of gears. All the advertised speeds are available as distinct gear ratios controlled by a single shifter (except for some early 5-speed models which used two shifters). Hub gearing is often used for bicycles intended for city-riding and commuting.
Current systems have a 2-speed hub gear incorporated in the crankset or bottom bracket. Patents for such systems appeared as early as 1890. The Schlumpf Mountain Drive and Speed Drive have been available since 2001 and offer direct drive plus one of three variants (reduction 1:2.5, increase 1.65:1, and increase 2.5:1). Gears are changed by tapping, with the foot, a button protruding on each side of the bottom bracket spindle. The effect is that of having a bicycle with twin chainrings with a massive difference in sizes. In 2010 Pinion GmbH introduced an 18-speed gearbox model offering an evenly spaced 636% range.
Internal and external combined
It is sometimes possible to combine a hub gear with derailleur gears. There are several commercially available possibilities:
- One standard option for the Brompton folding bicycle is to use a 3-speed hub gear (roughly a 30% difference between gear ratios) in combination with a 2-speed derailleur gear (roughly a 15% difference) to give 6 distinct gear ratios; this is an example of half-step gearing. Some Brompton suppliers offer a 2-speed chainring 'Mountain Drive' as well, which results in 12 distinct gear ratios with a range exceeding 5:1; in this case, the change from 6th to 7th gear involves changing all three sets of gears simultaneously.
- The SRAM DualDrive system uses a standard 8 or 9-speed cogset mounted on a three-speed internally geared hub, offering a similar gear range to a bicycle with a cogset and triple chainrings.
- Less common is the use of a double or triple chainring in conjunction with an internally geared hub, extending the gear range without having to fit multiple sprockets to the hub. However, this does require a chain tensioner of some sort, negating some of the advantages of hub gears.
- At an extreme opposite from a single speed bicycle, hub gears can be combined with both front and rear derailleurs, giving a very wide-ranging drivetrain at the expense of weight and complexity of operation; there are a total of three sets of gears. This approach may be suitable for recumbent trikes, where very low gears can be used without balance issues, and the aerodynamic position allows higher gears than normal.
There have been, and still are, some quite different methods of selecting a different gear ratio:
- Retro-direct drivetrains used on some early 20th century bicycles have been resurrected by bicycle hobbyists. These have two possible gear ratios but no gear lever; the operator simply pedals forward for one gear and backward for the other. The chain path is quite complicated, since it effectively has to do a figure of eight as well as follow the normal chain path.
- Flip-flop hubs have a double-sided rear wheel with a (different sized) sprocket on each side. To change gear: stop, remove the rear wheel, flip it over, replace the wheel, adjust chain tension, resume cycling. Current double sided wheels typically have a fixed sprocket on one side and a freewheel sprocket on the other.
- Prior to 1937 this was the only permitted form of gear changing on the Tour de France. Competitors could have 2 sprockets on each side of the rear wheel, but still had to stop to manually move the chain from one sprocket to the other and adjust the position of the rear wheel so as to maintain the correct chain tension.
- Continuously variable transmissions are a relatively new development in bicycles (though not a new idea). Mechanisms like the NuVinci gearing system use balls connected to two disks by static friction - changing the point of contact changes the gear ratio.
- Automatic transmissions have been demonstrated and marketed for both derailleur and hub gear mechanisms, often accompanied by a warning to disengage auto-shifting if standing on the pedals. These have met with limited market success.
Efficiency

The numbers in this section apply to the efficiency of the drive-train, including means of transmission and any gearing system. In this context efficiency is concerned with how much power is delivered to the wheel compared with how much power is put into the pedals. For a well-maintained transmission system, efficiency is generally between 86% and 99%, as detailed below.
Factors besides gearing which affect performance include rolling resistance and air resistance:
- Rolling resistance can vary by a factor of 10 or more depending on type and dimensions of tire and the tire pressure.
- Air resistance increases greatly as speed increases and is the most significant factor at speeds above 10 to 12 miles (15 to 20 km) per hour (the drag force increases in proportion to the square of the speed, thus the power required to overcome it increases in proportion to the cube of the speed).
Human factors can also be significant. Rohloff demonstrates that overall efficiency can be improved in some cases by using a slightly less efficient gear ratio when this leads to greater human efficiency (in converting food to pedal power) because a more effective pedalling speed is being used.
An encyclopedic overview can be found in Chapter 9 of "Bicycling Science" which covers both theory and experimental results. Some details extracted from these and other experiments are provided in the next subsection, with references to the original reports.
Factors which have been shown to affect the drive-train efficiency include the type of transmission system (chain, shaft, belt), the type of gearing system (fixed, derailleur, hub, infinitely variable), the size of the sprockets used, the magnitude of the input power, the pedalling speed, and how rusty the chain is. For a particular gearing system, different gear ratios generally have different efficiencies.
Some experiments have used an electric motor to drive the shaft to which the pedals are attached, while others have used averages of a number of actual cyclists. It is not clear how the steady power delivered by a motor compares with the cyclic power provided by pedals. Rohloff argues that the constant motor power should match the peak pedal power rather than the average (which is half the peak).
There is little independent information available relating to the efficiency of belt drives and infinitely variable gear systems; even the manufacturers/suppliers appear reluctant to provide any numbers.
Derailleur type mechanisms of a typical mid-range product (of the sort used by serious amateurs) achieve between 88% and 99% mechanical efficiency at 100W. In derailleur mechanisms the highest efficiency is achieved by the larger sprockets. Efficiency generally decreases with smaller sprocket and chainring sizes. Derailleur efficiency is also compromised with cross-chaining, or running large-ring to large-sprocket or small-ring to small-sprocket. This cross-chaining also results in increased wear because of the lateral deflection of the chain.
Chester Kyle and Frank Berto reported in "Human Power" 52 (Summer 2001) that testing on three derailleur systems (from 4 to 27 gears) and eight gear hub transmissions (from 3 to 14 gears), performed with 80W, 150W, 200W inputs, gave results as follows:
|Transmission Type||Efficiency (%)|
Efficiency testing of bicycle gearing systems is complicated by a number of factors - in particular, all systems tend to be better at higher power rates. 200 Watts will drive a typical bicycle at 20 mph, while top cyclists can achieve 400W, at which point one hub-gear manufacturer (Rohloff) claims 98% efficiency.
At a more typical 150W, hub-gears tend to be around 2% less efficient than a well-lubricated derailleur.
- Ed Pavelka (1999). Bicycling magazine's training techniques for cyclists: greater power, faster. Rodale Press. pp. 4–5. "There are lots of cyclists who have suffered debilitating trauma from pushing too big a gear....benefits of spinning begin to disappear above 100 rpm."
- "Gain Ratios; a new way to think about bicycle gears". Retrieved 2011-05-25.
- "Cyclists Touring Club: internal gear ratios". Archived from the original on 2 July 2011. Retrieved 2011-06-29.
- "Cycling Cadence and Bicycle Gearing". Retrieved 2011-07-18.
- "Internal Gear Hub Review". Archived from the original on 18 July 2011. Retrieved 2011-07-20.
- "What Kind Of Drive The Cyclist Needs". Archived from the original on 3 July 2011. Retrieved 2011-07-20.
- "Derailleur Gears: A practical guide to their use and operation.". Retrieved 2011-06-27.
- Mike Levy (Aug 30, 2011). "Pinion 18 speed Gearbox - Eurobike 2011". PinkBike.com. Retrieved 2011-09-12.
- "Gear Theory for Bicyclists". Archived from the original on 10 June 2011. Retrieved 2011-06-20.
- Trek Bicycle Corporation (1983). "Trek 620". Vintage-Trek. Retrieved 2012-08-08. "Crankset: Sugino Aero Mighty Tour Forged Alloy Triple 28-45-50. Freewheel: Atom Helicomatic 6-spd 13-28 (13/14/17/20/24/28)"
- Berto, Frank (2010). The Dancing Chain (Third ed.). Van der Plas Publications. pp. 39–47. ISBN 978-1-892495-59-4.
- Peter Eland (Monday 12 Aug 2002). "Schlumpf announces new High Speed Drive". Velo Vision. Retrieved 2011-05-17.
- "Pinion P1.18". Pinion GmbH.
- "1937 Tour de France". Retrieved 2011-06-23.
- "Rolling Resistance of Bike Tires". Archived from the original on 17 July 2011. Retrieved 2011-07-20.
- "Bicycle efficiency and power -- or, why bikes have gears". Retrieved 2011-07-20.
- "Efficiency measurement of bicycle transmission". Retrieved 2011-07-22.
- Wilson, David G.; J Papadopuolos (2004). Bicycling Science (Third ed.). Massachusetts Institute of Technology. pp. 311–352. ISBN 0-262-73154-1.
- Whitt, Frank R.; David G. Wilson (1982). Bicycling Science (Second edition ed.). Massachusetts Institute of Technology. pp. 277–300. ISBN 0-262-23111-5.
- "The mechanical efficiency of bicycle derailleur and hub-gear transmissions". Archived from the original on 25 July 2011. Retrieved 2011-07-18.
- "Efficiency Measurements of Bicycle Transmissions" Bernhard Rohloff and Peter Greb (translated by Thomas Siemann) 2004. Rohloff's testing "at 400 watts, double what we did and found efficiencies approaching 98%".
- "Efficiency Measurements of Bicycle Transmissions" Bernhard Rohloff and Peter Greb (translated by Thomas Siemann) 2004. "In our article we therefore concluded that hub gears are about 2% less efficient that derailleur transmissions under typical field conditions. We see no reason to change that conclusion.". | http://en.wikipedia.org/wiki/Bicycle_gearing | 13 |
A positive number, a negative number or zero. The concept of a real number arose by a generalization of the concept of a rational number. Such a generalization was rendered necessary both by practical applications of mathematics — viz., the expression of the value of a given magnitude by a definite number — and by the internal development of mathematics itself; in particular, by the desire to extend the domain of applicability of certain operations on numbers (root extraction, computation of logarithms, solution of equations, etc.). The general concept of real number was already studied by Greek mathematicians of Antiquity in their theory of non-commensurable segments, but it was formulated as an independent concept only in the 17th century by I. Newton, in his Arithmetica Universalis as follows: "A number is not so much the totality of several units, as an abstract ratio between one magnitude and another, of the same kind, and which is accepted as a unit". Rigorous theories of real numbers were constructed at the end of the 19th century by K. Weierstrass, G. Cantor and R. Dedekind.
Real numbers form a non-empty totality of elements which contains more than one element and displays the following properties.
I) The property of being ordered. Any two numbers $a$ and $b$ satisfy one and only one of the relations $a < b$, $a = b$ or $a > b$; moreover, if $a < b$ and $b < c$, then $a < c$ (transitivity of the order).

II) The property of an addition operation. For any ordered pair of numbers $a$ and $b$ there is a unique number, known as their sum and denoted by $a + b$, such that the following properties hold: 1) $a + b = b + a$ (commutativity); 2) for any numbers $a$, $b$ and $c$ one has $a + (b + c) = (a + b) + c$ (associativity); 3) there exists a number, called zero and denoted by $0$, such that $a + 0 = a$ for any $a$; 4) for any number $a$ there exists a number, called the opposite of $a$ and denoted by $-a$, such that $a + (-a) = 0$; 5) if $a < b$, then $a + c < b + c$ for any $c$.

The zero is unique, and the number opposite to any given number is unique. For any ordered pair of numbers $a$ and $b$ the number $a + (-b)$ is called the difference between the numbers $a$ and $b$ and is denoted by $a - b$.

III) The property of a multiplication operation. For any ordered pair of numbers $a$ and $b$ there exists a unique number, known as their product and denoted by $ab$, such that: 1) $ab = ba$ (commutativity); 2) $a(bc) = (ab)c$ for any numbers $a$, $b$, $c$ (associativity); 3) there exists a number, known as the unit and denoted by $1$, such that $a \cdot 1 = a$ for any number $a$; 4) for any non-zero number $a$ there exists a number, known as its reciprocal and denoted by $1/a$, such that $a \cdot (1/a) = 1$; 5) if $a < b$ and $c > 0$, then $ac < bc$.

These properties ensure that the unit and the reciprocal of each element are unique. For each ordered pair of numbers $a$ and $b$, $b \neq 0$, the number $a \cdot (1/b)$ is known as the quotient obtained by dividing $a$ by $b$; it is denoted by $a/b$.

The number $1 + 1$ is denoted by $2$, the number $2 + 1$ is denoted by $3$, etc. The numbers $1, 2, 3, \dots$ are known as the natural numbers (cf. Natural number). Numbers larger than zero are said to be positive, while numbers smaller than zero are said to be negative. The numbers $0, \pm 1, \pm 2, \dots$ are called integers (cf. Integer). Numbers of the type $m/n$, where $m$ is an integer, while $n$ is a natural number, are known as rational numbers or fractions. They include all integers: the number $m/1$ is identified with $m$. Real numbers which are not rational are also called irrational numbers.

IV) The property of distributivity of multiplication with respect to addition. For any three numbers $a$, $b$ and $c$, $(a + b)c = ac + bc$.

V) The Archimedean property. For any number $a$ there exists an integer $n$ such that $n > a$. A totality of elements having properties I–V forms an Archimedean ordered field. Examples are not only the field of real numbers, but also the field of rational numbers.
An important property of real numbers is their continuity; rational numbers do not have this property.
VI) The property of continuity. For any system of nested segments

$$[a_1, b_1] \supset [a_2, b_2] \supset \dots \supset [a_n, b_n] \supset \dots$$

there exists at least one number which belongs to all the segments of the system. This property is also known as Cantor's principle of nested segments. If the lengths $b_n - a_n$ of the nested segments tend to zero as $n \to \infty$, there exists a unique point which belongs to all these segments.
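As a rough numerical illustration, repeated bisection yields a system of nested segments that all contain $\sqrt{2}$ and whose lengths tend to zero; the unique common point is $\sqrt{2}$ itself. A minimal sketch with exact rational endpoints:

```python
# Nested segments [a_n, b_n] with rational endpoints, each containing sqrt(2);
# their lengths are halved at every step.
from fractions import Fraction

def nested_segments_for_sqrt2(steps):
    a, b = Fraction(1), Fraction(2)          # [1, 2] contains sqrt(2)
    segments = [(a, b)]
    for _ in range(steps):
        m = (a + b) / 2
        if m * m < 2:                        # sqrt(2) lies in the right half
            a = m
        else:                                # sqrt(2) lies in the left half
            b = m
        segments.append((a, b))
    return segments

for a, b in nested_segments_for_sqrt2(6):
    print(float(a), float(b), "length", float(b - a))
```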
The properties of real numbers listed above entail many others; thus, it follows from the properties I to V that $0 < 1$; there also follow the rules of operations on rational fractions, the sign rules to be observed when multiplying and dividing real numbers, the properties of the absolute value of a real number, the rules governing transformations of equalities and inequalities, etc. Properties I to VI are a complete description of the properties of the field of real numbers and only of this field; in other words, if these properties are taken as axioms, it follows that the real numbers form the unique totality of elements satisfying them. This means that properties I to VI define the set of real numbers up to an isomorphism: If there are two sets $X$ and $X'$ satisfying the properties I to VI, there always exists a mapping of $X$ onto $X'$, isomorphic with respect to the order and to the operations of addition and multiplication, i.e. this mapping (denoted $x \to x'$, where $x'$ is the element corresponding to the element $x$) maps $X$ onto $X'$ in a one-to-one correspondence so that if $x_1 \to x_1'$ and $x_2 \to x_2'$, then

$$x_1 < x_2 \iff x_1' < x_2', \qquad (x_1 + x_2)' = x_1' + x_2', \qquad (x_1 x_2)' = x_1' x_2'.$$
A consequence of this is that the field of real numbers (as distinct, for example, from the field of rational numbers) cannot be extended while preserving the properties I to V, i.e. there is no field with the property of being ordered and with addition and multiplication operations in accordance with properties I to V, which would contain a subset isomorphic to the field of real numbers without being identical with it.
There are many more real numbers than rational numbers; in fact, the rational numbers form a countable subset of the set of real numbers, which is itself uncountable (cf. Cardinality). Both the rational and the irrational numbers are dense in the set of all real numbers (cf. Dense set): For any two real numbers $a$ and $b$, $a < b$, it is possible to find a rational number $r$ such that $a < r < b$ and an irrational number $\xi$ such that $a < \xi < b$.
The property of continuity of real numbers is closely connected with the property of their completeness, to wit, that any fundamental sequence of real numbers is convergent. It should be noted that the field of rational numbers alone is not complete: It contains fundamental sequences which do not converge to any rational number. The continuity (or completeness) of the set of real numbers is closely connected with their utilization in measuring several kinds of continuous magnitudes, e.g. in determining the length of geometrical segments; if a unique unit segment is chosen, then, in view of the continuity of the set of real numbers, it is possible to bring any segment into correspondence with a positive real number, its length. The continuity of the set of real numbers may be described, in an illustrative manner, by saying that it contains no "empty spaces". A consequence of the continuity of the set of real numbers is the fact that it is possible to extract the $n$-th root of any positive number (where $n$ is a natural number), and the fact that any positive number has a logarithm to any base $b$, $b > 0$, $b \neq 1$.
The property of continuity of real numbers may also be formulated in a different manner.
VI') Any non-empty set bounded from above has a least upper bound (cf. Upper and lower bounds).
The concept of a cut in the domain of real numbers (cf. Dedekind cut) may also be employed: the real numbers are divided into a lower class $A$ and an upper class $A'$. One says that the cut $A|A'$ is effected by the number $\alpha$ if $a \le \alpha \le a'$ for all $a \in A$ and $a' \in A'$ (here, either $\alpha \in A$ or $\alpha \in A'$). Any number effects a cut.
The property of continuity, known as the Dedekind continuity of the real numbers, consists in the validity of the converse postulate.
VI'') Any cut of real numbers is effected by some number. Such a number is unique, and is either the largest number in the lower class or the smallest number in the upper class.

Each one of the postulates VI, VI' and VI'' is equivalent to each one of the others, in the sense that if any one of them, as well as the remaining properties I to V, is taken as an axiom, the other two will follow. Moreover, both property VI' and property VI'' (in conjunction with properties I–IV) entail not merely VI, but also the Archimedean property V. The definition of the set of real numbers as the non-empty totality of elements with properties I–VI is an axiomatic construction of the theory of real numbers. Several methods of constructing this theory on the basis of the rational numbers are available.
The first such theory was constructed by Dedekind on the basis of the concept of a cut $A|A'$ in the domain of rational numbers. If, for a given cut $A|A'$, there exists a largest rational number in $A$ or a smallest rational number in $A'$, one says that the cut is effected by this number. Any rational number effects a cut. A cut for which there is no largest number in the lower class, and no smallest number in the upper class, is said to be an irrational number. Rational and irrational numbers are called real numbers; here, for the sake of uniformity, rational numbers are considered as the cuts which they effect.
Let $\alpha = A|A'$ and $\beta = B|B'$. The real number $\alpha$ is said to be smaller than the real number $\beta$ (or, which is the same thing, $\beta$ is said to be larger than $\alpha$) if $A \subset B$, $A \neq B$. The concepts of positive and negative real numbers (see above) and of the absolute value of a real number are introduced in the usual way. The sum of the real numbers $\alpha$ and $\beta$ is defined to be the number $\gamma$ such that for all $a \in A$, $a' \in A'$, $b \in B$, $b' \in B'$, the inequalities

$$a + b \le \gamma \le a' + b'$$

are valid. The product of two positive real numbers $\alpha$ and $\beta$ is the number $\gamma$ such that for all positive $a \in A$, $b \in B$ and all $a' \in A'$, $b' \in B'$ the inequalities $ab \le \gamma \le a'b'$ are satisfied. The product of two non-zero real numbers $\alpha$ and $\beta$ is defined as the real number whose absolute value is $|\alpha|\,|\beta|$, and which is positive if $\alpha$ and $\beta$ have the same sign, and negative if they have opposite signs. Finally, for any real number $\alpha$ it is assumed that $\alpha \cdot 0 = 0 \cdot \alpha = 0$.
The sum and product of real numbers always exist, are unique, and the totality of real numbers thus defined, together with the introduced order and operations of addition and multiplication, displays the properties I–VI.
Another theory was proposed by G. Cantor. It is based on the concept of a fundamental sequence of rational numbers, i.e. a sequence of rational numbers $\{r_n\}$ such that for any rational number $\varepsilon > 0$ there exists a number $n_\varepsilon$ such that for all $n \ge n_\varepsilon$ and $m \ge n_\varepsilon$ the inequality $|r_n - r_m| < \varepsilon$ is valid. A sequence of rational numbers $\{r_n\}$ is said to be a zero-sequence if for any rational number $\varepsilon > 0$ there exists a number $n_\varepsilon$ such that for all $n \ge n_\varepsilon$ the inequality $|r_n| < \varepsilon$ is valid. Two fundamental sequences of rational numbers $\{r_n\}$ and $\{\rho_n\}$ are said to be equivalent if the sequence $\{r_n - \rho_n\}$ is a zero-sequence. This definition of equivalence displays the properties of reflexivity, symmetry and transitivity, and this is the reason why the whole set of fundamental sequences of rational numbers splits into equivalence classes. The totality of all these equivalence classes is also known in this case as the set of real numbers. By virtue of this definition, any real number represents an equivalence class of fundamental sequences of rational numbers. Each such sequence is said to be a representative of the given real number. A fundamental sequence of rational numbers is said to be positive (negative) if there exists a rational number $\varepsilon > 0$ such that all terms of this sequence, beginning with some term, are larger than $\varepsilon$ (smaller than $-\varepsilon$). Any fundamental sequence of rational numbers is either a zero-sequence, a positive sequence or a negative sequence. If a fundamental sequence of rational numbers is positive (negative), then any fundamental sequence of rational numbers equivalent to it will also be positive (negative). A real number is said to be positive (negative) if some one (and hence any one) of its representatives is positive (negative). A real number is said to be zero if some one (and hence any one) of its representatives is a zero-sequence. Any real number is either positive, negative or zero. In order to add or to multiply two real numbers $\alpha$ and $\beta$ one has to add (respectively, multiply) any two of their representatives $\{r_n\}$, $\{\rho_n\}$; this again yields fundamental sequences of rational numbers, $\{r_n + \rho_n\}$ and $\{r_n \rho_n\}$. The equivalence classes which they represent are known in this case as the sum and the product of these numbers. These operations are unambiguously defined, i.e. they do not depend on the choice of representatives of these numbers. Subtraction and division of real numbers are defined as the operations inverse to addition and multiplication, respectively. If, for two real numbers $\alpha$ and $\beta$, the difference $\beta - \alpha$ is positive, the real number $\beta$ is said to be larger than the real number $\alpha$. The totality of real numbers thus defined, together with the property of ordering described above and the operations of addition and multiplication, again displays the properties I–VI.
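For a concrete illustration, the decimal truncations of $\sqrt{2}$ form a fundamental sequence of rational numbers: consecutive terms differ by less than $10^{-n}$, yet no rational number is their limit. A minimal sketch:

```python
# Decimal truncations of sqrt(2) as a fundamental (Cauchy) sequence of rationals.
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 30
SQRT2 = Decimal(2).sqrt()

def truncation(n):
    digits = int(SQRT2.scaleb(n))            # first n decimal places of sqrt(2)
    return Fraction(digits, 10 ** n)

terms = [truncation(n) for n in range(1, 8)]
for r_n, r_next in zip(terms, terms[1:]):
    print(r_n, "difference", float(r_next - r_n))   # differences shrink like 10**-n
```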
Still another theory, based on infinite decimal expansions, was developed by Weierstrass. According to this theory, a real number is any infinite decimal expansion with a plus or a minus sign:

$$\pm \alpha_0 . \alpha_1 \alpha_2 \dots \alpha_n \dots,$$

where $\alpha_0$ is a non-negative integer (integers are assumed to be given) while each $\alpha_n$, $n = 1, 2, \dots$, is one of the digits $0, 1, \dots, 9$. Here, an infinite decimal expansion which after some time consists of 9's only (i.e. which has a period consisting of 9),

$$\pm \alpha_0 . \alpha_1 \dots \alpha_n 9 9 9 \dots, \qquad \alpha_n \neq 9,$$

is considered equal to the infinite decimal expansion

$$\pm \alpha_0 . \alpha_1 \dots (\alpha_n + 1) 0 0 0 \dots$$

(if $n = 0$, it is equal to the infinite decimal expansion $\pm (\alpha_0 + 1) . 0 0 0 \dots$). This expansion may also be written as the finite decimal expansion

$$\pm \alpha_0 . \alpha_1 \dots (\alpha_n + 1),$$

and one says that it has $n$ significant figures after the decimal point. An infinite decimal expansion without the period $9$ is said to be an allowable infinite decimal expansion. Clearly, any real number can be uniquely (re)written as an allowable infinite decimal expansion. If a real number $a$ is rewritten as an allowable infinite decimal expansion with a plus (minus) sign, and if the digits $\alpha_0, \alpha_1, \alpha_2, \dots$ contain at least one non-zero digit, $a$ is said to be positive (negative), written as $a > 0$ ($a < 0$). If all digits are zero, it is said to be zero: $a = 0$.
For the number

$$a = \pm \alpha_0 . \alpha_1 \alpha_2 \dots$$

the number $\alpha_0 . \alpha_1 \alpha_2 \dots$ is said to be its absolute value and is denoted by $|a|$. The number with the plus sign (the minus sign) replaced by the minus sign (the plus sign) is said to be opposite to the given number $a$ and is denoted by $-a$. If

$$a = \pm \alpha_0 . \alpha_1 \alpha_2 \dots \alpha_n \dots$$

is an allowable infinite decimal expansion, then the finite decimal fractions $\underline{a}_n$ and $\overline{a}_n = \underline{a}_n + 10^{-n}$, each with $n$ digits after the decimal point and satisfying $\underline{a}_n \le a \le \overline{a}_n$, are said to be, respectively, the lower and the upper decimal approximation of order $n$ of the number $a$. Let $a$ and $b$ be two positive numbers, written as allowable infinite decimal expansions

$$a = \alpha_0 . \alpha_1 \alpha_2 \dots, \qquad b = \beta_0 . \beta_1 \beta_2 \dots$$

By definition, $a < b$ if either $\alpha_0 < \beta_0$ or if there exists a number $n \ge 1$ such that $\alpha_k = \beta_k$ for $k = 0, 1, \dots, n - 1$, but $\alpha_n < \beta_n$. Every negative number and zero are considered to be smaller than every positive number. If $a$ and $b$ are both negative and $|b| < |a|$, then $a < b$.
A sequence of integers $k_n$, $n = 1, 2, \dots,$ is said to be stabilizing to a number $k$ if there exists a number $n_0$ such that $k_n = k$ for all $n \ge n_0$. A sequence of infinite decimal expansions

$$a^{(n)} = \alpha_0^{(n)} . \alpha_1^{(n)} \alpha_2^{(n)} \dots, \qquad n = 1, 2, \dots,$$

is said to be stabilizing to a number

$$a = \alpha_0 . \alpha_1 \alpha_2 \dots$$

if the $m$-th column of the infinite matrix $(\alpha_m^{(n)})$, where $m$ is the column index and $n$ is the row index, stabilizes to the number $\alpha_m$ for any $m$. If $a > 0$ and $b > 0$, the finite decimal fractions

$$\underline{a}_n + \underline{b}_n, \qquad \underline{a}_n - \overline{b}_n, \qquad \underline{a}_n \, \underline{b}_n, \qquad \underline{a}_n / \overline{b}_n,$$

truncated where necessary to $n$ significant figures to the right of the decimal point, form sequences stabilizing to certain numbers. These numbers are known, respectively, as the sum $a + b$, the difference $a - b$, the product $ab$, and the quotient $a/b$ of $a$ and $b$. These definitions are extended to real numbers of arbitrary sign. For instance, if $a < 0$ and $b < 0$, then $a + b = -(|a| + |b|)$; if the signs of $a$ and $b$ are different, then $|a + b| = \bigl| |a| - |b| \bigr|$, the sign of this result being identical with the sign of that number $a$ or $b$ which has the larger absolute value. For any numbers $a$ and $b$ it is assumed that $a - b = a + (-b)$ (if $a > 0$, $b > 0$, this definition is identical with that given above), etc. The totality of allowable infinite decimal expansions with the order relation and with the operations of addition, subtraction, multiplication, and division thus defined, satisfies the axioms I–VI.
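As a rough illustration of the lower and upper decimal approximations of order $n$ and of the stabilizing sums they produce, here is a minimal sketch (floating-point numbers stand in for the decimal expansions):

```python
# Lower/upper decimal approximations of order n and the stabilizing sums.
def lower_approx(x, n):
    scale = 10 ** n
    return int(x * scale) / scale            # keep n digits after the point

def upper_approx(x, n):
    return lower_approx(x, n) + 10 ** (-n)

a, b = 2 ** 0.5, 3 ** 0.5                    # two irrational reals
for n in range(1, 6):
    print(n, lower_approx(a, n) + lower_approx(b, n),
             upper_approx(a, n) + upper_approx(b, n))
# both columns close in on a + b = 3.146264...
```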
In constructing the theory of real numbers it is also possible to use non-decimal computation systems, i.e. systems to the base two, three, etc. It is important to note that none of the constructions of the theory of real numbers given above (axiomatic, based on cuts of rational numbers, based on fundamental sequences of rational numbers or on infinite decimal expansions) is a proof of the existence (self-consistency) of the set of real numbers. From this point of view all these methods are equivalent.
Geometrically, the set of real numbers can be represented by an oriented (directed) straight line, while the individual numbers are represented by points on that line. Accordingly, the totality of real numbers is often called the number axis, while the individual numbers are called points. When such a representation of real numbers is employed, instead of saying that $a$ is smaller than $b$ (respectively, that $b$ is larger than $a$) one says that the point $a$ lies to the left of the point $b$ (respectively, $b$ lies to the right of $a$). There is an order-preserving one-to-one correspondence between the points on a Euclidean straight line ordered in accordance with their locations on it and the elements of the number axis. This is a justification for representing the set of real numbers as a straight line.
[1] R. Dedekind, "Essays on the theory of numbers", Dover, reprint (1963) (Translated from German)
[2] V. Dantscher, "Vorlesungen über die Weierstrass'sche Theorie der irrationalen Zahlen", Teubner (1908)
[3] G. Cantor, "Ueber die Ausdehnung eines Satzes aus der Theorie der trigonometrischen Reihen", Math. Ann., 5 (1872) pp. 123–130
[4] V.V. Nemytskii, M.I. Sludskaya, A.N. Cherkasov, "A course of mathematical analysis", 1, Moscow (1957)
[5] V.A. Il'in, E.G. Poznyak, "Fundamentals of mathematical analysis", 1–2, MIR (1982) (Translated from Russian)
[6] L.D. Kudryavtsev, "A course in mathematical analysis", 1, Moscow (1988) (In Russian)
[7] S.M. Nikol'skii, "A course of mathematical analysis", 1–2, MIR (1977) (Translated from Russian)
[8] G.M. Fichtenholz, "Differential und Integralrechnung", 1, Deutsch. Verlag Wissenschaft. (1964)
[9] N. Bourbaki, "General topology", Elements of mathematics, Addison-Wesley (1966) pp. Chapts. 3–4 (Translated from French)
The most important theory of proportions in Antiquity was given by Eudoxus of Cnidus (ca. 400 B.C. – 347 B.C.). One can find this theory in Euclid's Elements, book V. See also [a2], [a3] and the Elements of Euclid.
Irrational numbers can be divided into two different kinds: algebraic numbers and transcendental numbers. An algebraic number is a root of an algebraic equation with (rational) integers as coefficients. A transcendental number is not the root of any algebraic equation with (rational) integral coefficients. The usual notation for the field of rational (respectively, real) numbers is $\mathbf{Q}$ (respectively, $\mathbf{R}$).
[a1] T.L. Heath, "A history of Greek mathematics", Dover, reprint (1981)
[a2] T.L. Heath, "The thirteen books of Euclid's elements", 1–3, Dover, reprint (1956) (Translated from the Greek)
[a3] W.R. Knorr, "The evolution of the Euclidean elements", Reidel (1975)
[a4] E. Landau, "Foundations of analysis", Chelsea, reprint (1951) (Translated from German)
[a5] W. Rudin, "Principles of mathematical analysis", McGraw-Hill (1976) pp. 75–78
[a6] H. Gericke, "Geschichte des Zahlbegriffs", B.I. Wissenschaftsverlag Mannheim (1970)
Real number. L.D. Kudryavtsev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Real_number&oldid=13567
Let’s start this section off with a quick discussion on what
vectors are used for. Vectors are used
to represent quantities that have both a magnitude and a direction. Good examples of quantities that can be
represented by vectors are force and velocity.
Both of these have a direction and a magnitude.
Let’s consider force for a second. A force of say 5 Newtons that is applied in a particular
direction can be applied at any point in space.
In other words, the point where we apply the force does not change the
force itself. Forces are independent of
the point of application. To define a
force all we need to know is the magnitude of the force and the direction that
the force is applied in.
The same idea holds more generally with vectors. Vectors only impart magnitude and
direction. They don’t impart any
information about where the quantity is applied. This is an important idea to always remember
in the study of vectors.
In a graphical sense vectors are represented by directed
line segments. The length of the line
segment is the magnitude of the vector and the direction of the line segment is
the direction of the vector. However,
because vectors don’t impart any information about where the quantity is applied, any directed line segment with the same length and direction will represent the same vector.

Consider the sketch below.
Each of the directed line segments in the sketch represents
the same vector. In each case the vector
starts at a specific point then moves 2 units to the left and 5 units up. The notation that we’ll use for this vector is $\vec v = \left\langle -2, 5 \right\rangle$, and each of the directed line segments in the sketch are called representations of the vector.

Be careful to distinguish vector notation, $\left\langle -2, 5 \right\rangle$, from the notation we use to represent coordinates of points, $\left( -2, 5 \right)$. The vector denotes a magnitude and a direction of a quantity while the point denotes a location in space. So don’t mix the notations up!
A representation of the vector $\vec v = \left\langle a_1, a_2 \right\rangle$ in two dimensional space is any directed line segment, $\overrightarrow{AB}$, from the point $A = (x, y)$ to the point $B = (x + a_1, y + a_2)$. Likewise a representation of the vector $\vec v = \left\langle a_1, a_2, a_3 \right\rangle$ in three dimensional space is any directed line segment, $\overrightarrow{AB}$, from the point $A = (x, y, z)$ to the point $B = (x + a_1, y + a_2, z + a_3)$.
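A tiny sketch of this idea: the same vector $\left\langle -2, 5 \right\rangle$ is represented by directed line segments starting at different points, each ending at the start point shifted by the vector's components.

```python
# Different representations (different start points) of the same vector (-2, 5).
vector = (-2, 5)

def terminal_point(start, v):
    return tuple(s + c for s, c in zip(start, v))

for start in [(0, 0), (3, 1), (-4, 7)]:
    print(start, "->", terminal_point(start, vector))
# every segment has the same length and direction, so each represents (-2, 5)
```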
Note that there is very little difference between the two
dimensional and three dimensional formulas above. To get from the three dimensional formula to
the two dimensional formula all we did is take out the third
component/coordinate. Because of this
most of the formulas here are given only in their three dimensional
version. If we need them in their two
dimensional form we can easily modify the three dimensional form.
There is one representation of a vector that is special in some way. The representation of the vector $\vec v = \left\langle a_1, a_2, a_3 \right\rangle$ that starts at the point $A = (0, 0, 0)$ and ends at the point $B = (a_1, a_2, a_3)$ is called the position vector of the point $(a_1, a_2, a_3)$. So, when we talk about position vectors we are specifying the initial and final point of the vector.
Position vectors are useful if we ever need to represent a
point as a vector. As we’ll see there
are times in which we definitely are going to want to represent points as
vectors. In fact, we’re going to run
into topics that can only be done if we represent points as vectors.
Next we need to discuss briefly how to generate a vector
given the initial and final points of the representation. Given the two points $A = (a_1, a_2, a_3)$ and $B = (b_1, b_2, b_3)$ the vector with the representation $\overrightarrow{AB}$ is

$$\vec v = \left\langle b_1 - a_1, \; b_2 - a_2, \; b_3 - a_3 \right\rangle.$$
Note that we have to be very careful with direction
here. The vector above is the vector
that starts at A and ends at B.
The vector that starts at B
and ends at A, i.e. with representation $\overrightarrow{BA}$, is

$$\vec w = \left\langle a_1 - b_1, \; a_2 - b_2, \; a_3 - b_3 \right\rangle.$$
These two vectors are different and so we do need to always
pay attention to what point is the starting point and what point is the ending
point. When determining the vector between two points we always subtract the
initial point from the terminal point.
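A short sketch of this rule (the points used are just examples):

```python
# Vector between two points: subtract the initial point from the terminal point.
def vector_between(initial, terminal):
    return tuple(t - i for i, t in zip(initial, terminal))

A = (2, -7, 0)
B = (1, -3, -5)
print(vector_between(A, B))    # vector from A to B
print(vector_between(B, A))    # vector from B to A: every sign is flipped
```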
Example 1 Give the vector for each of the following.
(a) The vector from $(2, -7, 0)$ to $(1, -3, -5)$.
(b) The vector from $(1, -3, -5)$ to $(2, -7, 0)$.
(c) The position vector for $(-90, 4)$.

Solution
(a) Remember that to construct this vector we subtract coordinates of the starting point from the ending point.

$$\vec a = \left\langle 1 - 2, \; -3 - (-7), \; -5 - 0 \right\rangle = \left\langle -1, 4, -5 \right\rangle$$

(b) Same thing here.

$$\vec b = \left\langle 2 - 1, \; -7 - (-3), \; 0 - (-5) \right\rangle = \left\langle 1, -4, 5 \right\rangle$$

Notice that the only difference between the first two is the signs are all opposite. This difference is important as it is this difference that tells us that the two vectors point in opposite directions.

(c) Not much to this one other than acknowledging that the position vector of a point is nothing more than a vector with the point's coordinates as its components.

$$\vec c = \left\langle -90, 4 \right\rangle$$
We now need to start discussing some of the basic concepts that we will run into on occasion.

The magnitude, or length, of the vector $\vec v = \left\langle a_1, a_2, a_3 \right\rangle$ is given by $\left\| \vec v \right\| = \sqrt{a_1^{\,2} + a_2^{\,2} + a_3^{\,2}}$; for a two dimensional vector simply drop the third term.

We also have the following fact about the magnitude: if $\left\| \vec v \right\| = 0$ then $\vec v = \vec 0$.

This should make sense. Because we square all the components the only way we can get zero out of the formula was for the components to be zero in the first place.
Any vector with magnitude of 1, i.e. $\left\| \vec u \right\| = 1$, is called a unit vector.
Example 3 Which
of the vectors from Example 2 are unit vectors?
Both the second and fourth vectors had a length of 1 and so they are the only unit vectors from the second example.
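A small sketch of the magnitude computation and the unit-vector test, using the magnitude formula above; the sample vectors are illustrative.

```python
# Magnitude of a vector and a simple unit-vector check.
import math

def magnitude(v):
    return math.sqrt(sum(component ** 2 for component in v))

def is_unit_vector(v, tol=1e-12):
    return abs(magnitude(v) - 1.0) <= tol

print(magnitude((3, -5, 10)))                                  # sqrt(134)
print(is_unit_vector((1 / math.sqrt(5), -2 / math.sqrt(5))))   # True
print(is_unit_vector((0, 0)))                                  # False: length 0
```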
The vector that we saw in the first example is called a zero vector since its components are all zero. Zero vectors are often denoted by $\vec 0$. Be careful to distinguish 0 (the number) from $\vec 0$ (the vector). The number 0 denotes the origin in space, while the vector $\vec 0$ denotes a vector that has no magnitude or direction.

The fourth vector from the second example is called a standard basis vector. In three dimensional space there are three standard basis vectors,

$$\vec i = \left\langle 1, 0, 0 \right\rangle, \qquad \vec j = \left\langle 0, 1, 0 \right\rangle, \qquad \vec k = \left\langle 0, 0, 1 \right\rangle.$$

In two dimensional space there are two standard basis vectors,

$$\vec i = \left\langle 1, 0 \right\rangle, \qquad \vec j = \left\langle 0, 1 \right\rangle.$$

Note that standard basis vectors are also unit vectors.
We are pretty much done with this section however, before proceeding to the next section we should point out that vectors are not restricted to two dimensional or three dimensional space. Vectors can exist in general n-dimensional space. The general notation for an n-dimensional vector is

$$\vec v = \left\langle a_1, a_2, a_3, \dots, a_n \right\rangle,$$

and each of the $a_i$'s are called components of the vector.
Because we will be working almost exclusively with two and
three dimensional vectors in this course most of the formulas will be given for
the two and/or three dimensional cases.
However, most of the concepts/formulas will work with general vectors
and the formulas are easily (and naturally) modified for general n-dimensional
vectors. Also, because it is easier to
visualize things in two dimensions most of the figures related to vectors will
be two dimensional figures.
So, we need to be careful to not get too locked into the two
or three dimensional cases from our discussions in this chapter. We will be working in these dimensions either
because it’s easier to visualize the situation or because physical restrictions
of the problems will enforce a dimension upon us.
The Coriolis effect is an apparent deflection of a moving object in a rotating frame of reference. The effect is named after Gaspard-Gustave Coriolis, a French scientist, who discussed it in 1835, though the mathematics appeared in the tidal equations of Laplace in 1778.
The formula for the Coriolis acceleration is

a_C = −2 ω × v,

where (here and below) v is the velocity in the rotating system, ω is the angular velocity (the rotation rate and orientation) of the rotating system, and × denotes the vector cross product. The equation may be multiplied by the mass of the relevant object to produce the Coriolis force. See Fictitious force for a derivation.
Note that this is a vector cross product, not an ordinary multiplication. In non-vector terms: at a given rate of rotation of the observer, the magnitude of the Coriolis acceleration of the object will be proportional to the speed of the object and also to the sine of the angle between the direction of movement of the object and the axis of rotation.
The Coriolis effect is the behavior added by the Coriolis acceleration. The formula implies that the Coriolis acceleration is perpendicular both to the direction of the velocity of the moving mass and to the rotation axis. So in particular:
- if the velocity (as always, in the rotating system) is zero, the Coriolis acceleration is zero
- if the velocity is parallel to the rotation axis, the Coriolis acceleration is zero
- if the velocity is straight (perpendicularly) inward to the axis, the acceleration will follow the direction of rotation
- if the velocity is following the rotation, the acceleration will be (perpendicularly) outward from the axis
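A minimal numerical sketch of the formula and the special cases above; the rotation axis is assumed to lie along +z, and the test point sits on the positive x-axis.

```python
# Coriolis acceleration a_C = -2 * omega x v for a few velocities, with the
# rotation axis along +z (counterclockwise when seen from above).
import numpy as np

omega = np.array([0.0, 0.0, 7.2921e-5])      # Earth's rotation rate, rad/s

def coriolis_acceleration(v):
    return -2.0 * np.cross(omega, v)

print(coriolis_acceleration(np.array([0.0, 0.0, 5.0])))    # parallel to the axis: zero
print(coriolis_acceleration(np.array([-5.0, 0.0, 0.0])))   # straight inward (at +x): along +y, with the rotation
print(coriolis_acceleration(np.array([0.0, 5.0, 0.0])))    # following the rotation (at +x): along +x, outward
```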
When considering atmospheric dynamics, the Coriolis acceleration (strictly a 3-d vector in the formula above) appears only in the horizontal equations due to the neglect of products of small quantities and other approximations. The term that appears is then

−f k × v = (f v, −f u),

where k is a unit local vertical, f = 2ω sin(latitude) is called the Coriolis parameter and (u,v) are the horizontal components of the velocity.
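A short sketch of the Coriolis parameter and the resulting horizontal acceleration (f v, −f u); the numbers are only illustrative.

```python
# Coriolis parameter f = 2*omega*sin(latitude) and the horizontal term (f*v, -f*u).
import math

OMEGA = 7.2921e-5                                    # Earth's rotation rate, rad/s

def coriolis_parameter(latitude_deg):
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

def horizontal_coriolis(u, v, latitude_deg):
    f = coriolis_parameter(latitude_deg)
    return (f * v, -f * u)

print(coriolis_parameter(45.0))                      # about 1.03e-4 per second
print(horizontal_coriolis(10.0, 0.0, 45.0))          # an eastward wind is pushed southward (to its right)
```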
There are essentially two ways to understand the Coriolis effect.
The first is to simply say that in a rotating frame of reference a force, given by the formula above, will be experienced by an object moving in that frame of reference. For the case of an object moving on the surface of the earth in the northern hemisphere, the formula shows that the object experiences a force proportional to its speed, directed at right angles to its path, pushing it to the right. This has the advantage of being simple, easy to understand, and mathematically always giving the correct answer. Nonetheless many people find it lacking as a "physical explanation" and seek another way to "understand" the Coriolis force.
The other way is to picture oneself outside the rotating frame, looking at objects moving in the rotating frame. For concreteness, consider objects moving nearly parallel to the surface of the earth - a cannon ball fired horizontally, for example.
Suppose the cannon is at, say, 45 degrees north, and is fired facing due north. The earth is rotating towards the east; at the instant of firing, the cannon ball shares in this rotation. Further north, the earth is rotating more slowly (the angular velocity is the same, but the rotation rate of the surface is the angular velocity times the distance from the polar axis, and hence decreases from equator to pole). Hence, as the ball travels north the earth underneath is rotating more slowly and the cannon ball, relative to this, is seen to be moving towards the east; this is then equivalent to a force pushing it towards the east.
Some care is needed in formalising this argument; a direct formalisation leads to a Coriolis effect of only half the correct magnitude.
The scale of the effect can be seen by taking the distance from the North Pole to the Equator, which is a quarter of the circumference, roughly 10,000 kilometers. The missile, or any other object, must gain 1670 km/h divided by 10,000 = 0.167 km/h on average for each kilometer travelled south. This small difference in speed for each kilometer travelled is too small to be noticed by people attached to the surface of the earth. However weather systems occupy large areas and are not attached to the surface of the earth. Consequently the Coriolis effect is an important factor in meteorology.
Like the missile in the Northern Hemisphere, a mass of air travelling south has less speed than the air at its destination and so appears to be deflected west. Conversely air travelling north has excess speed and appears to be deflected east. As the wind blows from all sides to fill an area of low pressure, the Coriolis effect therefore creates rotation around the low pressure system. The winds around areas of low pressure circulate counter-clockwise in the Northern Hemisphere, while in the Southern Hemisphere this circulation is clockwise. The rotation produces the characteristic swirls that can be seen on satellite photographs of weather systems, and of hurricanes in particular.
A common fallacy is that the Coriolis effect affects the rotation of water flowing through plug-holes; see below.
Although the rotation of the Earth provides the most obvious examples of the Coriolis effect, it arises in other rotating systems. For example, someone standing at the center of a rotating carousel could throw a ball to someone standing at the edge. If thrown without making allowance for the Coriolis effect, the ball would not reach its target.
Flow around a low pressure area
If a low pressure area forms in the atmosphere, air will tend to flow in towards it, but will be deflected perpendicular to its velocity by the Coriolis acceleration. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow.
The force balance is largely between the pressure gradient force acting towards the low-pressure area and the Coriolis acceleration acting away from the center of the low pressure. Instead of flowing down the gradient, the air tends to flow perpendicular to the air-pressure gradient and forms a cyclonic flow. This is an example of a more general case of geostrophic flow, in which air flows along isobars. On a non-rotating planet the air would flow along the straightest possible line, quickly leveling the air pressure. Note that the force balance is thus very different from the case of "inertial circles" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.
This pattern of deflection, and the direction of movement, is called Buys Ballot's law. The pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is counterclockwise; in the Southern Hemisphere it is clockwise, because the rotational dynamics there is a mirror image. Cyclones cannot form on the equator, and they rarely travel towards it, because in the equatorial region the Coriolis parameter is small, and exactly zero on the equator itself.
People often ask whether the Coriolis effect determines the direction in which bathtubs or toilets drain, and whether water always drains in one direction in the Northern Hemisphere and in the other direction in the Southern Hemisphere. The answer is almost always no. The Coriolis effect is a few orders of magnitude smaller than other, essentially random influences on drain direction, such as the geometry of the sink, toilet, or tub; whether it is flat or tilted; and the direction in which water was initially added. If one takes great care to create a flat circular pool of water with a small, smooth drain, waits for the eddies caused by filling it to die down, and opens the drain from below (or otherwise removes it without introducing new eddies into the water), then it is possible to observe the influence of the Coriolis effect on the direction of the resulting vortex. There is a good deal of misunderstanding on this point, as most people (including many scientists) do not realize how small the Coriolis effect is on small systems. This is less of a puzzle once one remembers that the earth rotates only once per day, whereas a bathtub takes only minutes to drain. The rotation does speed up around the plug hole: as water is drawn towards the drain, the radius at which its mass circulates decreases, so its rate of rotation increases from the imperceptibly slow background level to a noticeable spin in order to conserve its angular momentum (the same effect as bringing one's arms in on a swivel chair to make it spin faster).
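The angular-momentum argument can be illustrated with a toy estimate rather than a fluid simulation. The sketch below assumes a parcel of water conserves its angular momentum (omega times r squared stays constant) as it is drawn from a starting radius toward the drain, beginning from the Earth's background rotation at 45 degrees latitude; the two radii are invented for illustration.

    import math

    OMEGA_EARTH = 7.2921e-5                                  # rad/s
    background = OMEGA_EARTH * math.sin(math.radians(45))    # effective background spin at 45N

    r_start = 0.30      # metres from the drain where the parcel begins (assumed)
    r_drain = 0.005     # radius of the drain (assumed)

    # Conservation of angular momentum for the parcel: omega * r^2 = constant
    omega_drain = background * (r_start / r_drain) ** 2
    period_s = 2 * math.pi / omega_drain

    print(f"background spin : {background:.2e} rad/s (one turn in ~{2*math.pi/background/3600:.0f} h)")
    print(f"spin at drain   : {omega_drain:.2e} rad/s (one turn in ~{period_s:.0f} s)")
    # The amplified spin only becomes visible if the (normally much larger) eddies
    # left over from filling the basin have first been allowed to die down.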
The time and space scales determine how important the Coriolis effect is. Weather systems are large enough to feel the curvature of the earth and rotate on a timescale comparable to the earth's rotation (roughly once a day), so for them the Coriolis effect is dominant. An unguided missile, if fired far enough, travels far enough and is in the air long enough for the effect to be noticeable, though the direction in which it was fired still dominates; until this was understood, long-range shells landed close to, but to the right of, where they were aimed (or to the left in the Southern Hemisphere, though few were fired there). You do not worry about which hemisphere you are in when playing catch in the garden, even though exactly the same physics applies at that smaller scale; in terms of scale, a draining bathtub is closer to a game of catch than to a weather system.
Coriolis flow meter
A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate of a fluid through a tube. The operating principle was introduced in 1977 by Micro Motion Inc. Simple flow meters measure volume flow rate, which is proportional to mass flow rate only when the density of the fluid is constant. If the fluid has varying density, or contains bubbles, then the volume flow rate multiplied by the density is not an accurate measure of the mass flow rate. The Coriolis mass flow meter operating principle essentially involves rotation, though not through a full circle. It works by inducing a vibration of the tube through which the fluid passes, and subsequently monitoring and analysing the inertial effects that occur in response to the combination of the induced vibration and the mass flow.
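The point about volume versus mass flow can be made with a two-line calculation: the same volume flow corresponds to different mass flows when the density changes. The numbers below are arbitrary illustrative values.

    volume_flow_L_per_min = 10.0            # reading from a simple volumetric meter

    for name, density_kg_per_L in [("cold fuel", 0.84), ("warm fuel", 0.80), ("aerated fuel", 0.60)]:
        mass_flow = volume_flow_L_per_min * density_kg_per_L
        print(f"{name:13s}: {mass_flow:.1f} kg/min")
    # A Coriolis meter responds to the mass flow directly, so these cases are not confused.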
In firing projectiles over a significant distance, the rotation of the Earth must be taken into account. During its flight, the projectile moves in a straight line (not counting gravitation and air resistance for now). The target on the ground, co-rotating with the Earth, is a moving target, so the gun must be aimed not directly at the target, but at a point where the projectile and the target will arrive simultaneously.
Also see: ballistics.
In polyatomic molecules, the molecular motion can be described as a rigid-body rotation plus internal vibration of the atoms about their equilibrium positions. The vibrations mean that the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects are therefore present and cause the atoms to move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels.
Visualisation of the Coriolis effect
To demonstrate the Coriolis effect, a turntable can be used. If the turntable is flat, then the centrifugal force, which always acts outwards from the rotation axis, forces objects off the edge. If the turntable has a bowl shape, then the component of gravity tangential to the bowl surface tends to counteract the centrifugal force. If the bowl is parabolic and spun at the appropriate rate, then gravity exactly counteracts the centrifugal force, and the only net force acting (apart from friction, which can be minimised) is the Coriolis force. If the turntable is a rimmed dish filled with liquid, the rotating liquid naturally assumes a parabolic shape for the same reason. If a liquid that sets after several hours is used, such as a synthetic resin, a permanent parabolic surface is obtained.
Disks cut from cylinders of dry ice can be used as pucks, moving almost frictionlessly over the surface of the parabolic turntable and allowing the dynamic phenomena to show themselves. To also see the motions from a rotating point of view, a video camera is attached to the turntable so that it co-rotates with it. A setup of this type, with a parabolic turntable whose center is about a centimeter deeper than its rim, is used at the Massachusetts Institute of Technology (MIT) for teaching purposes.
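For a rotating liquid, the equilibrium surface is the paraboloid z = omega^2 r^2 / (2 g). As a rough, hedged check of a dish of the kind described above (center about a centimeter below the rim), the sketch below computes the rim-to-center depth for an assumed dish radius and rotation period; both numbers are guesses chosen only to make the example concrete.

    import math

    g = 9.81                 # m/s^2
    radius = 0.40            # dish radius in metres (assumed)
    period = 6.0             # rotation period in seconds (assumed)

    omega = 2 * math.pi / period
    depth = omega ** 2 * radius ** 2 / (2 * g)   # height of the rim above the center

    print(f"omega = {omega:.2f} rad/s, rim is {depth*100:.1f} cm above the center")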
If an object moves subject only to the Coriolis force, it will move in a circular trajectory called an 'inertial circle'.
In an inertial circle, the force balance is sometimes most easily understood as being between two fictitious forces, the centrifugal force (directed outwards) and the Coriolis force (directed inwards). The dynamics is thus quite different to mid-latitude cyclones or hurricanes, in which cases the force balance is between the pressure gradient force (directed inwards) and the Coriolis force (directed out). In particular, this means that the direction of orbit is opposite to that of mid-latitude cyclones.
The frequency of these oscillations is given by f, the Coriolis parameter, and their radius by
    r = v / f,
where v is the speed of the moving air mass. On the Earth a typical mid-latitude value for f is 10⁻⁴ s⁻¹; hence for a typical atmospheric speed of 10 m/s the radius is 100 km, and the period, 2π/f, is about 17 hours. For a turntable rotating about once every six seconds, f is roughly 2 s⁻¹, so the radius of the circles, in centimeters, is about half the speed in centimeters per second.
The centrifugal force is v2 / r and the Coriolis force vf, hence the forces balance when v2 / r = vf, i.e. v / f = r, giving the expression above for the radius of the circles.
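The balance v²/r = vf, with radius r = v/f and period 2π/f, is easy to tabulate. The sketch below reproduces the atmospheric example above and a turntable example; the 5 cm/s puck speed is an assumed value.

    import math

    def inertial_circle(speed, f):
        """Radius and period of an inertial circle for speed v and Coriolis parameter f."""
        radius = speed / f
        period = 2 * math.pi / f
        return radius, period

    # Mid-latitude atmosphere: v = 10 m/s, f ~ 1e-4 1/s
    r, T = inertial_circle(10.0, 1e-4)
    print(f"atmosphere: radius {r/1000:.0f} km, period {T/3600:.1f} h")

    # Turntable rotating once every 6 s: f = 2 * (2*pi/6) ~ 2.1 1/s, puck speed 5 cm/s
    f_table = 2 * (2 * math.pi / 6.0)
    r, T = inertial_circle(0.05, f_table)
    print(f"turntable : radius {r*100:.1f} cm, period {T:.1f} s")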
If the rotating system is a turntable, then f is constant and the trajectories are exact circles. On a rotating planet, f varies with latitude and the circles do not exactly close.
Closer to the equator, the component of the planet's rotation about the local vertical is smaller; it varies as sin(latitude), and this variation is what the parameter f takes into account. For a given speed the oscillations are smallest at the poles and would grow indefinitely towards the equator, except that the dynamics ceases to apply close to the equator. On a rotating planet the oscillations are only approximately circular and do not form closed loops.
Physics and meteorology references
- Gill, A. E., Atmosphere–Ocean Dynamics, Academic Press, 1982.
- Durran, D. R., 1993: Is the Coriolis force really responsible for the inertial oscillation? Bull. Amer. Meteor. Soc., 74, 2179–2184; Corrigenda. Bulletin of the American Meteorological Society, 75, 261
- Durran, D. R., and S. K. Domonkos, 1996: An apparatus for demonstrating the inertial oscillation. Bulletin of the American Meteorological Society, 77, 557–559.
- Marion, Jerry B. 1970, Classical Dynamics of Particles and Systems, Academic Press.
- Persson, A., 1998 How do we Understand the Coriolis Force? Bulletin of the American Meteorological Society 79, 1373-1385.
- Symon, Keith. 1971, Mechanics, Addison-Wesley
- Norman Ph. A., 2000 An Explication of the Coriolis Effect. Bulletin of the American Meteorological Society: Vol. 81, No. 2, pp. 299–303.
- Grattan-Guinness, I., Ed., 1994: Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences. Vols. I and II. Routledge, 1840 pp.
- Grattan-Guinness, I., 1997: The Fontana History of the Mathematical Sciences. Fontana, 817 pp.
- Khrgian, A., 1970: Meteorology—A Historical Survey. Vol. 1. Keter Press, 387 pp.
- Kuhn, T. S., 1977: Energy conservation as an example of simultaneous discovery. The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, 66–104.
- Kutzbach, G., 1979: The Thermal Theory of Cyclones. A History of Meteorological Thought in the Nineteenth Century. Amer. Meteor. Soc., 254 pp.
- Coriolis effect misconceptions
- Rotating turntable setup: the concave turntable used at MIT for educational purposes.
- Taylor columns: the counterintuitive behavior of a rotating fluid; demonstration at MIT for educational purposes.
- The Coriolis Effect (PDF, 17 pages): a general discussion by Anders Persson of various aspects of the Coriolis effect, including Foucault's pendulum and Taylor columns.
- Coriolis Force - from ScienceWorld
| http://www.exampleproblems.com/wiki/index.php/Coriolis_effect | 13
123 | Polarization (also polarisation) is a property of waves that can oscillate with more than one orientation. Electromagnetic waves, such as light, and gravitational waves exhibit polarization; sound waves in a gas or liquid do not have polarization because the medium vibrates only along the direction in which the waves are travelling.
By convention, the polarization of light is described by specifying the orientation of the wave's electric field at a point in space over one period of the oscillation. When light travels in free space, in most cases it propagates as a transverse wave—the polarization is perpendicular to the wave's direction of travel. In this case, the electric field may be oriented in a single direction (linear polarization), or it may rotate as the wave travels (circular or elliptical polarization). In the latter case, the field may rotate in either direction. The direction in which the field rotates is the wave's chirality or handedness.
The polarization of an electromagnetic (EM) wave can be more complicated in certain cases. For instance, in a waveguide such as an optical fiber or for radially polarized beams in free space, the fields can have longitudinal as well as transverse components. Such EM waves are either TM or hybrid modes.
For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the direction of travel, so there is no polarization. In a solid medium, however, sound waves can be transverse. In this case, the polarization is associated with the direction of the shear stress in the plane perpendicular to the propagation direction. This is important in seismology.
Polarization is significant in areas of science and technology dealing with wave propagation, such as optics, seismology, telecommunications and radar science. The polarization of light can be measured with a polarimeter. A polarizer is a device that affects polarization.
Basics: plane waves
The simplest manifestation of polarization to visualize is that of a plane wave, which is a good approximation of most light waves (a plane wave is a wave with infinitely long and wide wavefronts). For plane waves Maxwell's equations, specifically Gauss's laws, impose the transversality requirement that the electric and magnetic field be perpendicular to the direction of propagation and to each other. Conventionally, when considering polarization, the electric field vector is described and the magnetic field is ignored since it is perpendicular to the electric field and proportional to it. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled x and y (with z indicating the direction of travel). For a simple harmonic wave, where the amplitude of the electric vector varies in a sinusoidal manner in time, the two components have exactly the same frequency. However, these components have two other defining characteristics that can differ. First, the two components may not have the same amplitude. Second, the two components may not have the same phase, that is they may not reach their maxima and minima at the same time. Mathematically, the electric field of a plane wave can be written as,
    E(z, t) = ( A_x cos(kz − ωt), A_y cos(kz − ωt + φ), 0 ),
where A_x and A_y are the amplitudes of the x and y directions and φ is the relative phase between the two components.
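A short numerical sketch of this description: sample the two field components over one period and trace the path of the tip of the electric vector. The amplitudes and phase below are arbitrary choices; φ = 0 gives a line (linear polarization), while φ = 90° with equal amplitudes gives a circle.

    import math

    def trace_ellipse(A_x, A_y, phase_deg, samples=8):
        """Return points (E_x, E_y) traced by the field tip over one optical cycle."""
        phi = math.radians(phase_deg)
        pts = []
        for n in range(samples):
            t = 2 * math.pi * n / samples          # the phase kz - omega*t over one period
            pts.append((A_x * math.cos(t), A_y * math.cos(t + phi)))
        return pts

    for label, (ax, ay, ph) in {
        "linear  ": (1.0, 1.0, 0.0),
        "circular": (1.0, 1.0, 90.0),
        "elliptic": (1.0, 0.5, 45.0),
    }.items():
        pts = trace_ellipse(ax, ay, ph, samples=4)
        print(label, [(round(x, 2), round(y, 2)) for x, y in pts])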
Polarization state
The shape traced out in a fixed plane by the electric vector as such a plane wave passes over it (a Lissajous figure) is a description of the polarization state. The following figures show some examples of the evolution of the electric field vector (black), with time (the vertical axes), at a particular point in space, along with its x and y components (red/left and blue/right), and the path traced by the tip of the vector in the plane (yellow in figure 1&3, purple in figure 2): The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation.
In the leftmost figure above, the two orthogonal (perpendicular) components are in phase. In this case the ratio of the strengths of the two components is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarization. The direction of this line depends on the relative amplitudes of the two components.
In the middle figure, the two orthogonal components have exactly the same amplitude and are exactly ninety degrees out of phase. In this case one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be ninety degrees ahead of the y component or it can be ninety degrees behind the y component. In this special case the electric vector traces out a circle in the plane, so this special case is called circular polarization. The direction the field rotates in depends on which of the two phase relationships exists. These cases are called right-hand circular polarization and left-hand circular polarization, depending on which way the electric vector rotates and the chosen convention.
Another case is when the two components are not in phase and either do not have the same amplitude or are not ninety degrees out of phase, though their phase offset and their amplitude ratio are constant. This kind of polarization is called elliptical polarization because the electric vector traces out an ellipse in the plane (the polarization ellipse). This is shown in the above figure on the right.
The "Cartesian" decomposition of the electric field into x and y components is, of course, arbitrary. Plane waves of any polarization can be described instead by combining any two orthogonally polarized waves, for instance waves of opposite circular polarization. The Cartesian polarization decomposition is natural when dealing with reflection from surfaces, birefringent materials, or synchrotron radiation. The circularly polarized modes are a more useful basis for the study of light propagation in stereoisomers.
Though this section discusses polarization for idealized plane waves, all the above is a very accurate description for most practical optical experiments which use TEM modes, including Gaussian optics.
Unpolarized light
Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be unpolarized. If there is partial correlation between the emitters, the light is partially polarized. If the polarization is consistent across the spectrum of the source, partially polarized light can be described as a superposition of a completely unpolarized component, and a completely polarized one. One may then describe the light in terms of the degree of polarization, and the parameters of the polarization ellipse.
For ease of visualization, polarization states are often specified in terms of the polarization ellipse, specifically its orientation and elongation. A common parameterization uses the orientation angle ψ, the angle between the major semi-axis of the ellipse and the x-axis (also known as the tilt angle or azimuth angle), and the ellipticity ε, the major-to-minor-axis ratio (also known as the axial ratio). An ellipticity of zero or infinity corresponds to linear polarization, and an ellipticity of 1 corresponds to circular polarization. The ellipticity angle, χ = arccot ε = arctan 1/ε, is also commonly used. An example is shown in the diagram to the right. An alternative to the ellipticity or ellipticity angle is the eccentricity; however, unlike the azimuth angle and ellipticity angle, the eccentricity has no obvious geometrical interpretation in terms of the Poincaré sphere (see below).
Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector):
    e = ( a_x e^{iθ_x}, a_y e^{iθ_y} ).
Here a_x and a_y denote the amplitude of the wave in the two components of the electric field vector, while θ_x and θ_y represent the phases. The product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. The basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization.
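As a small worked example (a sketch using the conventions just described), the code below builds Jones vectors for horizontal, vertical, and the two circular polarizations and checks orthogonality with the complex inner product. Which circular vector is called "right" and which "left" depends on convention, so the labels below are deliberately neutral.

    import numpy as np

    # Normalized Jones vectors in the (x, y) linear basis
    H = np.array([1, 0], dtype=complex)                    # horizontal linear
    V = np.array([0, 1], dtype=complex)                    # vertical linear
    C1 = np.array([1, -1j], dtype=complex) / np.sqrt(2)    # circular, one handedness
    C2 = np.array([1, 1j], dtype=complex) / np.sqrt(2)     # circular, other handedness

    def inner(a, b):
        """Complex inner product <a, b>; zero means the states are orthogonal."""
        return np.vdot(a, b)

    print("<H, V>   =", inner(H, V))        # 0 -> orthogonal
    print("<C1, C2> =", inner(C1, C2))      # 0 -> orthogonal
    print("|<H, C1>| =", abs(inner(H, C1))) # 1/sqrt(2) -> not orthogonal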
Regardless of whether polarization ellipses are represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north.
S and P Polarization
Another coordinate system frequently used relates to the plane made by the propagation direction and a vector perpendicular to the plane of a reflecting surface. This is known as the plane of incidence. The component of the electric field parallel to this plane is termed p-like (parallel) and the component perpendicular to this plane is termed s-like (from senkrecht, German for perpendicular). Light with a p-like electric field is said to be p-polarized, pi-polarized, tangential plane polarized, or is said to be a transverse-magnetic (TM) wave. Light with an s-like electric field is s-polarized, also known as sigma-polarized or sagittal plane polarized, or it can be called a transverse-electric (TE) wave. However, there is no universal convention in this TE and TM naming scheme; some authors refer to light with a p-like electric field as TE and light with an s-like electric field as TM. Traditionally, TE and TM are used to indicate whether the electric or the magnetic field is horizontal.
Parameterization of incoherent or partially polarized radiation
In the case of partially polarized radiation, the Jones vector varies in time and space in a way that differs from the constant rate of phase rotation of monochromatic, purely polarized waves. In this case, the wave field is likely stochastic, and only statistical information can be gathered about the variations and correlations between components of the electric field. This information is embodied in the coherency matrix:
    Ψ = ⟨ e e† ⟩, with entries ⟨ E_i E_j* ⟩ for i, j in {x, y},
where angular brackets denote averaging over many wave cycles. Several variants of the coherency matrix have been proposed: the Wiener coherency matrix and the spectral coherency matrix of Richard Barakat measure the coherence of a spectral decomposition of the signal, while the Wolf coherency matrix averages over all time/frequencies.
The coherency matrix contains all second order statistical information about the polarization. This matrix can be decomposed into the sum of two idempotent matrices, corresponding to the eigenvectors of the coherency matrix, each representing a polarization state that is orthogonal to the other. An alternative decomposition is into completely polarized (zero determinant) and unpolarized (scaled identity matrix) components. In either case, the operation of summing the components corresponds to the incoherent superposition of waves from the two components. The latter case gives rise to the concept of the "degree of polarization"; i.e., the fraction of the total intensity contributed by the completely polarized component.
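The decomposition just described can be sketched numerically: build a coherency matrix as an incoherent mixture of a polarized and an unpolarized part, then recover the degree of polarization from its eigenvalues. The 70/30 mixture below is an arbitrary assumption chosen only for illustration.

    import numpy as np

    def coherency(jones):
        """Coherency matrix <e e^dagger> of a single pure (fully polarized) state."""
        e = np.asarray(jones, dtype=complex).reshape(2, 1)
        return e @ e.conj().T

    # Mix: 70% horizontally polarized light + 30% unpolarized light (scaled identity)
    psi = 0.7 * coherency([1, 0]) + 0.3 * 0.5 * np.eye(2)

    eigvals = np.linalg.eigvalsh(psi)              # real, non-negative
    p = (eigvals.max() - eigvals.min()) / eigvals.sum()
    print("degree of polarization p =", round(p, 3))   # 0.7 for this mixture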
The coherency matrix is not easy to visualize, and it is therefore common to describe incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. An alternative and mathematically convenient description is given by the Stokes parameters, introduced by George Gabriel Stokes in 1852. The relationship of the Stokes parameters to intensity and polarization ellipse parameters is shown in the equations below:
    S_0 = I
    S_1 = I p cos 2ψ cos 2χ
    S_2 = I p sin 2ψ cos 2χ
    S_3 = I p sin 2χ
Here Ip, 2ψ and 2χ are the spherical coordinates of the polarization state in the three-dimensional space of the last three Stokes parameters. Note the factors of two before ψ and χ corresponding respectively to the facts that any polarization ellipse is indistinguishable from one rotated by 180°, or one with the semi-axis lengths swapped accompanied by a 90° rotation. The Stokes parameters are sometimes denoted I, Q, U and V.
The Stokes parameters contain all of the information of the coherency matrix, and are related to it linearly by means of the identity matrix plus the three Pauli matrices:
    S_j = tr(σ_j Ψ),   equivalently   Ψ = ½ Σ_j S_j σ_j,
where σ_0 is the 2×2 identity matrix and σ_1, σ_2, σ_3 are the Pauli matrices, taken in a suitable order.
Mathematically, the factor of two relating physical angles to their counterparts in Stokes space derives from the use of second-order moments and correlations, and incorporates the loss of information due to absolute phase invariance.
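The linear relation between the coherency matrix and the Stokes parameters can be verified numerically. The sketch below uses one common ordering of the Pauli matrices; other texts permute them or change signs of S_1, S_2, S_3, so treat the specific assignment as an assumption.

    import numpy as np

    # sigma_0 is the identity; the order and signs of the others are convention-dependent
    sigma = [
        np.eye(2, dtype=complex),
        np.array([[1, 0], [0, -1]], dtype=complex),
        np.array([[0, 1], [1, 0]], dtype=complex),
        np.array([[0, -1j], [1j, 0]], dtype=complex),
    ]

    def stokes_from_jones(e):
        """Stokes parameters S_j = tr(sigma_j psi) for a pure state psi = e e^dagger."""
        e = np.asarray(e, dtype=complex).reshape(2, 1)
        psi = e @ e.conj().T
        return [np.trace(s @ psi).real for s in sigma]

    print("horizontal:", np.round(stokes_from_jones([1, 0]), 3))
    print("diagonal  :", np.round(stokes_from_jones(np.array([1, 1]) / np.sqrt(2)), 3))
    print("circular  :", np.round(stokes_from_jones(np.array([1, 1j]) / np.sqrt(2)), 3))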
The figure above makes use of a convenient representation of the last three Stokes parameters as components in a three-dimensional vector space. This space is closely related to the Poincaré sphere, which is the spherical surface occupied by completely polarized states in the space of the vector (S_1, S_2, S_3).
All four Stokes parameters can also be combined into the four-dimensional Stokes vector, which can be interpreted as four-vectors of Minkowski space. In this case, all physically realizable polarization states correspond to time-like, future-directed vectors.
Propagation, reflection and scattering
In this section the plane wave is written as its Jones vector multiplied by a propagation factor of the form e^{i(kz − ωt)}, where k is the wavenumber and positive z is the direction of propagation. As noted above, the physical electric vector is the real part of the Jones vector. When electromagnetic waves interact with matter, their propagation is altered. If this depends on the polarization states of the waves, then their polarization may also be altered.
In many types of media, electromagnetic waves may be decomposed into two orthogonal components that encounter different propagation effects. A similar situation occurs in the signal processing paths of detection systems that record the electric field directly. Such effects are most easily characterized in the form of a complex 2×2 transformation matrix called the Jones matrix; the transmitted Jones vector is obtained by multiplying the incident Jones vector by this matrix.
In general the Jones matrix of a medium depends on the frequency of the waves.
For propagation effects in two orthogonal modes, the Jones matrix can be written as
    T · diag(g_1, g_2) · T^{−1},
where g1 and g2 are complex numbers representing the change in amplitude and phase caused in each of the two propagation modes, and T is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors. For those media in which the amplitudes are unchanged but a differential phase delay occurs, the Jones matrix is unitary, while those affecting amplitude without phase have Hermitian Jones matrices. In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, any sequence of linear propagation effects, no matter how complex, can be written as the product of these two basic types of transformations.
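As a concrete, hedged illustration of this construction, the sketch below builds the Jones matrix of an ideal quarter-wave retarder whose linear modes are assumed to lie along the x and y axes (so T is the identity and g_1, g_2 are pure phases differing by 90°), and applies it to light linearly polarized at 45°, which emerges circularly polarized.

    import numpy as np

    def mode_jones_matrix(g1, g2, T=np.eye(2, dtype=complex)):
        """Jones matrix T @ diag(g1, g2) @ T^-1 for two propagation modes with complex gains g1, g2."""
        return T @ np.diag([g1, g2]).astype(complex) @ np.linalg.inv(T)

    # Ideal quarter-wave plate with axes along x and y: a 90-degree differential phase
    qwp = mode_jones_matrix(1.0, np.exp(1j * np.pi / 2))

    forty_five = np.array([1, 1], dtype=complex) / np.sqrt(2)   # linear polarization at 45 degrees
    out = qwp @ forty_five
    print(np.round(out, 3))    # proportional to (1, i)/sqrt(2): circular polarization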
Paths taken by vectors in the Poincaré sphere under birefringence. The propagation modes (rotation axes) are shown with red, blue, and yellow lines, the initial vectors by thick black lines, and the paths they take by colored ellipses (which represent circles in three dimensions).
Media in which the two modes accrue a differential delay are called birefringent. Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). An easily visualized example is one where the propagation modes are linear, and the incoming radiation is linearly polarized at a 45° angle to the modes. As the phase difference starts to appear, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) with an azimuth angle perpendicular to the original direction, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes (this is a consequence of the isomorphism of SU(2) with SO(3)). Examples for linear (blue), circular (red), and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarized images of whatever is viewed through them. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarization state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colors and rainbow-like effects.
Media in which the amplitude of waves propagating in one of the modes is reduced are called dichroic. Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". In terms of the Stokes parameters, the total intensity is reduced while vectors in the Poincaré sphere are "dragged" towards the direction of the favored mode. Mathematically, under the treatment of the Stokes parameters as a Minkowski 4-vector, the transformation is a scaled Lorentz boost (due to the isomorphism of SL(2,C) and the restricted Lorentz group, SO(3,1)). Just as the Lorentz transformation preserves the proper time, the quantity S_0² − S_1² − S_2² − S_3², which is proportional to det Ψ, is invariant within a multiplicative scalar constant under Jones matrix transformations (dichroic and/or birefringent).
In birefringent and dichroic media, in addition to writing a Jones matrix for the net effect of passing through a particular path in a given medium, the evolution of the polarization state along that path can be characterized as the (matrix) product of an infinite series of infinitesimal steps, each operating on the state produced by all earlier matrices. In a uniform medium each step is the same, and one may write
    e(z) = J e^{αDz} e(0),
where J is an overall (real) gain/loss factor. Here D is a traceless matrix such that αDe gives the derivative of e with respect to z (apart from the overall gain/loss). If D is Hermitian the effect is dichroism, while an anti-Hermitian D (whose exponential is unitary) models birefringence. The matrix D can be expressed as a linear combination of the Pauli matrices, with real coefficients giving Hermitian matrices and imaginary coefficients giving anti-Hermitian ones. The Jones matrix in each case may therefore be written, up to the overall factor J, with the convenient construction
    e^{β n·σ}   (dichroism)   or   e^{iφ m·σ}   (birefringence),
where σ is a 3-vector composed of the Pauli matrices (used here as generators for the Lie group SL(2,C)) and n and m are real 3-vectors on the Poincaré sphere corresponding to one of the propagation modes of the medium. The effects in that space correspond to a Lorentz boost of velocity parameter 2β along the given direction, or a rotation of angle 2φ about the given axis. These transformations may also be written as biquaternions (quaternions with complex elements), where the elements are related to the Jones matrix in the same way that the Stokes parameters are related to the coherency matrix. They may then be applied in pre- and post-multiplication to the quaternion representation of the coherency matrix, with the usual exploitation of the quaternion exponential for performing rotations and boosts taking a form equivalent to the matrix exponential equations above. (See Quaternion rotation)
In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on angle of incidence and the angle of refraction. In addition, if the plane of the reflecting surface is not aligned with the plane of propagation of the wave, the polarization of the two parts is altered. In general, the Jones matrices of the reflection and transmission are real and diagonal, making the effect similar to that of a simple linear polarizer. For unpolarized light striking a surface at a certain optimum angle of incidence known as Brewster's angle, the reflected wave will be completely s-polarized.
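Brewster's angle follows from the Fresnel equations as arctan(n2/n1) for light passing from refractive index n1 into n2; at that incidence the reflected p component vanishes and the reflection is purely s-polarized. The sketch below computes it for air-to-glass and air-to-water interfaces; the index values are typical assumed figures, not measurements.

    import math

    def brewster_angle_deg(n1, n2):
        """Angle of incidence (from the normal) at which reflected light is fully s-polarized."""
        return math.degrees(math.atan2(n2, n1))

    print(f"air to glass : {brewster_angle_deg(1.000, 1.515):.1f} degrees")   # about 56.6
    print(f"air to water : {brewster_angle_deg(1.000, 1.333):.1f} degrees")   # about 53.1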
Certain effects do not produce linear transformations of the Jones vector, and thus cannot be described with (constant) Jones matrices. For these cases it is usual instead to use a 4×4 matrix that acts upon the Stokes 4-vector. Such matrices were first used by Paul Soleillet in 1929, although they have come to be known as Mueller matrices. While every Jones matrix has a Mueller matrix, the reverse is not true. Mueller matrices are frequently used to study the effects of the scattering of waves from complex surfaces or ensembles of particles.
Examples and applications
In nature and photography
Light reflected by shiny transparent materials is partly or fully polarized, except when the light is perpendicular to the surface. It was through this effect that polarization was first discovered in 1808 by the mathematician Étienne-Louis Malus. A polarizing filter, such as a pair of polarizing sunglasses, can be used to observe this effect by rotating the filter while looking through it at the reflection off of a distant horizontal surface. At certain rotation angles, the reflected light will be reduced or eliminated. Polarizing filters remove light polarized at 90° to the filter's polarization axis. If two polarizers are placed atop one another at 90° angles to one another, there is minimal light transmission.
Polarization by scattering is observed as light passes through the atmosphere. The scattered light produces the brightness and color in clear skies. This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is easiest to observe at sunset, on the horizon at a 90° angle from the setting sun. Another easily observed effect is the drastic reduction in brightness of images of the sky and clouds reflected from horizontal surfaces (see Brewster's angle), which is the main reason polarizing filters are often used in sunglasses. Also frequently visible through polarizing sunglasses are rainbow-like patterns caused by color-dependent birefringent effects, for example in toughened glass (e.g., car windows) or items made from transparent plastics. The role played by polarization in the operation of liquid crystal displays (LCDs) is also frequently apparent to the wearer of polarizing sunglasses, which may reduce the contrast or even make the display unreadable.
The photograph on the right was taken through polarizing sunglasses and through the rear window of a car. Light from the sky is reflected by the windshield of the other car at an angle, making it mostly horizontally polarized. The rear window is made of tempered glass. Stress from heat treatment of the glass alters the polarization of light passing through it, like a wave plate. Without this effect, the sunglasses would block the horizontally polarized light reflected from the other car's window. The stress in the rear window, however, changes some of the horizontally polarized light into vertically polarized light that can pass through the glasses. As a result, the regular pattern of the heat treatment becomes visible.
Many animals are capable of perceiving some of the components of the polarization of light, e.g., linear horizontally polarized light. This is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarization, and is believed to have optimal polarization vision. The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization was thought to be perceived by pigeons, which was assumed to be one of their aids in homing, but research indicates this is a popular myth.
The naked human eye is weakly sensitive to polarization, without the need for intervening filters. Polarized light creates a very faint pattern near the center of the visual field, called Haidinger's brush. This pattern is very difficult to see, but with practice one can learn to detect polarized light with the naked eye.
The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See optical mineralogy for more details.
Polarization is principally of importance in chemistry due to circular dichroism and "optical rotation" (circular birefringence) exhibited by optically active (chiral) molecules. It may be measured using polarimetry.
The term "polarization" may also refer to the through-bond (inductive or resonant effect) or through-space influence of a nearby functional group on the electronic properties (e.g., dipole moment) of a covalent bond or atom. This concept is based on the formation of an electric dipole within a molecule, which is related to polarization of electromagnetic waves in infrared spectroscopy. Molecules will absorb infrared light if the frequency of the bond vibration is resonant with (identical to) the incident light frequency, where the molecular vibration at hand produces a change in the dipole moment of the molecule. In some nonlinear optical processes, the direction of an oscillating dipole will dictate the polarization of the emitted electromagnetic radiation, as in vibrational sum frequency generation spectroscopy or similar processes.
Polarized light does interact with anisotropic materials, which is the basis for birefringence. This is usually seen in crystalline materials and is especially useful in geology (see above). The polarized light is "double refracted", as the refractive index is different for horizontally and vertically polarized light in these materials. This is to say, the polarizability of anisotropic materials is not equivalent in all directions. This anisotropy causes changes in the polarization of the incident beam, and is easily observable using cross-polar microscopy or polarimetry. The optical rotation of chiral compounds (as opposed to achiral compounds that form anisotropic crystals), is derived from circular birefringence. Like linear birefringence described above, circular birefringence is the "double refraction" of circular polarized light.
In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. Synchrotron radiation is inherently polarised. It has been suggested that astronomical sources caused the chirality of biological molecules on Earth.
3D movies
Polarization is also used for some 3D movies, in which the images intended for each eye are either projected from two different projectors with orthogonally oriented polarizing filters or, more typically, from a single projector with time multiplexed polarization (a fast alternating polarization device for successive frames). Polarized 3D glasses with suitable polarized filters ensure that each eye receives only the intended image. Historical stereoscopic projection displays used linear polarization encoding because it was inexpensive and offered good separation. Circular polarization makes left-eye/right-eye separation insensitive to the viewing orientation; circular polarization is used in typical 3-D movie exhibition today, such as the system from RealD. Polarized 3-D only works on screens that maintain polarization (such as silver screens); a normal projection screen would cause depolarization which would void the effect.
Communication and radar
All radio transmitting and receiving antennas are intrinsically polarized, a property put to special use in radar. Most antennas radiate either horizontal, vertical, or circular polarization, although elliptical polarization also exists. The electric field, or E-plane, determines the polarization or orientation of the radio wave. Vertical polarization is most often used when it is desired to radiate a radio signal in all directions, such as for widely distributed mobile units. AM and FM radio use vertical polarization, while television uses horizontal polarization. Alternating vertical and horizontal polarization is used on satellite communications (including television satellites) to allow the satellite to carry two separate transmissions on a given frequency, thus doubling the number of channels a customer can receive through one satellite. Electronically controlled birefringent devices such as photoelastic modulators are used in combination with polarizing filters as modulators in fiber optics.
Navigation
Sky polarization has been exploited in the "sky compass", which was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g., under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th–11th centuries, before the arrival of the magnetic compass in Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century.
See also
Notes and references
- Principles of Optics, 7th edition, M. Born & E. Wolf, Cambridge University, 1999, ISBN 0-521-64222-1.
- Fundamentals of polarized light: a statistical optics approach, C. Brosseau, Wiley, 1998, ISBN 0-471-14302-2.
- Polarized Light, second edition, Dennis Goldstein, Marcel Dekker, 2003, ISBN 0-8247-4053-X
- Field Guide to Polarization, Edward Collett, SPIE Field Guides vol. FG05, SPIE, 2005, ISBN 0-8194-5868-6.
- Polarization Optics in Telecommunications, Jay N. Damask, Springer 2004, ISBN 0-387-22493-9.
- Optics, 4th edition, Eugene Hecht, Addison Wesley 2002, ISBN 0-8053-8566-5.
- Polarized Light in Nature, G. P. Können, Translated by G. A. Beerling, Cambridge University, 1985, ISBN 0-521-25862-6.
- Polarised Light in Science and Nature, D. Pye, Institute of Physics, 2001, ISBN 0-7503-0673-4.
- Polarized Light, Production and Use, William A. Shurcliff, Harvard University, 1962.
- Ellipsometry and Polarized Light, R. M. A. Azzam and N. M. Bashara, North-Holland, 1977, ISBN 0-444-87016-4
- Secrets of the Viking Navigators—How the Vikings used their amazing sunstones and other techniques to cross the open oceans, Leif Karlsen, One Earth Press, 2003.
- Dorn, R.; Quabis, S.; Leuchs, G. (Dec 2003). "Sharper Focus for a Radially Polarized Light Beam". Physical Review Letters 91 (23): 233901. Bibcode:2003PhRvL..91w3901D. doi:10.1103/PhysRevLett.91.233901.
- Subrahmanyan Chandrasekhar (1960) Radiative transfer, p.27
- M. A. Sletten and D. J. McLaughlin, "Radar polarimetry", in K. Chang (ed.), Encyclopedia of RF and Microwave Engineering, John Wiley & Sons, 2005, ISBN 978-0-471-27053-9, 5832 pp.
- Merrill Ivan Skolnik (1990) Radar Handbook, Fig. 6.52, sec. 6.60.
- Hamish Meikle (2001) Modern Radar Systems, eq. 5.83.
- T. Koryu Ishii (Editor), 1995, Handbook of Microwave Technology. Volume 2, Applications, p. 177.
- John Volakis (ed) 2007 Antenna Engineering Handbook, Fourth Edition, sec. 26.1. Note: in contrast with other authors, this source initially defines ellipticity reciprocally, as the minor-to-major-axis ratio, but then goes on to say that "Although [it] is less than unity, when expressing ellipticity in decibels, the minus sign is frequently omitted for convenience", which essentially reverts back to the definition adopted by other authors.
- Sonja Kleinlogel, Andrew White (2008). "The secret world of shrimps: polarisation vision at its best". PLoS ONE 3 (5): e2190. arXiv:0804.2162. Bibcode:2008PLoSO...3.2190K. doi:10.1371/journal.pone.0002190. PMC 2377063. PMID 18478095.
- "No evidence for polarization sensitivity in the pigeon electroretinogram", J. J. Vos Hzn, M. A. J. M. Coemans & J. F. W. Nuboer, The Journal of Experimental Biology, 1995.
- Hecht, Eugene (1998). Optics (3rd ed.). Reading, MA: Addison Wesley Longman. ISBN 0-19-510818-3.
- Clark, S. (1999). "Polarised starlight and the handedness of Life". American Scientist 87: 336–43. Bibcode:1999AmSci..87..336C. doi:10.1511/1999.4.336.
- Polarized Light in Nature and Technology
- Polarized Light Digital Image Gallery: Microscopic images made using polarization effects
- Polarization by the University of Colorado Physics 2000: Animated explanation of polarization
- MathPages: The relationship between photon spin and polarization
- A virtual polarization microscope
- Polarization angle in satellite dishes.
- Using polarizers in photography
- Molecular Expressions: Science, Optics and You — Polarization of Light: Interactive Java tutorial
- Electromagnetic waves and circular dichroism: an animated tutorial
- HyperPhysics: Polarization concepts
- Tutorial on rotating polarization through waveplates (retarders)
- SPIE technical group on polarization
- A Java simulation on using polarizers
- Antenna Polarization
- Animations of Linear, Circular and Elliptical Polarizations on YouTube | http://en.wikipedia.org/wiki/Polarized_light | 13 |
51 | Labs for The Most Complex Machine
xLogicCircuits Lab 1: Logic Circuits
IT IS POSSIBLE IN THEORY to construct a computer entirely out of transistors (although in practice, other types of basic components are also used). Of course, in the process of assembling a computer, individual transistors are first assembled into relatively simple circuits, which are then assembled into more complex circuits, and so on. The first step in this process is to build logic gates, which are circuits that compute basic logical operations such as AND, OR, and NOT. In fact, once AND, OR, and NOT gates are available, a computer could be assembled entirely from such gates. In this lab you will work with simulated circuits made up of AND, OR and NOT gates. You will be able to build such circuits and see how they operate. And you will see how simpler circuits can be combined to produce more complex circuits.
This lab covers some of the same material as Chapter 2 in The Most Complex Machine. The lab is self-contained, but many of the ideas covered here are covered in more depth in the text, and it would be useful for you to read Chapter 2 before doing the lab.
This lab includes the following sections:
- Logic and Circuits
- Building Circuits
- Complex Circuits and Subcircuits
- Circuits and Arithmetic
The lab uses an applet called "xLogicCircuits." Start the lab by clicking this button to launch the xLogicCircuits applet in its own window:
(For a full list of labs and applets, see the index page.)
Logic and Circuits
A logic gate is a simple circuit with one or two inputs and one output. The inputs and outputs can be either ON or OFF, and the value of a gate's output is completely determined by the values of its inputs (with the proviso that when one of the inputs is changed, it takes some small amount of time for the output to change in response). Each gate does a simple computation. Circuits that do complex computations can be built by connecting outputs of some gates to inputs of others. In fact, an entire computer can be built in this way.
In the xLogicCircuits applet, circuits are constructed from AND gates, OR gates, and NOT gates. Each type of gate has a different rule for computing its output value. Circuits are laid out on a circuit board. Besides gates, the circuit board can contain Inputs, Outputs, and Tacks. Later, we'll see that circuits can also contain other circuits. All these components can be interconnected by wires. To the left of the circuit board in the applet is a pallette. The pallette contains components available to be used on the circuit board. You can't usually see all the components at once, but there is a scroll bar that allows you to scroll through all the components on the pallette. The following illustration shows the part of the pallette that contains the six standard components, along with some comments and a small sample circuit:
(One thing you should note: Wires cannot connect to each other except at Tacks. Just because two wires cross each other on the circuit board does not mean that they are connected. That is, no signal will propagate from one of the wires to another. Wires can only carry signals between components such as gates, Tacks, Inputs, and Outputs.)
The applet that you launched above should start up showing a sample circuit called "Basic Gates." At the top of the circuit board are an AND gate, an OR gate, and a NOT gate. The gates are connected to some Inputs and Outputs. A more complicated circuit built from several gates occupies the bottom of the circuit board.
To see how the circuit works, you have to turn on the power. Power to the circuit board is turned on and off using the "Power" checkbox below the circuit board. The power is ON when the box is checked. Click on the Power checkbox now to turn on the power. (Why does the wire leading from the NOT gate come on when you do this?) When the power is on, you have control over the Inputs on the circuit board: you can turn an input ON and OFF by clicking on it. The circuit does the rest: signals from the Inputs propagate along wires, through gates and other components, and to the Outputs of the circuit. Try it with the sample circuit. If you have a problem, make sure the power is on and that you are clicking on an Input, not an Output.
You should check that the AND, OR, and NOT gates at the top of the circuit board have the expected behavior when you turn their inputs ON and OFF. You can also investigate the circuit in the bottom half of the logic board. Below the circuit board, to the left of the Power switch, you'll find a pop-up menu that can be used to control the speed at which signals propagate through the circuit. The speed is ordinarily set to "Fast." You can use the pop-up menu to change the speed to "Moderate" or "Slow" if you want to watch the circuit in slow motion. (For the most part, though, you probably want to leave the speed set to Fast.)
Logic gates and logic circuits are associated with mathematical logic, which is the study of the computations that can be done with the logical values true and false and with the logical operators and, or, and not. This association comes about when we think of ON as representing true and OFF as representing false. In that case, AND, OR, and NOT gates do the same computations as the operators and, or, and not.
Mathematical logic uses Boolean algebra, in which the letters A, B, C, and so on, are used to represent logical values. Letters are combined using the logical operators and, or, and not. For example,
(A and C) or (B and (not C))
is an expression of Boolean algebra. As soon as the letters in an expression are assigned values true or false, the value of the entire expression can be computed.
Every expression of boolean algebra corresponds to a logic circuit. The letters used in the expression are represented by the Inputs to the circuit. Each wire in the circuit represents some part of the expression. A gate takes the values from its input wires and combines them with the appropriate word -- and, or, or not -- to produce the label on its output wire. The final output of the whole circuit represents the expression as a whole. For example, consider the sample circuit from the applet. If the inputs are labeled A and B, then the wires in the circuit can be labeled as follows:
The circuit as a whole corresponds to the final output expression, (A and (not B)) or (B and (not A)). This expression in turn serves as a blueprint for the circuit. You can use it as a guide for building the circuit. The expression given earlier, (A and C) or (B and (not C)), corresponds to another sample circuit shown in the illustration above -- provided you label the inputs appropriately.
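Since every feedback-free circuit corresponds to a Boolean expression, its behaviour can be checked by tabulating that expression for every combination of input values. The short sketch below (written in Python, not part of the lab applet) models the gates as functions and prints the truth table of the sample circuit's output, (A and (not B)) or (B and (not A)).

    from itertools import product

    def and_gate(a, b): return a and b
    def or_gate(a, b):  return a or b
    def not_gate(a):    return not a

    def sample_circuit(a, b):
        """Output of the sample circuit: (A and (not B)) or (B and (not A))."""
        return or_gate(and_gate(a, not_gate(b)), and_gate(b, not_gate(a)))

    print(" A     B     output")
    for a, b in product([False, True], repeat=2):
        print(f"{a!s:5} {b!s:5} {sample_circuit(a, b)!s}")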
To sum up, given any expression of Boolean algebra, a circuit can be built to compute that expression. Conversely, any output of a logic circuit that does not contain a "feedback loop" can be described by a Boolean algebra expression. This is a powerful association that is useful in understanding and designing logic circuits. (Note: Feedback occurs when the output of a gate is connected through one or more other components back to an input of the same gate. Circuits with feedback are not covered in this lab. However, they have important uses that are covered in the next lab.)
You can build your own circuits in the xLogicCircuits applet. Click on the "Iconify" button at the bottom of the applet. This will put away the "Basic Gates" circuit, by turning it into an icon on the pallette. You'll have a clear circuit board to work on. As an exercise, try to make a copy of the sample circuit shown above, which corresponds to the Boolean expression
(A and C) or (B and (not C)).
To add a component to your circuit, click on the component in the pallette, hold down the mouse button, and use the mouse to drag the component onto the circuit board. Make sure you drag it completely onto the board. If you want a gate that is facing in a different direction, you have to rotate the gate in the pallette before you drag it onto the circuit board.
Once some components are on the board, you can draw wires between them using the mouse. Every wire goes from a source to a destination. To draw a wire, move the mouse over the source, click and hold the mouse button, move the mouse to the destination, and release the button. You must draw the wire from source to destination, not the reverse. If you release the mouse button when the wire is not over a legal destination, no wire will be drawn. When there are two possible destinations in one component -- such as the two inputs of an AND or OR gate -- make sure that you get the wire connected to the right one.
Circuit Inputs are valid sources for wires. So are Tacks. So are the outputs of gates. Valid destinations include circuit Outputs, inputs of gates, and Tacks. You can draw as many wires as you want from a source, but you can only draw one wire to a destination. (This makes sense because when the circuit is running, a destination takes its value from the single wire that leads to it. On the other hand, the value of a source can be sent to any number of wires that lead from it.)
Once a component is on the board, you can still move it to a new position, but you have to drag it using the right mouse button. Alternatively -- if you have a one-button mouse, for example -- you can drag a component by holding down the control key as you first press the mouse button on it.
You can delete components and wires that you've added by mistake. Just click on the component or wire to hilite it. Then click on the "Delete" button at the bottom of the applet. The hilited item will be deleted from the circuit board. If you delete a component that has wires attached, the attached wires will also be deleted along with the component.
If you delete an item or modify the circuit in some other way, you get one chance to change your mind. You can click on the "Undo" button to undo one operation. Only the most recent operation can be undone in this way.
There is one shortcut that you might find useful, if you like using Tacks. You can insert a Tack into an existing wire by double-clicking on the wire. If you double-click and hold the mouse down on the second click, you can drag the tack to a different position. (However, some browsers might not support double-clicks.)
After you build the practice circuit, you can clear the screen, since you won't need that circuit again in the rest of the lab. However, you'll get more practice building circuits in the Exercises at the end of the lab.
Complex Circuits and Subcircuits
In order to have circuits that display structured complexity, it is important to be able to build on previous work when designing new circuits. Once a circuit has been designed and saved, it should be possible to use that circuit as a component in a more complex circuit. A lot of the power of xLogicCircuits comes from the ability to use circuits as components in other circuits. Circuits used in this way are called subcircuits. A circuit that has been saved as an icon in the palette can simply be dragged into another circuit. (More exactly, a copy of the circuit is created and is added to the circuit board. The copy is a separate circuit; editing the original will not change the copy.) This ability to build on previous work is essential for creating complex circuits.
You can open a circuit from the palette to see what's inside or to edit it. Just click on the icon to hilite it, and then click on the "Enlarge" button. The icon will be removed from the palette and the circuit will appear on the circuit board. At the same time, any circuit that was previously on the circuit board will be iconified and placed on the palette. You should also be able to enlarge a circuit just by double-clicking on it. (By the way, you can change the name of the circuit on the circuit board by editing the text-input box at the top of the applet. This box contains the name that appears on the iconified circuit.)
The xLogicCircuits applet should have loaded several subcircuits for the palette. One of these circuits is called "Two or More". Open this circuit now. The circuit has three inputs. It turns its output ON whenever at least two of its inputs are ON. Try it. (Click on the inputs to turn them ON and OFF -- and don't forget to turn the Power on first.)
As a simple exercise in building circuits from subcircuits, use the "Two or More" circuit as part of an "At Most One" circuit. You want to build a circuit with three inputs that will turn on its output whenever zero or one of its inputs is on. Notice that this is just the opposite behavior from the "Two or More" circuit. That is, "At Most One" is ON whenever "Two or More" is not ON. This "logical" description shows that the "At Most One" circuit can be built from a NOT gate and a copy of the "Two or More" circuit. Begin by re-Iconifying the "Two or More" circuit, then drag a NOT gate and a copy of "Two or More" onto the empty circuit board. Add Inputs, Outputs, and wires as appropriate, then test your circuit to make sure that it works. If you like, you can give it a name and turn it into an icon.
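The logical relationship just described can also be checked outside the applet. The following sketch is only a model of the behavior; the function names are ours, and the expression used for two_or_more is one standard way to realize "at least two of three", not necessarily the applet's internal wiring:

    from itertools import product

    def two_or_more(a, b, c):
        # ON when at least two of the three inputs are ON
        return (a and b) or (a and c) or (b and c)

    def at_most_one(a, b, c):
        # "At Most One" is simply NOT "Two or More"
        return not two_or_more(a, b, c)

    # check the behavior over all eight combinations of inputs
    for a, b, c in product([False, True], repeat=3):
        assert at_most_one(a, b, c) == ([a, b, c].count(True) <= 1)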
Next, open the "4-Bit Adder" sample circuit. You'll see that it contains several copies of a subcircuit called "Adder." It's possible to look inside one of these circuits: Just click on the adder circuit to hilite it, and then click the "Enlarge" button. This does not remove the main circuit from the board -- it just lets you see an enlarged part of it. When you shrink the subcircuit back down to its original size, the main circuit is still there. In this case, you'll see that an "Adder" circuit contains two "Half Adder" subcircuits, which you can enlarge in their turn, if you want.
Circuits and Arithmetic
The "4-Bit Adder" circuit is an example of a logic circuit that can work with binary numbers. Circuits can work with binary numbers as soon as you think of ON as representing the binary value 1 (one) and OFF as representing the value 0 (zero). The "4-Bit Adder" can add two 4-bit binary numbers to give a five digit result. Here are some examples of adding 4-bit binary numbers:1011 1111 1111 1010 0111 0001 0110 0001 1111 0101 1010 0011 ----- ----- ----- ----- ----- ----- 10001 10000 11110 01111 10001 00100
The answer has 5 bits because there can be a carry from the left-most column. Each of the four "Adder" circuits in the "4-Bit Adder" handles one of the columns in the sum. You should test the "4-Bit Adder" to see that it gets the right answers for the above sums. The two four-bit numbers that are to be added are put on the eight Inputs at the top of "4-Bit Adder". The sum appears on the outputs at the bottom, with the fifth bit -- the final carry -- appearing on the output on the right. You should observe that it takes some time after you set the inputs for the circuit to perform its computations.
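For readers who like to see the arithmetic spelled out, here is a rough sketch of the column-by-column addition that the "4-Bit Adder" carries out. It models the arithmetic only, not the gate wiring, and all of the names are ours:

    def full_adder(a, b, carry_in):
        # one column: two bits plus a carry-in give a sum bit and a carry-out
        total = a + b + carry_in
        return total % 2, total // 2

    def add_4bit(x_bits, y_bits):
        # x_bits and y_bits are lists of four bits, most significant bit first
        carry = 0
        sum_bits = []
        for a, b in zip(reversed(x_bits), reversed(y_bits)):
            s, carry = full_adder(a, b, carry)
            sum_bits.insert(0, s)
        return [carry] + sum_bits      # the five output bits

    print(add_4bit([1, 0, 1, 1], [0, 1, 1, 0]))   # [1, 0, 0, 0, 1], i.e. 1011 + 0110 = 10001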
Exercise 1: One of the examples in this lab was the circuit corresponding to the expression
(A and (not B)) or (B and (not A)).
This circuit is ON if exactly one of its inputs is on. Another way to describe the output is to say that it is ON if "one or the other of the inputs is on, but not both of the inputs are on." This description corresponds to the Boolean expression
(A or B) and (not (A and B)).
Build a circuit corresponding to the second expression, and check that it gives the same output as the first circuit for every possible combination of inputs.
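As a side note, the two expressions themselves can be compared over every combination of inputs with a few lines of Python; this is only a sketch and does not replace building and testing the circuits in the applet:

    from itertools import product

    for A, B in product([False, True], repeat=2):
        first = (A and (not B)) or (B and (not A))
        second = (A or B) and (not (A and B))
        assert first == second    # the two expressions agree on all four combinations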
Exercise 2: When you checked "every possible combination of inputs" for the circuit in Exercise 1, how many combinations did you have to check? If you wanted to check that the "Two or More" example circuit works correctly for every possible combination of inputs, how many combinations would you have to check? Why? If you wanted to check that the "4-Bit Adder" gives the correct answer for each possible set of inputs, how many inputs are there to check? Why?
Exercise 3: Consider the following three Boolean algebra expressions:
(A and B and C) or (not B)
(not ((not A) and (not B)))
(not (A or B)) or (A and B)
For each expression, build a logic circuit that computes the value of that expression. Write a paragraph that explains the method that you apply when you build circuits from expressions. (One note: To build a circuit for an expression of the form (X and Y and Z), you should insert some extra parentheses, which don't change the answer. Think of the expression as ((X and Y) and Z), and build the circuit using two AND gates.)
Exercise 4: Given a logic circuit that does not contain any feedback loops, it is possible to find a Boolean algebra expression that describes each output of that circuit. Open the circuit called "For Ex. 4", which was one of the sample circuits in the applet's palette. This circuit has four inputs and three outputs. Assuming that the inputs are called A, B, C, and D, find the expression that corresponds to each of the three outputs. Also write a paragraph that discusses the procedure that you apply to find the Boolean expression for the output of a circuit.
Exercise 5: Consider the following input/output table for a circuit with two inputs and one output. The table gives the desired output of the circuit for each possible combination of inputs.
    Input 1   Input 2   Output
    ON        ON        ON
    ON        OFF       ON
    OFF       ON        OFF
    OFF       OFF       ON
Construct a circuit that displays the specified behavior. You have to build one circuit that satisfies all four rows of the table. Section 2.1 of The Most Complex Machines gives a general method for constructing a circuit specified by an input/output table. You can apply that method, or you can just try to reason logically about what the table says. Write a paragraph discussing how you found your circuit.
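The general idea behind a table-driven construction (whatever the exact presentation in Section 2.1 may be) is the usual sum-of-products recipe: for each row of the table whose output is ON, write an AND term that matches that row exactly, then OR the terms together. A rough sketch, using the table above; the names are ours:

    from itertools import product

    table = {   # (Input 1, Input 2) -> Output, copied from the table above
        (True, True): True,
        (True, False): True,
        (False, True): False,
        (False, False): True,
    }

    def sum_of_products(a, b):
        # one AND term for each row whose output is ON, ORed together
        return (a and b) or (a and (not b)) or ((not a) and (not b))

    for a, b in product([False, True], repeat=2):
        assert sum_of_products(a, b) == table[(a, b)]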
Exercise 6: One of the examples in this lab was a circuit called "Two or More", which checks whether at least two of its inputs are on. Consider the problem of finding a similar circuit with four inputs. The output should be on if any two (or more) of the inputs are on. A circuit that does this can be described by the Boolean expression:
(A and (B or C or D)) or (B and (C or D)) or (C and D)
Use this expression to construct a "Two or More" circuit with four inputs. Try to understand where this expression comes from. Why does it make sense? (Hint: Think of two cases, one case where the input A is ON, and the other case where the input A is OFF.) Write a paragraph explaining this. The form of this expression can be extended to handle circuits with any number of inputs. Write down a logical expression that describes a circuit with five inputs that turns on its output whenever two or more of the inputs are on.
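Where the expression comes from is something the exercise asks you to think through, but it is easy to confirm that it behaves as claimed. The following check (names are ours) runs through all sixteen input combinations:

    from itertools import product

    def two_or_more_of_four(A, B, C, D):
        return (A and (B or C or D)) or (B and (C or D)) or (C and D)

    for A, B, C, D in product([False, True], repeat=4):
        assert two_or_more_of_four(A, B, C, D) == ([A, B, C, D].count(True) >= 2)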
Exercise 7: "The structure of the 4-Bit Adder circuit reflects the structure of the compution it is designed to perform." In what sense is this true? What does it mean? How does this relate to problem-solving in general?
Exercise 8: Write a short essay (of several paragraphs) that explains how subcircuits are used in the construction of complex circuits and why the ability to make and use subcircuits in this way is so important.
Exercise 9: Build a "Select" circuit, as shown in this illustration:
The circuit has two inputs, A and B, at the top. It also has two inputs, C and D, on the left, which serve as control wires. (The only thing that makes an input of a circuit a control wire is that the designer of the circuit says it is, but in general control wires are thought of as controlling the circuit in some way.) The control wires determine which of the inputs, A or B, gets to the output. In order to do this exercise, you should think "logically." That is, try to describe the output of the circuit using a Boolean expression involving A, B, C, and D. Then use that expression as a blueprint for the circuit. Test your circuit and save it for use in Exercise 10.
Exercise 10: For this exercise, you should build a "Mini ALU" that can do either addition or subtraction of four-bit binary numbers. An Arithmetic Logic Unit, or ALU, is the part of a computer that does the basic arithmetic and logical computations. It takes two binary numbers and computes some output. The interesting thing is that an ALU can perform several different operations. It has control wires to tell it which operation to perform. You will build an ALU that can perform either addition or subtraction of four-bit binary numbers. It has two control wires. Turning on one of these will make it do an addition; turning on the other will make it do a subtraction. You should construct the circuit as specified by this illustration:
Now, an interesting thing about an ALU is that it actually performs all the computations that it knows how to do. The control wires just control which of the answers get to the outputs of the ALU. To make your "Mini ALU," you can start with the "4-Bit Adder" and "4-Bit Minus" circuits, which were provided to you in the applet's palette. (The four-bit subtraction circuit has only four outputs, since for subtraction the carry bit from the leftmost adder does not provide any useful information. You don't have to worry about how the Minus circuit works -- you don't even have to understand how negative numbers are represented in binary.)
Start by placing a "4-Bit Adder" and a "4-Bit Minus" circuit on an empty circuit board, along with the eight inputs at the top of the circuit. These can be connected as shown:
(Note: To change the size and shape of a subcircuit, click the circuit to hilite it. When a circuit is hilited, it is surrounded by a rectangle with a little square handle in each corner. You can click-and-drag one of these handles to adjust the size of the circuit.)
All you have to do is construct the rest of the circuit so that the control wires can control whether the answer from the "4-Bit Adder" or the answer from the "4-Bit Minus" gets through to the Outputs of the ALU. One way to do this is to use four copies of the "Select" circuit that you built for Exercise 9.
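As a rough model (not the circuit itself, and not a substitute for Exercise 10), the "compute both answers and let the control wires decide which one reaches the outputs" idea can be sketched as follows. Treating the subtractor as arithmetic modulo 16 is our assumption about what "4-Bit Minus" computes:

    def mini_alu(x, y, add_control, subtract_control):
        # both answers are always computed, just as in the real ALU
        sum_result = (x + y) % 32       # five output bits from the 4-Bit Adder
        diff_result = (x - y) % 16      # four output bits from the 4-Bit Minus
        # the control wires determine which answer reaches the outputs
        if add_control:
            return sum_result
        if subtract_control:
            return diff_result

    print(mini_alu(11, 6, add_control=True, subtract_control=False))    # 17
    print(mini_alu(11, 6, add_control=False, subtract_control=True))    # 5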
This is one of a series of labs written to be used with The Most Complex Machine: A Survey of Computers and Computing, an introductory computer science textbook by David Eck. For the most part, the labs are also useful on their own, and they can be freely used and distributed for private, non-commercial purposes. However, they should not be used as a formal part of a course unless The Most Complex Machine is also adopted for use in that course.
--David Eck ([email protected]), Summer 1997
Science (Human Eye & Colourful World) SS2
Q. 1. What is the speed of light in vacuum?
Q. 2. Which image can be obtained on the screen?
Q. 3. What sign convention has been given to the focal length of (1) a concave mirror (2) a convex mirror?
Q. 4. What is the significance of the +ve sign of magnification?
Q. 5. If the radius of curvature of a concave mirror is 20cm, what is its focal length?
Q. 6. Define angle of refraction?
Q. 7. Define one dioptre?
Q. 8. What is lens formula?
Q. 9. Write down the magnification formula for a lens in terms of object distance and image distance. How does it differ from the corresponding formula for a mirror?
Q. 10. Define refractive index?
Q. 11. State two effects caused by the refraction of light?
Q. 12. Give the laws of reflection?
Q. 13. Give an example of each type of image?
Q. 14. For a plane mirror the magnification is m = +1. What do the value 1 and the +ve sign of m signify?
Q. 15. Describe the nature of the image formed when the object is placed at a distance of 20cm from a concave mirror of focal length 10cm?
Q. 16. Define reflection, incident light, reflected light, angle of incidence and angle of reflection?
Q. 17. Define the centre of curvature, radius of curvature, pole, principle axis of a spherical mirror and focus?
Q. 18. Why does a concave mirror have a real principal focus?
Q. 19. Draw a labelled diagram showing how a plane mirror forms an image. Also write the characteristics of the image?
Q. 20. Explain lateral inversion?
Q. 21. Give the various rules for obtaining images formed by a concave mirror?
Q. 22. Explain the Cartesian sign convention for a spherical mirror?
Q. 23. An object 1cm high is placed on the axis and 15cm from a concave mirror of focal length 10cm. Find the position, nature, magnification and size of the image?
Q. 24. An object 5 cm in length is placed at a distance of 20 cm in front of a convex mirror of radius of curvature 30 cm. Find the position of the image, its nature and size?
Q. 25. An object placed 20 cm in front of a mirror is found to have an image 15 cm (a) in front of it, (b) behind it. Find the focal length of the mirror and the kind of mirror in each case?
Q. 26. Show by ray diagram the formation of an image by a convex mirror?
Q. 27. What do you mean by colour mixing by subtraction?
Q. 28. What will happen to a ray of light when it falls normally on a surface?
Q. 29. What will happen to a ray of light when it travels from a denser medium to a rarer medium?
Q. 30. What is power of a lens?
Q. 31. Why do planets not twinkle?
Q. 32. What determines the colour of an object in daylight?
Q. 33. How is the image formed by a convex lens?
Q. 34. Explain total internal reflection?
Q. 35. A concave lens of focal length 15 cm forms an image 10 cm from the lens. How far should the object be placed from the lens? Draw the ray diagram.
Science (Heredity & Evolution) SS2
Q. 1. Describe the structure of chromosomes?
Q. 2. Give the functions of chromosomes?
Q. 3. How do Mendel's experiments show that traits may be dominant or recessive?
Q. 4. What are the different ways in which individuals with a particular trait may increase in population?
Q. 5. Why are traits acquired during the lifetime of an individual not inherited?
Q. 6. How do Mendel's experiments show that traits are inherited independently?
Q. 7. How is sex determined in human beings?
Q. 8. How does the creation of variations in a species promote survival?
Q. 9. Why are the small number of surviving tigers a cause of worry from the point of view of genetics?
Q. 10. How is the equal genetic contribution of male and female parents ensured in the progeny?
Science (How do Organisms Reproduce)
Q. 1. Define reproductions?
Q. 2. What is the importance of DNA copying in reproduction?
Q. 3. Where is the male gamete formed in a flowering plant?
Q. 4. How many male gametes are present in one pollen grain?
Q. 5. What is puberty?
Q. 6. What is gestation?
Q. 7. What is fission?
Q. 8. Does the anther contain sepals, ovules or pollen grains?
Q. 9. Define vegetative propagation. Name three methods of vegetative propagation used by gardeners to grow plants?
Q. 10. What are the advantages of vegetative propagation?
Q. 11. Why is DNA copying an essential part of the process of reproduction?
Q. 12. How is the process of pollination different from fertilization?
Q. 13. What are changes seen in the girls at the time of puberty?
Q. 14. What are the different methods of contraception?
Q. 15. Distinguish between asexual reproduction and sexual reproduction?
Q. 16. Describe the menstrual cycle in female?
Q. 17. Why is variation beneficial to the species but not necessarily for the individual?
Q. 18. How does the embryo get nourishment inside the mother's body?
Q. 19. What are the functions performed by the testis in human beings?
Science (Control & coordination)
Q. 1. Why do living organisms show movement?
Q. 2. Name the organs of our peripheral nervous system?
Q. 3. What happens at the synapse between two neurons?
Q. 4. What are nastic movements?
Q. 5. Give an example of a plant hormone that promotes growth?
Q. 6. Which signals will get disturbed in case of a spinal cord injury?
Q. 7. What is the role of brain in reflex action?
Q. 8. Why are some patients of diabetes treated by giving injections of insulin hormone?
Q. 9. What is the need for a system of control and co-ordination in an organism?
Q. 10. Differentiate between endocrine glands and exocrine glands?
Q. 11. Write some characteristics of hormones of animals?
Q. 12. Compare and contrast nervous and hormonal mechanisms for control and co-ordination in the animals?
Q. 13. Give various functions performed by plant hormones?
Q. 14. What is reflex action and reflex arc? Explain with the help of examples:
Q. 15. Give the various functions of brain?
Q. 16. What is the difference between reflex action and walking?
Science (Human Eye & Colourful World) SS2
Q. 1. What is the least distance of distinct vision of a normal human eye?
Q. 2. Define power of accommodation?
Q. 3. What is the type of lens used for correcting myopia (short-sightedness)?
Q. 4. What is cataract?
Q. 5. What do you mean by the angle of deviation in a prism?
Q. 6. What do you mean by the scattering of light or the Tyndall effect?
Q. 7. Why does the sky appear dark instead of blue to an astronaut.
Q. 8. What is presbyopia?
Q. 9. How do we see colours?
Q. 10. What is meant by the far point, the near point and the least distance of distinct vision?
Q. 11. Why does it take some time to see objects in a dim room when you enter the room from bright sunlight outside?
Q. 12. A person with defective eye-vision is unable to see objects nearer than 1.5 m. He wants to read books at a distance of 30 cm. Find the nature, focal length and power of the lens he needs in his spectacles?
Q. 13. Why do the planets not appear to twinkle like the stars?
Q. 14. Why does the sky appear blue?
Q. 15. Why does the sun appear red at sunrise and sunset?
Q. 16. What is meant by saying that “potential difference between two points is one v (volt)”?
Q. 17. Why are copper and aluminium wires usually used for electricity transmission?
Q. 18. Why is an ammeter likely to be burnt out if you connect it in parallel?
Q. 19. (a) Why is the series arrangement not found satisfactory for house lights?
(b) Why is the resistance of a given wire inversely proportional to its cross-sectional area?
Q. 20. A 40 watt lamp requires 0.182 A of current at 220 volts, while a 60 watt lamp requires 0.272 A of current on a 220 volt line. How many amperes of current will flow through each lamp?
Q. 21. Derive the formula for the heat generated by an electric current?
Q. 22. Define and explain ohm’s law?
Q. 23. Define electric power. Write the S.I. unit of electric power and define it also. Derive the formula for electric power.
Q. 24. Why are coils of electric toasters and electric irons made of an alloy rather than a pure metal?
Science (Physics) SS2
Q. 1. If a ray of light is incident on a plane mirror at an angle I, by what angle is it deviated?
Q. 2. The refractive indices of glass for yellow, green and red colours are μy, μg and μr respectively. Rearrange these symbols in the increasing order of their values.
Q. 3. A girl in the mirror "laughing house" finds her face appearing highly magnified, the lower portion of her body of the same size but laterally inverted, and the middle portion of her body highly diminished in size. Guess the design of the mirror?
Q. 4. How does the focal length of a convex lens change if
(a) monochromatic red light is used instead of blue light?
(b) it is placed in water?
Q. 5. Is it possible for a lens to act as a converging lens in one medium and a diverging lens in other? How?
Q. 6. An object is kept in front of a concave mirror of focal length 15 cm. The image formed is three times the size of the object. Calculate the two possible distances of the object from the mirror?
Q. 7. An object is placed at a distance of one and a half times the focal length of a concave mirror. Find the position of the image in terms of f. Also find the magnification and nature of the image formed by the mirror.
Q. 8. A concave mirror produces three times magnified real image of an object, placed 10cm from it. Where is the image located?
Q. 9. The image of a needle placed at 45 cm from a lens is formed on a screen placed at 90 cm on the other side of the lens. Find the displacement of the image when the object is moved 5cm from the lens.
Q. 10. Where should an object be placed from a convex lens of f=20cm so as to obtain an Image of magnification 2.
Q. 1. Draw the ray diagram for an object placed at 2F in front of a convex mirror.
Q. 2. Explain the different types of energy that can be obtained from
Q. 3. What is the principle used in electric motors? Discuss the role of split rings in electric motors.
Q. 4. The magnification of an object is -1. What kind of spherical mirror is used and what is the position of the object?
Q. 5. Hydrogen has been used as a rocket fuel. Is it cleaner than CNG? Why or why not?
Q. 6. Explain the construction, working, advantages and disadvantages of using a solar cooker, with a diagram.
Q. 7. How has the traditional use of wind and water energy been modified for our convenience?
Q. 8. Find the equivalent resistance between A and B and find the current in the 5 ohm resistance.
Q. 9. Why is dispersion of light found in the case of a prism and not in a glass slab? From the diagram, give the number that corresponds to the colour of the danger signal. Why is only that colour used as a danger signal?
Q. 10. A convex lens has a focal length of 50 cm. What is its power? An object is kept at a distance of 75 cm from this lens. Where is the image formed? How does the size of the image compare with that of the object?
Q. 11. Explain the household wiring system with a diagram. Explain the terms (a) overloading (b) short-circuiting (c) earthing (d) fuse.
Q. 1. Explain double fertilization in plants?
Q. 2. What is the role of a catalyst in a chemical reaction?
Q. 3. What are isomers?
Q. 4. What is meant by 1 ohm?
Q. 5. Write the chemical name and formula for Plaster of Paris?
Q. 6. What is meant by monohybrid and dihybrid cross?
Q. 7. State and explain Ohm's law.
Q. 8. State one limitation of the usage of hydro energy?
Q. 9. What is the principle of an electric motor?
Q. 10. Why is DNA copying essential for reproduction?
Q. 11. What is meant by speciation?
Q. 12. What is the main function of a nephron?
Q. 13. Define rusting and rancidity?
Q. 14. State the functions of the small intestine?
Q. 15. Which class of compounds gives a positive Fehling's test?
Science (Periodic Classification of Elements) SS2
Q. 1. Name the scientist who tried to classify the elements in the group of triads.
Q. 2. Define modern periodic law.
Q. 3. How many groups and periods are present in modern periodic table?
Q. 4. Define atomic size?
Q. 5. Name two elements you would expect to show chemical reactions similar to magnesium. What is the basis of your choice?
Q. 6. Nitrogen (N, atomic number 7) and phosphorus (P, atomic number 15) belong to group 15 of the periodic table. Write the electronic configuration of these elements. Which of these two will be more electronegative? Why?
Q. 7. Write the triads as formed by dobereiner?
Q. 8. Define and explain Mendeleev's periodic law?
Q. 9. Explain the limitations or demerits of Mendeleev's periodic table?
Q. 10. Define and explain modern periodic law?
Q. 11. Write the number of elements present in the periods corresponding to the K, L and M shells of the modern periodic table and give reasons for it.
Q. 12. An atom has the electronic configuration 2, 8, 7. (a) What is the atomic number of this element?
(b) To which of the following elements would it be chemically similar?
Q. 13. How does the electronic configuration of an atom relate to its position the modern periodic table?
Q. 14. Explain the trends in the modern periodic table for various properties like valency, atomic size, and the metallic and non-metallic properties of the atoms of elements.
Q. 15. What were the limitation of dobereiner’s classification?
Q. 16. Lithium, sodium and potassium are all metals that react with water to liberate hydrogen gas. Is there any similarity in the atoms of these elements?
Helium is an unreactive gas and neon is a gas of extremely low reactivity. Why?
Q. 17. Nitrogen (atomic number 7) and phosphorus (atomic number 15) belong to group 15 of the periodic table. Write the electronic configuration of these two elements. Which of these will be more electronegative? Why?
Q. 18. In the modern periodic table, calcium (atomic number 20) is surrounded by elements with atomic numbers 12, 19, 21 and 38. Which of these have physical and chemical properties resembling calcium?
Science (Carbon and its Compound)
Q. 1. What is a covalent bond?
Q. 2. Why are carbon compounds not able to conduct electricity through them?
Q. 3. What are fullerenes?
Q. 4. What are saturated hydro-carbons?
Q. 5. What are unsaturated hydro-carbons?
Q. 6. What are structural isomers?
Q. 7. Draw the structural formulae of two isomers of butane?
Q. 8. What are alkanes?
Q. 9. What are alkenes?
Q. 10. What is homologous series?
Q. 11. Write the general scientific names of alcohols and carboxylic acids.
Q. 12. What are detergents?
Q. 13. What is hydrogenation?
Q. 14. Why can carbon atoms not form ionic bonds in their compounds?
Q. 15. Draw the structures of diamond and graphite?
Q. 16. What type of compounds show addition reactions? Explain with an example.
Q. 17. Explain the cleaning action of soaps?
Q. 18. Differentiate between soaps and detergents.
Q. 19. Write the important properties of ethanoic acid.
Q. 20. A mixture of oxygen and ethyne is burnt for welding. Can you tell why a mixture of ethyne and air is not used?
Q. 21. What are oxidizing agents?
Science (Carbon and its compounds) SS2
Q. 1. Identify the functional group present in the compound CH3NH2 and name it. (1 mark)
Q. 2. Write the name and chemical formula of the organic acid present in vinegar. (1 mark)
Q. 3. A compound has molecular formula C2H6O. It is usable as a fuel. Name it. (1 mark)
Q. 4. An organic compound A on heating with acetic acid and conc. Sulphuric acid gave a pleasant smelling compound B. What is the nature of compound A? (1 mark)
Q. 5. How is ethanol prepared from ethene? Write the chemical equation for the reaction involved. What is meant by denatured alcohol? What is the need to denature alcohol? (2 marks)
Q. 6. What is the unique property of carbon atom? How is this property helpful to us? (2 marks)
Q. 7. Complete and balance the following equations: (2 marks)
(i) C2H5OH + O2 --(combustion)-->
(ii) C2H4 + H2O --(H3PO4)-->
(iii) CH3COOH + CH3OH -->
Q. 8. An organic compound A of molecular formula C2H6O on oxidation gives an acid B with the same number of carbon atoms as in the molecule A. Compound A is often used for sterilization of skin by doctors. Name the compounds A and B. Write the chemical equation involved in the formation of B from A. (2 marks)
Q. 9. Give reasons: (2 marks)
(a) Common salt (sodium chloride) is added during the preparation of soap. (b) Soap is not suitable for washing clothes when the water is hard.
Q. 10. Explain the nature of the covalent bond using the bond formation in CH3Cl. (3 marks)
Q. 11. Write the chemical equation representing the preparation of ethanol from ethene.
Name the product obtained when ethanol is oxidized by either chromic anhydride or alkaline potassium permanganate. Give an example of an esterification reaction. (3 marks)
Q. 12. What are the two properties of carbon which lead to the formation of a large number of carbon compounds? What is a homologous series? Explain with an example. By how many carbon atoms and hydrogen atoms do any two adjacent homologues differ? Give a test that can be used to differentiate chemically between butter and cooking oil. (5 marks)
Science (Chemistry) SS2
Q. 1. What are Newlands' octaves? Give their drawbacks.
Q. 2. What are the differences between Mendeleev's and the modern periodic tables?
Q. 3. How many elements are there in 1st period?
Q. 4. Name two elements which show chemical characteristics similar to Mg.
Q. 5. What are the defects of Mendeleev's periodic table?
Q. 6. On which factors does the size of an atom depend? Explain.
Q. 7. How do the following change along a period? (a) Metallic character (b) Valency (c) Electron affinity (d) Atomic radii
Q. 8. An element A has atomic number 14 and another element B has atomic number 17. Which one has (a) more metallic character (b) greater atomic size? Name one element having characteristics similar to element A and one similar to B.
Q. 9. Which among Cl, F, Br and I has (a) the largest atomic size (b) the highest reactivity?
Q. 10. Why are noble gases inert?
Q. 11. Scandium, gallium and germanium were discovered later, but Mendeleev left gaps for them. What names did he give to these elements?
Q. 12. What special names are given to groups 1, 2, 17 and 18?
Q. 13. Observe the following table and answer the questions below: (a) State whether A is a metal or a non-metal. (b) Which is more reactive, A or C? (c) Will C be larger or smaller in size than B? (d) Which type of ion, cation or anion, will be formed by A?
Q. 14. Elements X and Y of atomic numbers 12 and 13 are taken. Give their group and period numbers on the basis of the modern periodic table. Name (a) three elements having 1 valence electron (b) two elements having a positive valency of 2.
Lithium, sodium and potassium are metals that react with water to give hydrogen. What is the similarity among the atoms of these metals?
Q. 15. (a) Which element has two shells, both of which are completely filled with electrons? (b) Which element has the electronic configuration 2, 8, 7? (c) 3 to 12. (d) Which element forms only covalent compounds? (e) Which element is a metal with valency 2? (f) Which element is a non-metal with a valency of 3? (g) Out of D and E, which one has a bigger atomic radius, and why? (h) Write a common name for the family of elements C and F?
Science (Sources of Energy) SS1
Q. 1. Why there is a need for energy?
Q. 2. Give some sources of energy?
Q. 3. Give the main categories of source of energy?
Q. 4. Which of the following are renewable and which are non-renewable sources of energy: coal, wind, tides, sun, petrol, biomass, CNG, hydro-energy?
Q. 5. What is solar energy?
Q. 6. Which part of the sun’s energy is responsible for drying clothes?
Q. 7. Which component of the sun's energy is responsible for skin cancer?
Q. 8. What are semi-conductors?
Q. 9. Define ocean thermal energy?
Q. 10. Define geothermal energy?
Q. 11. Define bio-mass? Give its three examples.
Q. 12. Define anaerobic degradation?
Q. 13. Explain why a reflector is used in a solar cooker?
Q. 14. What causes a wind to blow?
Q. 15. Define solar constant?
Q. 16. Explain how the high temperature is maintained inside a solar cooker?
Q. 17. Distinguish between renewable and non-renewable sources of energy?
Q. 18. What is the use of black painted surface in solar heating device?
Q. 19. Charcoal is better fuel than wood and coal, still its use is discouraged. Why?
Q. 20. Why is bio-gas a better fuel than animal dung-cakes?
Q. 21. Why is the sun called the ultimate source of fossil fuels?
Q. 22. Write a short note on the 'greenhouse effect'?
Q. 23. What are the advantages of nuclear energy?
Q. 24. Why are we looking for alternative sources of energy?
Q. 25. Define bio-degradable wastes?
Q. 26. What protects us from the harmful effects of the ultraviolet rays of the sun?
Q. 27. Define food web?
Q. 28. Define producers and consumers and give their examples?
Q. 29. Why do we say that the flow of energy in an ecosystem is unidirectional?
Q. 30. What will happen if we kill all the organisms of one trophic level of a food chain?
Q. 31. What is the Ganga Action Plan?
Q. 32. Define sustainable development?
Q. 33. What is the main aim of the conservation of forests?
Q. 34. Who are stakeholders?
Q. 35. What do you mean by water harvesting?
Q. 36. What do you mean by chipko movement?
Q. 37. Why do you think there should be equitable distribution of resources?
Science (Magnetic Effects of Electric Current) SS1
Q. 1. What is a magnetic field?
Q. 2. What are the magnetic lines of force?
Q. 3. What is an electric motor?
Q. 4. What is a solenoid?
Q. 5. Which effect of electric current is utilized in the working of an electric motor?
Q. 6. What is the frequency of a.c. (alternating current) in India?
Q. 7. On what principle is an a.c generator based?
Q. 8. Why don't two magnetic lines of force intersect each other?
Q. 9. Name some source of direct current?
Q. 10. When does an electric short circuit occur?
Q. 11. What is the usual colour code followed for connecting live, neutral and earth wires? Why is it so important?
Q. 12. What is electromagnetic induction?
Q. 13. State the rule to determine the direction of magnetic field produced around a current carrying conductor?
Q. 14. What is the role of a fuse in the electric circuits?
Q. 15. Explain direct and alternating current?
Q. 16. What is the function of an earth wire? Why is it necessary to earth metallic appliances?
Q. 17. Draw magnetic field lines around a bar magnet?
Q. 18. Why does a compass needle get deflected when brought near a bar magnet?
Q. 19. List the properties of magnetic lines of force?
Q. 20. What is the principle of an electric motor?
Q. 21. State the principle of an electric generator.
Q. 22. Two circular coils A and B are placed close to each other. If the current in coil A is changed, will some current be induced in coil B? Give reason.
Q. 1. Name one main ore of aluminium.
Q. 2. Why does carbon tetrachloride (CCl4) not conduct electricity?
Q. 3. Which group of elements was missing in Mendeleev's Periodic Table?
Q. 4. State one point of similarity between the human eye and the camera.
Q. 5. The path of a beam of light passing through a true solution is not visible. Why?
Q. 6. The elements A, B and C are present in the same period of Periodic Table. Their atomic radii are 172 pm; 106 pm and 136 pm, respectively. Arrange these elements in increasing order of their atomic number in the period.
Q. 7. What type of bond is present in water molecule? Explain the formation of bonds between hydrogen and Oxygen to form water molecule.
Q. 8. How many electrons are associated with (i) one coulomb of charge? (ii) the flow of a current of 16 mA for 100 s? (Given: charge of an electron = 1.6 x 10^-19 C)
Q. 9. Two 'set-ups' are shown below. What are we likely to observe if the (i) magnet in set-up (A) is kept stationary within the coil? (ii) key in set-up (B) is just 'plugged in'?
Q. 10. Draw the pattern of the field lines of the magnetic field produced by (i) a straight current-carrying wire (ii) a circular coil (iii) a solenoid.
Q. 11. How many 440 Ω resistors should be connected in parallel to draw a total current of 10 A on a 220 V line?
Q. 12. Discuss the reaction of sodium, magnesium, iron and gold with water.
Q. 13. (i) Give an example of the reaction between a non-metal oxide and sodium hydroxide. Write the balanced chemical equation also.
(ii) The pH of a solution is 4.8. What is its nature and what will be its action on blue litmus?
Q. 14. The image formed by a concave mirror is observed to be: (I) virtual, erect and larger than the object; (II) real, inverted and larger than the object; (III) real, inverted and of the same size as the object; (IV) real, inverted and smaller than the object. Where is the position of the object in each of these cases? Draw ray-diagrams to justify your choices for cases (I) and (III).
Or
A beam of white light is made to fall on the three set-ups shown here. What are we likely to observe in each of these three cases? How can we understand the similarity in the observations of cases (I) and (III)? Name the seven colours linked with the observation in case (II) for sunlight.
Q. 15. Write a reaction in which a gas is evolved. What are oxidation-reduction reactions? Write the reaction of copper oxide with hydrogen and identify the species which is reduced. What is rancidity?
Or
How are chemical reactions classified? Describe the various types of reactions with the help of an example in each case.
Q. 1. What is the shape of V-I graph for an ohmic resistance?
Q. 2. Why does a compass needle get deflected when brought near a bar magnet?
Q. 3. The image formed by a convex lens is always real. Is it true?
Q. 4. A concave mirror produces a two times magnified real image of an object placed 10 cm in front of it. Where is the image located?
Q. 5. Name two energy sources that you would consider to be renewable. Give reasons for your choices.
Q. 6. Will current flow more easily through a thick wire or thin wire of the same material when connected to the same source? Why?
Q. 7. Name two safety measures commonly used in electrical circuits and appliances. An electric oven of 2 kW power rating is operated in a domestic circuit (220 V) that has a current rating of 5 A. What result do you expect? Explain.
Q. 8. Show how will you connect three resistors, each of resistance 6 ohm, so that the combination has a resistance of (i) 9 ohm (ii) 2 ohm. Justify your answer.
Q. 9. What is myopia? List two causes for the development of myopia. Describe with ray diagrams, how this defect may be corrected by using spectacles.
Q. 1. Which is bigger, a coulomb or charge on an electron? How many electric charges form one coulomb of charge?
Q. 2. A plastic comb run through one's dry hair attracts small bits of paper. Why? What happens if the hair is wet or if it is raining?
Q. 3. What is an electric line of force? What is its importance?
Q. 4. An ebonite rod held in hand can be charged by rubbing with flannel but a copper rod can’t be charged like this. Why?
Q. 5. What is the value of the charge on an electron in S.I. units? Is a charge less than this value possible?
Q. 6. Define and explain quantisation of electric charge.
Q. 7. If a body gives out 10^9 electrons every second, how much time is required to get a total charge of 1 C from it? [Ans: 198.2 years]
Q. 8. A polythene piece rubbed with wool is found to have a negative charge of 3.2 x 10^-7 C. Calculate the number of electrons transferred.
Q. 9. How is static electricity is different from current electricity?
Q. 10. Define current electricity. What do you mean by condition electrons?
Q. 11. Define electric current. "It has both direction as well as magnitude"; then why is it a scalar?
Q. 12. What is the current flowing through a conductor if one million electrons are crossing in 1 millisecond through a cross-section of it?
Q. 13. A wire is carrying a current. Is it charged?
Q. 14. A large number of free electrons are present in metals. Why is there no current in the absence of electric field across it?
Q. 15. When we switch on an electric bulb, it lights almost instantaneously, though the drift velocity of electrons in the wires is very small. Explain.
Q. 16. Define, with an example, one volt of potential difference.
Q. 17. A wire having resistance R is stretched so as to reduce its diameter to half of its previous value. What will be its new resistance?
Q. 18. What will be the change in resistance of a eureka wire when its radius is halved and its length is reduced to one-fourth of its original length?
Q. 19. Long distance power transmission is carried on high voltage lines. Why?
Q. 20. Which lamp has greater resistance a: 60w or 100w lamp, when connected to the same supply?
Q. 21. A wire of resistance 4R is bent in the form of a circle. What is the effective resistance between the ends of a diameter?
Q. 22. A wire of resistivity ρ is stretched to three times its length. What will be its new resistivity?
Q. 23. Bends in a pipe slow down the flow of water through it. Do bends in a wire increase its electrical resistance?
Q. 24. The V-I graph for a conductor makes an angle θ with the V-axis. Here V denotes voltage and I denotes current. What is the resistance of this conductor? [Ans: R = cot θ]
Q. 25. What are the factors on which the resistance of a conductor depends? Give the corresponding relation.
Q. 26. To reduce the brightness of a light bulb, should an auxiliary resistance be connected in series with it or in parallel?
Q. 27. Current is allowed to flow in a metallic wire at a constant potential difference. When the wire becomes hot, cold water is poured on half of its portion. By doing so, its other half becomes still hotter. Explain the reason.
Q. 28. What is superconductivity? Write two of its applications.
Q. 29. Prove that in parallel combination of electrical applications, total power consumption is equal to the sum of the powers of the individual appliances.
Q. 30. A current in a circuit having constant resistance is tripled. How does this effect the power dissipation?
Q. 31. A wire connected to a bulb glows when same current flows through them. Why?
Q. 32. Nichrome and copper wires of the same length and diameter are connected in series in an electric circuit. In which wire will heat be produced at a higher rate? Explain.
Q. 33. Draw V-I graph for an Ohmic and Non-Ohmic material. Give one example for each.
Q. 34. How does the resistivity of (a) a conductor and (b) a semiconductor vary with temperature? Give reasons for each.
Q. 35. There is an impression among many people that a person touching a high power line gets stuck with the line. Is that true? Explain.
Q. 36. Explain why an electric bulb becomes dim when an electric heater in parallel circuit is made on. Why dimness decreases after some time?
Q. 37. Long distance power transmission is carried on high voltage lines. Why?
Science (Numericals on Current Electricity) SS1
Q. 1. How much work is done in moving an electron through a potential difference of 80 V?
Q. 2. What is the potential difference between the ends of 16Ω resistance, when a current of 1.5A flows through it?
Q. 3. The potential difference across the terminals of an electric iron is 240 V and the current is 6 A. What is the resistance of the electric iron?
Q. 4. If there are 10^8 electrons flowing across any cross-section of a wire in 4 minutes, what is the current in the wire?
Q. 5. A copper wire has a diameter of 0.5 mm and a resistivity of 1.6 x 10^-8 ohm m. What will be the length of this wire to make its resistance 10 ohms?
Q. 6. Find the effective resistance of resistors of 0.01 ohm and 10^7 ohm in series and in parallel.
Q. 7. Two resistors of same materials has been connected in series first and then in parallel. Draw a V – I graph to distinguish these connection.
Q. 8. Three resistors of 3, 4 and 5 ohms are joined in parallel in a circuit. If a current of 150 mA = 150 × 10^-3 A flows through the resistor of 4 ohms, then find the values of the currents (in mA) flowing in the other two resistors.
Q. 9. A wire of length 2 cm having resistance R is stretched so that its length increases by 100% of the original length. Find its new resistance with respect to its original resistance.
Q. 10. An electric lamp has resistance of 400 ohms. It is connected to a supply of 250V. If the price of electric energy is Rs.1.20 per unit, calculate the cost of lighting the lamp for 20 hours.
Q. 1. Name a device that helps to maintain the potential difference across a conductor
Q. 2. Why is an ammeter connected in series whereas a voltmeter is connected in parallel in a circuit?
Q. 3. Name the materials used as the heating element in a heater and as the filament in an electric bulb.
Q. 4. The p.d. between the terminals of an electric heater is 60 V when it draws a current of 4 A from a source. What current will the heater draw if the p.d. is increased to 120 V?
Q. 5. What are difference between resistivity and resistance?
Q. 6. If the resistance of a device is kept constant and the p.d. decreases to half, what will be the effect on its current?
Q. 7. Why are the coils of heating appliances made of alloys rather than pure metals?
Q. 8. What is variable resistance?
Q. 9. Why rheostat is used?
Q. 10. R1 , R2 & R3 are three resisters connected in series . derive equivalent resistance for circuit. If value of two resistance are 10 ohm and 20ohm , and a current of 5A flow through circuit having p.d 12v , find value of third resistor if all 3 are connected in parallel
Q. 11. What are advantages of connecting electrical device in parallelwith the batteries instead of series?
Q. 12. State joule’ s law. An electric iron consumes energy at rate of 840w when heating is at maximum rate and 360 w when heating is at minimum.The voltage is 220v . what are the current and resistant in each case?
Q. 13. Why inert gas such as nitrogen , filled in electric filament?
Q. 14. What is fuse? What type of material should be used for it?Give the rating of fuse for a device marked as 1000w –220v.
Q. 15. What is commercial unit of electrical energy? Express it in joule.
Q. 16. Four resistors of equal value are connected across a p.d. of 220 V and carry a current of 5 A. Find the value of each resistor.
Q. 17. Why should we not connect a bulb and a heater in series?
Q. 18. What is electric power? Give its SI unit.
Q. 19. Why are aluminium and copper wires generally used for the transmission of current?
Q. 20. Why are high tension wires used for long distance transmission?
Q. 21. Why does a compass needle get deflected when brought near a bar magnet?
Q. 22. Define magnetic field lines. What is the direction of the magnetic field lines inside a magnet?
Q. 23. How the relative strength of magnet is expressed?
Q. 24. What will be the effect on the deflection of the needle if (a) the current in the solenoid is changed (b) the magnitude of the current is increased?
Q. 25. If an electron enters a magnetic field moving from the west direction, how will it be deflected?
Q. 26. List the properties of magnetic field lines.
Q. 27. Why don’t two magnetic field intersect each other?
Q. 28. The magnetic field in a given region is uniform . Draw a diagram to represent it.
Q. 29. An alpha particle projected towards the west is deflected towards the north by a magnetic field. What is the direction of the magnetic field?
Q. 30. What is MRI?
Q. 31. Name two organs inside our body where a magnetic field is produced?
Q. 32. What is an electric motor? On which principle does it work? Name two devices in which it is used.
Q. 33. State Fleming's left-hand rule.
Q. 34. What is commutator?
Q. 35. What is an armature? How is an electromagnet more advantageous than a permanent magnet?
Q. 36. What is electromagnetic induction?
Q. 37. Give two ways to induce current in a coil.
Q. 38. What are differences between AC & DC?
Q. 39. After what time does AC change its direction? Give the frequency of the current produced in India.
Q. 40. What is a dynamo? On which principle does it work?
Q. 41. What modification should be made to get DC from an AC generator?
Q. 42. What precaution should be taken to avoid the overloading of domestic circuits?
Q. 43. What is a short circuit? When does it occur?
Q. 44. What is the function of the earth wire? Why is it necessary to earth a metallic appliance?
Q. 45. Give three methods of producing magnetic field.
Q. 1. Write the chemical formula of marble.
Q. 2. Why does silver chloride (AgCl) turn grey in sunlight?
Q. 3. Differentiate between displacement and double displacement reactions.
Q. 4. State Modern periodic law.
Q. 5. Why does atomic size decrease on moving from left to right along a period?
Q. 6. What is meant by the reactivity series of metals? Will the following reaction take place? 3MnO2(s) + 4Al(s) ----> 3Mn(l) + 2Al2O3(s) + heat
Q. 7. What is used as anode and cathode in electrolytic refining of metals?
Q. 8. What is the relation between butane and 2-methylpropane?
Q. 9. Explain the esterification reaction with an example. How is saponification related to it?
Q. 10. What is denatured alcohol? What causes blindness if it is consumed?
Q. 11. Write one use of ethanol and methanol each.
Q. 12. Write homologous series of alkene up to 5 carbon compound.
Q. 13. What is catenation?
Science (Metals and Non-Metals)SS1
Q. 1. Give an example of a metal which (a) is liquid at room temperature (b) can be easily cut with a knife.
Q. 2. Give an example of a metal which (a) is the best conductor of heat (b) is a poor conductor of heat.
Q. 3. Explain the meaning of malleable and ductile?
Q. 4. What are amphoteric oxides?
Q. 5. Give example of two amphoteric oxides?
Q. 6. What is anodizing?
Q. 7. What happens when metal react with water?
Q. 8. What happens when metals react with dilute acid? Give one example.
Q. 9. What is the reactivity series of metals?
Q. 10. Why is sodium or potassium metals kept immersed in kerosene oil?
Q. 11. Define minerals and ores?
Q. 12. What is the basis of removal of the ore?
Q. 13. What is roasting?
Q. 14. What is calcination?
Q. 15. What is an amalgam?
Q. 16. Why are foodcans coated with tin metal and not zinc metal?
Q. 17. State two ways to prevent the rusting of iron?
Q. 18. What types of oxides are formed when non metals combine with oxygen?
Q. 19. Differentiate between metals and non-metals on the basis of their physical properties?
Q. 20. Explain the refining of impure copper metal?
Q. 21. Write an activity to show that ionic compounds are good conductors of electric current in their aqueous solution?
Science (Acids, Bases and Salts)SS1
Q. 1. What are acids?
Q. 2. What are bases?
Q. 3. What happens when sodium hydrogen carbonate reacts with dilute HCl?
Q. 4. What is the colour of phenolphthalein indicator in an acidic solution, say in dilute HCl or dilute sulphuric acid?
Q. 5. Name some natural sources of acids and name the acids present in them?
Q. 6. Name some materials which are made from common salt (NaCl, sodium chloride).
Q. 7. Write the chemical formulae of the following salts: washing soda, baking soda and bleaching powder.
Q. 8. What do you mean by water of crystallisation?
Q. 9. Write the chemical formulae of gypsum and Plaster of Paris. Also write their chemical names.
Q. 10. Write some important chemical properties of acid?
Q. 11. Write some important chemical properties of bases?
Q. 12. How is baking soda produced? Write some uses of this compound?
Q. 13. How is washing soda produced?
Q. 14. How is plaster of paris prepared? Write some of its important uses.
Q. 15. Write an activity to show the reaction of acid with metal carbonates and metal hydrogen carbonate salts.
Q. 16. Why should curd and sour substances not be kept in brass and copper vessels?
Science (Chemical Reactions & Equations) SS1
Q. 1. What is chemical reaction?
Q. 2. Which substance is used for white-washing the walls?
Q. 3. What are exothermic reactions? Give one example also?
Q. 4. Name four types of chemical reactions?
Q. 5. What are combination reactions?
Q. 6. Give two example of combination reaction?
Q. 7. What are displacement reactions?
Q. 8. What are double displacement reactions?
Q. 9. What are oxidation reactions?
Q. 10. What are reduction reactions?
Q. 11. What is rancidity?
Q. 12. Why do we apply paint on iron articles?
Q. 13. What do you mean by precipitation reaction? Give its one example.
Q. 14. What is redox reactions? Give its two examples?
Q. 15. Write the balanced chemical equation for each of the following and identify the type of reaction in each case:
(a) Potassium bromide(aq) + barium iodide(aq) -> potassium iodide(aq) + barium bromide(s)
(b) Zinc carbonate(s) -> zinc oxide(s) + carbon dioxide(g)
(c) Magnesium(s) + hydrochloric acid(aq) -> magnesium chloride(aq) + hydrogen(g)
Q. 16. Write an activity to show the change in the state of matter and the change in temperature during a chemical reaction.
Q. 17. Write an activity to show the electrolysis of water, as an example of decomposition reaction
Q. 18. Name different types of chemical reaction? Define them and give their example?
Q. 19. How does the colour of copper sulphate solution change when iron is dipped in it?
Q.1. What is meant by thermal decomposition?
Q.2. Name the group of chemical substances used to prevent oxidation.
Q.3. What is the nature of oxides formed by metals and non metals?
Q.4. Name one cheap reducing agent commonly used in extraction of pure metals.
Q.5. Define catenation.
Q.6. Name the gas evolved when sodium carbonate or sodium bicarbonate reacts with ethanoic acid?
Q.7. Name the cells which regulate the opening and closing of stomata.
Q.8. Name the enzyme responsible for changing starch to sugar in mouth.
Q.9. What is scum?
Q.10. Which type of flame is produced by saturated hydrocarbons on incomplete combustion?
Q.11. What happens when calcium oxide reacts with water? Write the chemical equation of reaction involved.
Q.12. Respiration is considered an exothermic reaction. Explain?
Q.13. Give reason:- Ionic compounds conduct electricity only in molten state not in solid state.
Q.14. Explain why:-
Aluminium is more reactive than iron, yet its corrosion is less than that of iron.
Carbonate and sulphide ores are usually converted into oxides before reduction during the process of extraction.
Q.15. What is hydrogenation? Write its industrial application.
Q.16. Write the common name of ethanoic acid. What is its dilute solution (5-8%) in water known as?
Q.17. Write the functions of muscular wall in digestive tract.
Q.18. Why are danger signals red?
Q.19. Explain the phenomenon which causes the twinkling of stars.
Q. 20. An object of 4 cm height is placed at a distance of 15 cm from a convex lens of focal length 10 cm. Find the nature, size and position of the image. Find its magnification.
Q. 21. Draw and explain the ray diagram of the image formed by a convex mirror when (a) the object is at infinity (b) the object is at a finite distance from the mirror.
Q. 22. Name three refractive defects of vision with the help of diagrams. Explain the reasons for and the correction of these defects.
Q. 23. Describe urine formation in human beings. Draw a neat and labelled diagram of a nephron.
Q. 24. What are soaps? Explain the mechanism of the cleaning action of soaps. Soaps form scum (an insoluble substance) with hard water. Explain why. How is this problem overcome by the use of detergents?
Q. 25. What are ionic compounds? State four properties of ionic compounds with respect to their physical nature, melting and boiling points, solubility and conduction of electricity.
Q. 1. HCl and HNO3 both produce hydrogen ions in aqueous solution. With metals they produce hydrogen gas. However, as HNO3 is a strong oxidising agent, it reacts with the hydrogen gas so produced during the reaction to form water. Thus, acids such as HCl and H2SO4 produce hydrogen gas with metals whereas HNO3 produces water.
Substances that occur naturally in rocks and have their own characteristic appearance and chemical composition .
Minerals, from which the metals can be profitably extracted, are called ores.
Waste materials, which are mixed in the valuable ores.
All minerals are not ores.
All ores are minerals.
Q. 3. Metal A is Zn.
It reacts with blue copper sulphate to form
colourless zinc sulphate.
Zn(s)+CuSO4 (aq)→ ZnSO4 (aq)+Cu(s)↓
Copper sulphate zinc sulphate (Brown)
It reacts with green iron sulphate to produce colourless zinc sulphate.
Zn(s)+FeSO4 (aq)→ZnSO4 (aq)+Fe(s)↓
Ferrous sulphate zinc sulphate (grey)
Zn does not react with aluminium hydroxide, as Zn is less reactive than Al, hence it cannot displace it.
Q. 4. Highly reactive elements are obtained by the electrolysis of their molten chlorides. The metals are deposited at the cathode, whereas, chlorine is liberated at the anode. In the case of sodium chloride, the reactions are: At cathode, Na+ + e- → Na At anode, 2Cl- → Cl2 + 2e- Similarly, aluminium is obtained by the electrolytic reduction of aluminium oxide. Carbon cannot be used for the reduction of highly reactive metals because these metals have more affinity for oxygen than carbon. These metals are obtained by electrolytic reduction.
Q. 5. Electrolytic refining is the process of obtaining metals of very high purity by depositing the desired metal and the impure metal on different electrodes in an electrolytic cell. Metals obtained after reduction are not very pure, so they need to be refined further. The most widely used method is electrolytic refining. For example, in the electrolytic refining of copper, CuSO4 solution is acidified (to make it highly conducting) and is used as the electrolyte. The anode is made of a thick impure copper rod, whereas the cathode is made of thin pure copper. The anode is connected to the positive terminal of the battery whereas the cathode is connected to the negative terminal of the battery. Pure metal is obtained at the cathode.
Q. 6. Corrosion is the wearing away, dissolving, or softening of any substance which takes place due to chemical or electrochemical reaction with its environment. It applies to the gradual action of natural agents, such as air or salt water, on metals. It is rather a slow process. Four methods used to prevent from corrosion are:
Painting, Oiling or Greasing: It protects the metal from corrosion by forming a layer, which cuts off the exposure to moisture and air.
Anodising: Process of coating a metal, e.g., aluminium, with a protective or decorative oxide by making the metal the anode of an electrolytic cell .
Galvanising: It is a method of depositing a thin layer of zinc over steel and iron objects. The galvanised articles remain protected even if the zinc coating is broken.
Alloying: Most of the metals are used as alloys—that is, mixtures of several elements—because these are superior to pure metals in their properties and uses. Alloying is done for many reasons: it helps in increasing strength, making articles corrosion resistant, or reducing costs.
Q. 1. Name the following:
Two renewable (non-conventional) sources of energy
Two non-renewable or conventional sources of energy
Two forms of energy usually used at homes
The radiation emitted from a hot source
The component of sunlight that is absorbed by the ozone layer of the atmosphere.
Two activities in our daily life in which solar energy is used
The kind of surface that absorbs maximum heat
The device that directly converts solar energy into electrical energy
The range of temperature attained inside a box-type solar cooker placed in the sun for 2-3 hours
The two elements which are used to fabricate solar cells
Q. 2. State an important characteristic of a source of energy.
Q. 3. Which component of sun's energy is responsible for drying clothes?
Q. 4. What type of energy is possessed by wind?
Q. 5. Though a hot iron emits radiations, it is not visible to us. Why?
Q. 6. What type of radiations is emitted by a 100 W electric bulb?
Q. 7. How is the conductivity of a semi-conductor affected when light falls on it?
Q. 8. What is the main cause for winds to blow?
Q. 9. What is the minimum wind speed required for generating electricity in a wind mill?
Q. 10. What is a wind farm?
Q. 11. What is the principle involved in the working of the thermal power plant?
Q. 12. What is the energy conversion involved in a thermal power plant?
Q. 13. What is Biomass?
Q. 14. What is biogas?
Q. 15. Write the principle of the windmill?
Q. 16. What are hot spots?
Q. 17. What is nuclear fission reaction?
Q. 18. A sheet of glass is used in solar heating devices. Why?
Q. 19. Explain the construction and working of a hydro-electric power plant with a neat schematic diagram.
Q. 20. Give two main differences between renewable and non-renewable sources of energy.
Science (Life Processes)SS1
Very short and short answer type questions: 1 mark and 2 marks each
Q. 1. What are nutrients?
Q. 2. Name the life process that provides energy.
Q. 3. Which process provides all living things with raw materials for energy and growth?
Q. 4. Name the essential pigment that absorbs light.
Q. 5. Can you name the gaseous raw material of photosynthesis?
Q. 6. If the grana of a chloroplast are removed, then which of its reactions will not be carried out?
Q. 7. Name the gas that is produced as a by-product during photosynthesis.
Q. 8. The function of salivary amylase is to convert: (a) fats into fatty acids (b) proteins into amino acids (c) starch into sugar (d) sugar into starch.
Q. 9. Artificial removal of nitrogenous wastes from the human body in the event of kidney failure is called: (a) plasmolysis (b) dialysis (c) diffusion (d) osmosis.
Q. 10. Tick the correct statement.
Arteries carry blood away from the heart while veins carry blood towards heart.
Veins carry blood away from the heart while arteries carry blood towards heart.
Both of them carry blood in the same direction.
Either of them can carry blood away from the heart.
Q. 11. Name the pore through which gaseous exchange takes place in older stems.
Q. 12. Why is blood red?
Q. 13. What is the functional unit of kidney?
Q. 14. Define translocation.
Q. 15. Name the vessel that brings oxygenated blood from lungs to heart.
Q. 16. Why is the colour of lymph yellow?
Q. 17. Name the reagent which is used to test the presence of starch.
Q. 18. Why are the walls of the auricles thinner than those of the ventricles?
Q. 19. Name the mode of nutrition in which digestive enzymes are secreted outside the body.
Q. 20. What is ATP?
Short answer type questions 2 marks
Q. 21. Why is the rate of breathing in terrestrial animals slower than aquatic animals?
Q. 22. The parts shown as A and B in the given diagram are
A) A is epidermal cell, B is stomatal pore
B) A is guard cell, B is stomatal pore
C) A is epidermal cell, B is guard cell
D) A is guard cells, B is epidermal cell
Q. 23. Which activity is illustrated in the diagram of an Amoeba shown below?
Q. 24. The diagram below represents urinary system in the human body. Identify the structure through which urine leaves the urinary bladder.
Q. 25. A student covered a leaf from a destarched plant with a black paper strip and kept it in the garden outside his house in fresh air. In the evening, he tested the covered portion of the leaf for the presence of starch. What was the student trying to show? Comment.
QUESTIONS OF LIFE PROCESSES SS1
Q. 1. Name the product and by product of photosynthesis.
Q. 2. In which biochemical form the photosynthate moves in phloem tissue?
Q. 3. What are the raw materials of photosynthesis?
Q. 4. What is the similarity between chlorophyll and hemoglobin?
Q. 5. Name the products of photolysis of water.
Q. 6. What are the end products of light dependant reaction?
Q. 7. Which cell organelle is the site of photosynthesis?
Q. 8. What is the difference between digestion of heterotrophs and saprotrophs?
Q. 9. Give example of two plants and two animal parasites.
Q. 10. Name the enzyme present in saliva, what is its role in digestion?
Q. 11. Which chemical is used to test for starch? Which colour shows the presence of starch?
Q. 12. Give the term- rhythmic contraction of alimentary canal muscle to propel food.
Q. 13. Name the three secretions of gastric glands.
Q. 14. What is the function of mucus in gastric gland?
Q. 15. Name the sphincter which regulates the exit of food from the stomach.
Q. 16. Give the functions of hydrochloric acid for the body.
Q. 17. What is the role of pepsin in stomach?
Q. 18. Why pancreas is called mixed gland?
Q. 19. Give two functions of bile juice, from which organ it is released?
Q. 20. Name the largest gland of our body.
Q. 21. Name any three important enzymes of pancreas and the food component on which they act.
Q. 22. Where from intestinal juice come to the small intestine?
Q. 23. What is the function of intestinal juice?
Q. 24. What are the simplest digestive product of carbohydrate, fats and protein?
Q. 25. Name the finger like projections of small intestine and what is the necessity of such type of projections in digestive system?
Q. 26. Why are intestinal villi highly vascular?
Q. 27. What is the function of the anal sphincter?
Q. 28. Name the site of anaerobic and aerobic respiration in a cell.
Q. 29. A three carbon compound is the common product of both aerobic and anaerobic pathway. What is that?
Q. 30. Why do we get muscle cramp after vigorous exercise?
Q. 31. Distinguish between lactic acid and alcoholic fermentation?
Q. 32. Name the energy currency molecule of cell?
Q. 33. The breathing rate of aquatic animals is high, why?
Q. 34. What is the function of mucus and fine hair in nostrils?
Q. 35. Give the function of network of capillaries on alveoli.
Q. 36. Name the main carrier of oxygen and carbon dioxide in man.
Q. 37. Why does haemoglobin molecule act as efficient carrier of oxygen than diffusion process?
Q. 38. Give example of any three substances transported by plasma.
Q. 39. Name the organ that- (a) pushes blood around body (b) make blood to reach to tissues.
Q. 40. Name the blood vessel that carries blood from heart to lungs and from lungs to heart.
Q. 41. How many heart chambers are there in (a) fish (b) frog (c) lizard (d) crocodile (e) birds (f) man?
Q. 42. Name the device that measures blood pressure.
Q. 43. What is the normal blood pressure of man?
Q. 44. Why capillaries are thin walled?
Q. 45. Which cell of blood help in wound healing?
Q. 46. What is the other name of lymph?
Q. 47. Give two function of lymph.
Q. 48. .What is the direction of flow of water in xylem and food in phloem?
Q. 49. Why do plants need less energy than animals?
Q. 50. Which process acts as suction to pull water from xylem cells of roots.
Q. 51. Mention two functions of transpiration.
Q. 52. What are the two substances transported through phloem tissue?
Q. 53. Name the food component whose digestion produce
Q. 54. Which is the functional unit of kidney?
Q. 55. What is the cup shaped structure of nephron called?
Q. 56. Which materials are selectively reabsorbed by nephron tubule?
Q. 57. What are the two important functions of kidney.
Q. 58. What is the other name of artificial kidney?
MULTIPLE CHOICE QUESTIONS
Q. 1. A key molecule NOT found in a chloroplast is...
1. Chlorophyll  2. Carbon dioxide  3. Water  4. Steroids
Q. 2. Photosynthesis is a good example of... 1. Catabolism  2. Anabolism
Q. 3. Chloroplasts are found in heterotrophic cells. 1. True  2. False
Q. 4. Which of these choices is NOT in the structure of a chloroplast?
1. Granum  2. Stroma  3. Cristae  4. Thylakoid
Q. 5. Only plants can conduct photosynthesis with chloroplasts. 1. True  2. False
Q. 6. Chloroplasts convert solar energy into physical energy. 1. True  2. False
Q. 7. There is only one type of chlorophyll found in chloroplasts. 1. True  2. False
Q. 8. Because plants have chloroplasts to generate glucose, they are self-sufficient. 1. True  2. False
Q. 9. A chloroplast is made of cellulose. 1. True  2. False
Science (Life Processes) SS1+2
Q. 1. What are life processes?
Q. 2. What outside raw materials, are used for life by an organism?
Q. 3. What are enzymes?
Q. 4. Explain the action of saliva secreted from salivary glands on the food?
Q. 5. Name the common process, both in the aerobic and anaerobic respirations?
Q. 6. Name the products produced by the fermentation of glucose by the yeast cell?
Q. 7. Why is it necessary to separate oxygenated and deoxygenated blood in mammals and birds?
Q. 8. Name the functional unit of human kidney?
Q. 9. The xylem in plants is responsible for __________.
Q. 10. Define photosynthesis?
Q. 11. What substances are contained in the gastric juice? What are their function?
Q. 12. What are the various processes that take place in the duodenum?
Q. 13. What are the different types of heterotrophic nutrition?
Q. 14. Show by experiment that sunlight is necessary for photosynthesis.
Q. 15. Name the type of respiration in which the end products are: _______________.
Q. 16. Describe the process of anaerobic respiration.
Q. 17. Distinguish between breathing and respiration?
Q. 18. Differentiate between artery and vein?
Q. 19. Give examples of solid, liquid and gaseous wastes in plants?
Q. 20. Explain the nutrition process in amoeba?
Q. 21. Write important functions of blood?
Q. 22. Describe double circulation in human beings. Why is it necessary?
Q. 23. Compare the functioning of alveoli in the lungs and nephrons in the kidneys with respect to their structure and functioning.
Q. 24. Explain the mechanism of the circulation of blood in the human body.
Q. 25. What criteria do we use to decide whether something is alive?
Q. 26. What is the role of acid in our stomach?
Q. 27. How are oxygen and carbon dioxide transported in human beings?
Q. 28. What are the components of the transport system in human beings? What are the functions of these components?
Q. 29. What are the components of the transport system in highly organised plants?
Q. 30. Describe the structure and functioning of nephrons?
Q. 31. How is the amount of urine produced regulated?
Q. 32. What is the role of saliva in the digestion of food?
Q. 1. Why is DNA copying essential for reproduction?
Q. 2. What is the role of a catalyst in a chemical reaction?
Q. 3. Which class of compounds gives a positive Fehling's test? Ans: aldehydes
Q. 4. What is meant by 1 volt?
Q. 5. Write the chemical name and formula for bleaching powder.
Q. 6. What is meant by a dehydrating agent?
Q. 7. State Ohm's law. Draw a schematic diagram of the circuit for studying Ohm's law.
Q. 8. State one limitation of the usage of solar energy.
Q. 9. What is a gene and where is it located?
Q. 10. Explain double fertilization in plants.
Q. 11. What is meant by a magnetic field?
Q. 12. Why is chlorophyll green in colour?
Q. 13. Give two methods to prevent rusting.
Q. 14. State the functions of gastric glands. Q. 15. Explain why stars twinkle. | http://bhatiasir.yolasite.com/x-maths-scinece-questions.php | 13 |
54 | Algebra: In Simplest Terms
In this series, host Sol Garfunkel explains how algebra is used for solving real-world problems and clearly explains concepts that may baffle many students. Graphic illustrations and on-location examples help students connect mathematics to daily life. The series also has applications in geometry and calculus instruction.
1. Introduction—An introduction to the series, this program presents several mathematical themes and emphasizes why algebra is important in today’s world.
2. The Language of Algebra—This program provides a survey of basic mathematical terminology. Content includes properties of the real number system and the basic axioms and theorems of algebra. Specific terms covered include algebraic expression, variable, product, sum term, factors, common factors, like terms, simplify, equation, sets of numbers, and axioms.
3. Exponents and Radicals—This program explains the properties of exponents and radicals: their definitions, their rules, and their applications to positive numbers.
4. Factoring Polynomials—This program defines polynomials and describes how the distributive property is used to multiply common monomial factors with the FOIL method. It covers factoring, the difference of two squares, trinomials as products of two binomials, the sum and difference of two cubes, and regrouping of terms.
5. Linear Equations—This is the first program in which equations are solved. It shows how solutions are obtained, what they mean, and how to check them using one unknown.
6. Complex Numbers—To the sets of numbers reviewed in previous lessons, this program adds complex numbers — their definition and their use in basic operations and quadratic equations.
7. Quadratic Equations—This program reviews the quadratic equation and covers standard form, factoring, checking the solution, the Zero Product Property, and the difference of two squares.
8. Inequalities—This program teaches students the properties and solution of inequalities, linking positive and negative numbers to the direction of the inequality.
9. Absolute Value—In this program, the concept of absolute value is defined, enabling students to use it in equations and inequalities. One application example involves systolic blood pressure, using a formula incorporating absolute value to find a person’s “pressure difference from normal.”
10. Linear Relations—This program looks at the linear relationship between two variables, expressed as a set of ordered pairs. Students are shown the use of linear equations to develop and provide information about two quantities, as well as the applications of these equations to the slope of a line.
11. Circle and Parabola—The circle and parabola are presented as two of the four conic sections explored in this series. The circle, its various measures when graphed on the coordinate plane (distance, radius, etc.), its related equations (e.g., center-radius form), and its relationships with other shapes are covered, as is the parabola with its various measures and characteristics (focus, directrix, vertex, etc.).
12. Ellipse and Hyperbola—The ellipse and hyperbola, the other two conic sections examined in the series, are introduced. The program defines the two terms, distinguishing between them with different language, equations, and graphic representations.
13. Functions—This program defines a function, discusses domain and range, and develops an equation from real situations. The cutting of pizza and encoding of secret messages provide subjects for the demonstration of functions and their usefulness.
14. Composition and Inverse Functions—Graphics are used to introduce composites and inverses of functions as applied to calculation of the Gross National Product.
15. Variation—In this program, students are given examples of special functions in the form of direct variation and inverse variation, with a discussion of combined variation and the constant of proportionality.
16. Polynomial Functions—This program explains how to identify, graph, and determine all intercepts of a polynomial function. It covers the role of coefficients; real numbers; exponents; and linear, quadratic, and cubic functions. This program touches upon factors, x-intercepts, and zero values.
17. Rational Functions—A rational function is the quotient of two polynomial functions. The properties of these functions are investigated using cases in which each rational function is expressed in its simplified form.
18. Exponential Functions—Students are taught the exponential function, as illustrated through formulas. The population of Massachusetts, the “learning curve,” bacterial growth, and radioactive decay demonstrate these functions and the concepts of exponential growth and decay.
19. Logarithmic Functions—This program covers the logarithmic relationship, the use of logarithmic properties, and the handling of a scientific calculator. How radioactive dating and the Richter scale depend on the properties of logarithms is explained
20. Systems of Equations—The case of two linear equations in two unknowns is considered throughout this program. Elimination and substitution methods are used to find single solutions to systems of linear and nonlinear equations.
21. Systems of Linear Inequalities—Elimination and substitution are used again to solve systems of linear inequalities. Linear programming is shown to solve problems in the Berlin airlift, production of butter and ice cream, school redistricting, and other situations while constraints, corner points, objective functions, the region of feasible solutions, and minimum and maximum values are also explored.
22. Arithmetic Sequences and Series—When the growth of a child is regular, it can be described by an arithmetic sequence. This program differentiates between arithmetic and nonarithmetic sequences as it presents the solutions to sequence- and series-related problems
23. Geometric Sequences and Series—This program provides examples of geometric sequences and series (f-stops on a camera and the bouncing of a ball), explaining the meaning of nonzero constant real number and common ratio.
24. Mathematical Induction—Mathematical proofs applied to hypothetical statements shape this discussion on mathematical induction. This segment exhibits special cases, looks at the development of number patterns, relates the patterns to Pascal’s triangle and factorials, and elaborates the general form of the theorem.
25. Permutations and Combinations—How many variations in a license plate number or poker hand are possible? This program answers the question and shows students how it’s done.
26. Probability—In this final program, students see how the various techniques of algebra that they have learned can be applied to the study of probability. The program shows that games of chance, health statistics, and product safety are areas in which decisions must be made according to our understanding of the odds.
| http://www2.prairiepublic.org/education/instructional-resources?post=13951 | 13 |
54 | Trigonometry functions - introduction
There are six functions that are the core of trigonometry.
There are three primary ones that you need to understand completely:
- Sine (sin)
- Cosine (cos)
- Tangent (tan)
The other three are not used as often and can be derived from the three primary functions.
Because they can easily be derived, calculators and spreadsheets do not usually have them.
- Secant (sec)
- Cosecant (csc)
- Cotangent (cot)
All six functions have three-letter abbreviations (shown in parentheses above).
Definitions of the six functions
Consider the right triangle in the figure on the left.
For each angle P or Q, there are six functions; each function is the
ratio of two sides of the triangle.
The only difference between the six functions is which pair of sides we use.
In the following table
- a is the length of the side adjacent to the angle (x) in question.
- o is the length of the side opposite the angle.
- h is the length of the hypotenuse.
"x" represents the measure of the angle in either degrees or radians.
For example, in the figure above, the cosine of x is the side adjacent to x (labeled a), over the hypotenuse (labeled h).
If a=12cm, and h=24cm, then cos x = 0.5 (12 over 24).
Soh Cah Toa
These 9 letters are a memory aid to remember the ratios for the three primary functions - sin, cos and tan.
Pronounced a bit like "soaka towa".
The ratios are constant
Because the functions are a ratio
of two side lengths, they always produce the same result for a given angle,
regardless of the size of the triangle.
In the figure on the left, drag the point C. The triangle will adjust to keep the angle C at 30°.
Note how the ratio of
the opposite side to the hypotenuse does not change, even though their lengths do.
Because of that, the sine of 30° does not vary either. It is always 0.5.
Remember: When you apply a trig function to a given angle, it always produces the same result.
For example tan 60° is always 1.732.
Using a calculator
Most calculators have buttons to find the sin, cos and tan of an angle.
Be sure to set the calculator to degrees or radians mode depending on what units you are using.
For each of the six functions there is an inverse function that works in reverse.
The inverse function has the letters 'ARC' in front of it.
For example the inverse function of COS is ARCCOS. While COS tells you the cosine of an angle,
ARCCOS tells you what angle has a given cosine.
See Inverse trigonometric functions.
On calculators and spreadsheets, the inverse functions are sometimes
written acos(x) or cos-1(x).
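The same calculations can be done in a short program. Here is a minimal C++ sketch (illustrative only), remembering that <cmath> works in radians rather than degrees:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979323846;
        double degrees = 60.0;
        double radians = degrees * PI / 180.0;   // <cmath> expects radians, not degrees

        double c = std::cos(radians);            // 0.5 for 60 degrees
        double back = std::acos(c);              // ARCCOS: which angle has this cosine?
        std::printf("cos(60 deg) = %f\n", c);
        std::printf("arccos(%f) = %f degrees\n", c, back * 180.0 / PI);
        return 0;
    }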
Trigonometry functions of large and/or negative angles
The six functions can also be defined in a rectangular coordinate system.
This allows them to go beyond right triangles, to where the angles can have any measure,
even beyond 360°, and can be both positive and negative. For more on this see
Trigonometry functions of large and negative angles.
Identities - replacing a function with others
Trigonometric identities are simply ways of writing one function using others. For example, from the table above we see that sec x = 1 / cos x.
This equivalence is called an identity.
If we had an equation with sec x in it, we could replace sec x with
one over cos x if that helps us reach our goals.
There are many such identities. For more see Trigonometric identities.
Not just right triangles
These functions are defined using a right triangle, but they have uses in other triangles too.
For example the
Law of Sines and the
Law of Cosines can be used to
solve any triangle - not just right triangles.
Graphing the functions
The functions can be graphed, and some, notably the SIN function, produce shapes that frequently occur in nature.
For example see the graph of the SIN function, often called a sine wave, on the right.
For more, see the graph of the sine function.
Pure audio tones and radio waves are sine waves in their respective medium.
Derivatives of the trig functions
Each of the functions can be differentiated in calculus.
The result is another function that indicates its rate of change (slope) at particular values of x.
These derivative functions are stated in terms of other trig functions.
For more on this see
Derivatives of trigonometric functions.
See also the Calculus Table of Contents.
(C) 2009 Copyright Math Open Reference. All rights reserved | http://www.mathopenref.com/trigfunctions.html | 13 |
56 | The Purplemath Forums
The Remainder Theorem
The Remainder Theorem is useful for evaluating polynomials at a given value of x, though it might not seem so, at least at first blush. This is because the tool is presented as a theorem with a proof, and you probably don't feel ready for proofs at this stage in your studies. Fortunately, you don't "have" to understand the proof of the Theorem; you just need to understand how to use the Theorem.
The Remainder Theorem starts with an unnamed polynomial p(x), where "p(x)" just means "some polynomial p whose variable is x". Then the Theorem talks about dividing that polynomial by some linear factor x – a, where a is just some number. Then, as a result of the long polynomial division, you end up with some polynomial answer q(x) (the "q" standing for "the quotient polynomial") and some polynomial remainder r(x).
As a concrete example of p, a, q, and r, let's look at the polynomial p(x) = x^3 – 7x – 6, and let's divide by the linear factor x – 4 (so a = 4):
So we get a quotient of q(x) = x^2 + 4x + 9 on top, with a remainder of r(x) = 30.
You know, from long division of regular numbers,
that your remainder (if there is one) has to be smaller than whatever you divided by. In polynomial
terms, since we're dividing by a linear factor (that is, a factor in which the degree on x
is just an understood "1"), then the remainder must be a constant value. That
is, when you divide by "x
– a", your remainder will just be
The Remainder Theorem then points out the connection between division and multiplication. For instance, since 12 ÷ 3 = 4, then 4 × 3 = 12. If you get a remainder, you do the multiplication and then add the remainder back in. For instance, since 13 ÷ 5 = 2 R 3, then 13 = 5 × 2 + 3. This process works the same way with polynomials. That is:
If p(x) / (x – a) = q(x) with remainder r(x),
then p(x) = (x – a) q(x) + r(x).
(Technically, this "if - then" statement is the "Division Algorithm for Polynomials". But the Algorithm is the basis for the Remainder Theorem.)
In terms of our concrete example:
Since (x^3 – 7x – 6) / (x – 4) = x^2 + 4x + 9 with remainder 30,
then x^3 – 7x – 6 = (x – 4) (x^2 + 4x + 9) + 30.
The Remainder Theorem says that we can restate the polynomial in terms of the divisor, and then evaluate the polynomial at x = a. But when x = a, the factor "x – a" is just zero! Then evaluating the polynomial at x = a gives us:
p(a) = (a – a)q(a) + r(a) = (0)q(a) + r(a) = r(a)
But remember that the remainder term r(a) is just a number! So the value of the polynomial p(x) at x = a is the same as the remainder you get when you divide by x – a. In our concrete example:
p(4) = (4 – 4)((4)^2 + 4(4) + 9) + 30 = (0)(41) + 30 = 30
But you gotta think: Okay, fine; the value of the polynomial p(x) at x = a is the remainder r(a) when you divide by x – a, but who wants to do the long division each time you have to evaluate a polynomial at a given value of x?!? You're right; this would be overkill. Fortunately, that's not what they really want you to do.
When you are dividing by a linear factor, you don't "have" to use long polynomial division; instead, you can use synthetic division, which is much quicker. In our example, we would get:

    4 |   1    0   -7   -6
      |        4   16   36
      ---------------------
          1    4    9   30

Note that the last entry in the bottom row is 30, the remainder from the long division (as expected) and also the value of p(x) = x^3 – 7x – 6 at x = 4. And that is the point of the Remainder Theorem: There is a simpler, quicker way to evaluate a polynomial p(x) at a given value of x, and this simpler way is not to evaluate p(x) at all, but to instead do the synthetic division at that same value of x. Here are some examples:
First off, even though the Remainder Theorem refers to the polynomial and to long division and to restating the polynomial in terms of a quotient, a divisor, and a remainder, that's not actually what I'm meant to be doing. Instead, I'm supposed to be doing synthetic division, using "3" as the divisor:
Since the remainder (the last entry in the bottom row) is 112, then the Remainder Theorem says that:
f (3) = 112.
I need to do the synthetic division, remembering to put zeroes in for the powers of x that are not included in the polynomial:
Since the remainder is 1605, then, thanks to the Remainder Theorem, I know that:
f (–5) = 1605.
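Synthetic division at x = a performs the same arithmetic as Horner's method, so a short program can be used to check evaluations like these. This is only an illustrative C++ sketch; the coefficients shown are for the earlier concrete example p(x) = x^3 – 7x – 6, not for the exercise polynomials (which are not restated here):

    #include <cstdio>
    #include <vector>

    // Horner's method: the same arithmetic as one row of synthetic division.
    double evaluate(const std::vector<double>& coeffs, double a) {
        double result = 0.0;
        for (double c : coeffs)          // coefficients from highest power down
            result = result * a + c;     // multiply by a, then add the next coefficient
        return result;                   // this is p(a), i.e. the remainder
    }

    int main() {
        std::vector<double> p = {1, 0, -7, -6};       // x^3 + 0x^2 - 7x - 6
        std::printf("p(4) = %g\n", evaluate(p, 4));   // prints 30
        return 0;
    }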
For x = 2 to be a zero of f (x), then f (2) must evaluate to zero. In the context of the Remainder Theorem, this means that my remainder, when dividing by x = 2, must be zero:
The remainder is not zero. Then x = 2 is not a zero of f (x).
For x = –4 to be a solution of f (x) = x^6 + 5x^5 + 5x^4 + 5x^3 + 2x^2 – 10x – 8 = 0, it must be that f (–4) = 0. In the context of the Remainder Theorem, this means that the remainder, when dividing by x = –4, must be zero:
The remainder is zero. Then x = –4 is a solution of the given equation. | http://www.purplemath.com/modules/remaindr.htm | 13 |
139 | An algorithm is a clearly described procedure that tells you how to
solve a well defined problem.
- Your knowledge will be tested in CSci 202, and all dependent CSci classes.
- Solving problems is a bankable skill and algorithms help you solve problems.
- Job interviews test your knowledge of algorithms.
- You can do things better if you know a better algorithm.
- Inventing and analyzing algorithms is a pleasant hobby.
- Publishing a novel algorithm can make you famous. Several Computer Scientists
started their career with a new algorithm.
A clearly described procedure is made of simple, clearly defined steps. It often includes
making decisions and repeating earlier steps. The procedure can be complex, but the steps
must be simple and unambiguous. An algorithm describes how to solve a
problem in steps that a child could follow without understanding the problem.
Algorithms solve problems. Without a well defined problem an algorithm is not much use.
For example, if you are given a sock and the problem is to find a matching sock in a pile of similar socks,
then an algorithm is to
take every sock in turn and see if it matches the given sock, and if so stop. This
algorithm is called a linear search. If the problem is that you are given a pile of
socks and you have to sort them all into pairs then a different algorithm is needed.
Exercise: get two packs of cards and shuffle each one. Take one card from the first pack
and search through the other pack to find the matching card. Repeat this until
you get a feel for how it performs.
If the problem is putting 14 socks away in matching pairs then one
algorithm is to
- clear a place to layout the unmatched socks,
- for each sock in turn,
- compare it to each unmatched sock and if they match put
them both away as a pair. But, if the new sock matches none of the unmatched
socks, add it to the unmatched socks.
I call this the "SockSort" algorithm.
Exercise. Get a pack of card and extract the Ace,2,3, .. 7 of the hearts and
clubs. Shuffle them together. Now run the SockSort algorithm to pair them
up by rank: Ace-Ace, 2-2, ... .
If you had an array of 7 numbered boxes you could sort the cards much faster than
sorting the socks. Perhaps I should put numbered tags on my socks?
Note: sometimes a real problem is best avoided rather than solved.
So, changing what we are given, changes the problem, and so changes the
best algorithm to solve it.
Here is another example. Problem: to find a person's name in a telephone
directory. Algorithm: use your finger to
split the phone book in half. Pick the half that
contains the name. Repeatedly, split the piece of the phone book
that contains the name in half, ... This is a binary search algorithm.
Here is another example problem using the same data as the previous one: Find my
neighbor's phone number. Algorithm: look up my address using binary search,
then read the whole directory looking for the address that is closest to mine.
This is a linear search.
If we change the given data, the problem of my neighbor's phone number has a
much faster algorithm. All we need is a copy of the census listing of people by
street and it is easy to find my neighbor's name and so phone number.
A problem is best expressed in terms of:
- Givens: What is there before the algorithm?
- Goals: What is needed?
- Operations: What operations are permitted?
To describe an algorithm we either use a structured form of English with numbered steps or a
pseudocode that is based on a programming language.
The word is a corruption of the name of a mathematician born in Khwarizm in
Uzbekistan in the 820's(CE). Al-Khwarizmi (as he was known) was one of the
first inducted into the "House of Wisdom" in Baghdad where he worked on algebra,
arithmetic, astronomy, and solving equations. His work had a strongly
computational flavor. Later, in his honor, medieval mathematicians in Europe
called similar methods "algorismic" and then, later, "algorithmic".
[pages 113-119, "The Rainbow of Mathematics", Ivor Grattan-Guiness, W W Norton
& Co, NY, 1998]
If you are using C++ and understand the ideas used in the C++ <algorithm>
library you can save a lot of time. Otherwise you will have to reinvent the
wheel. As a rule, the standard algorithm will be faster than anything you could
quickly code to do the same job. However, you still need to understand the
theory of algorithms and data structures to be able to use them well.
- Know the definition of an algorithm above.
- Know how to express a problem.
- Recognize well known problems.
- Recognize well known algorithms.
- Match algorithms to problems.
- Describe algorithms informally and in structured English/pseudocode
- Walk through an algorithm, step by step, for a particular problem by hand.
- Code an algorithm expressed in structured English.
- Know when to use an algorithm and where they relate to objects and classes,
Bjarne Stroustrup, the developer of C++, has written a very comprehensive and deep introduction to
the standard C++ library as part of his classic C++ book. This book is in the CSUSB library.
As a general rule: a practical programmer starts searching the library
of their language for known algorithms before "reinventing the wheel".
No... unless the algorithm is simple or the author very good at writing.
It helps if you know many algorithms. The same ideas turn up in many different algorithms.
I find that a deck of playing cards is very helpful for working through sorting and searching algorithms.
A pencil and paper is useful for doing a dry run of an algorithm. With a group,
chalk and chalk board is very helpful.
Programming a computer just to test an algorithm tends to waste time unless you use the program
to gather statistics on the performance of the algorithm.
When there are loops it is well worth looking for things that the body
of the loop does not change. They are called invariants. If an invariant
is true before the loop, then it will be true afterward as well. You can
often figure out precisely what an algorithm does by noting what it does not change.
Probably the simplest algorithm worth knowing solves the problem of swapping the
values of two locations or variables. Here you are given two variables or locations
p and q that hold the same type of data, and you need to swap them. This looks trivial
but in fact we need to use an extra temporary variable t:
Algorithm to Swap p and q:
- SET t = p
- SET p = q
- Set q = t
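In C++ the same three steps look like this (a minimal sketch; std::swap from <utility> does the same job):

    #include <iostream>
    #include <utility>

    int main() {
        int p = 1, q = 2;

        int t = p;   // SET t = p
        p = q;       // SET p = q
        q = t;       // SET q = t
        std::cout << p << " " << q << "\n";   // prints: 2 1

        std::swap(p, q);                      // the library does the same job
        std::cout << p << " " << q << "\n";   // prints: 1 2
        return 0;
    }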
The two classic problems are called searching and sorting. In searching we are
given a collection of objects and need to find one that passes some test. For
example: finding the student whose student Id number ends "1234". In sorting
we are given a collection and need to reorganize it according to some rule. An
example: placing a grade roster in order of the last four digits of student Id numbers.
Finding the root of an equation: is this a search or a sort?
You can't find things in your library so you decide to place the books in a
particular order: is this a search or a sort?
Optimization problems are another class of problems that have attracted a lot of
attention. Here we have a large number of possibilities and want to pick the
best. For example: what dress is the most attractive in a given price range?
What is the shortest route home from the campus? What is the smallest amount of
work needed to get a B in this class?
- (linear): a linear algorithm does something, once, to every object in turn in a
collection. Examples: looking for words in the dictionary that fit into a
crossword. Adding up all the numbers in an array.
- (divide_and_conquer): the algorithms divide the problem's data into pieces and
work on the pieces. For example conquering Gaul by dividing it into three
pieces and then beating rebellious pieces into submission. Another example is
the Stroud-Warnock algorithm for displaying 3D scenes: the view is divided into four
quadrants, if a quadrant is simple, it is displayed by a simple process, but if
complex, the same Stroud-Warnock algorithm is reapplied to it. Divide-and-conquer algorithms
work best when there is a fast way to divide the problem into sub-problems that
are of about the same difficulty. Typically we aim to divide the data into
equal sized pieces. The closer that we get to this ideal the better the algorithm.
As an example, merge-sort splits an array into nearly-equal halves, sorts each of them and then
merges the two into a single one. On the other hand, Tony Hoare's Quicksort
and Treesort algorithms make a rough split into two parts that can be sorted and rapidly
joined together. On average each split is into equal halves and the algorithm
performs well. But in the worst case, QuickSort splits the data into a single
element vs all the rest, and so performs slowly. So, divide_and_conquer
algorithms are faster with precise
divisions, but can perform very badly on some data if you can not guarantee a 50-50 split.
- (binary): These are a special divide_and_conquer
algorithm where we divide the data into two equal halves.
The classic is binary search for hunting lions: divide the
area into two and pick a piece that contains a lion.... repeat. This leads to
an elegant way to find roots of equations.
ALGORITHM to find the largest integer low that is not above the square root
of an integer n >= 0 (the integer part of √n).
- SET low = 0 and high = n + 1, (now low <= √n < high).
- SET mid = (low + high) / 2, (integer division)
- IF mid * mid > n THEN SET high = mid
- ELSE SET low = mid.
- END IF (again low <= √n < high)
- IF low < high - 1 THEN REPEAT from step 2 above.
- (Greedy algorithms): try to solve problems by selecting the best piece first and
then working on the other pieces later. For example, to pack a knapsack, try
putting in the biggest objects first and add the smaller one later. Or to find
the shortest path through a maze, always take the shortest next step that you
can see. Greedy algorithms don't always produce optimal solutions, but often
give acceptable approximations to the optimal solutions.
- (Iterative algorithms): start with a value and repeatedly change it in the
direction of the solution. We get a series of approximations to the answer. The
algorithm stops when two successive values get close enough. For example:
algorithm for calculating approximate square roots.
ALGORITHM Given a positive number a and error epsilon calculate the square root of a:
- SET oldv=a
- SET newv=(1+a)/2
- WHILE | oldv - newv | > epsilon
- SET oldv =newv
- SET newv =(a+oldv * oldv)/(2*oldv)
- END WHILE
END ALGORITHM (newv is now within epsilon of the square root of a)
Exercise. Code & test the above.
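One possible answer to the exercise, written as a C++ sketch (illustrative only), coding the iterative algorithm above together with the earlier integer bisection version for comparison:

    #include <cmath>
    #include <cstdio>

    // Iterative (Newton) square root of a > 0, stopping at tolerance epsilon.
    double iterative_sqrt(double a, double epsilon) {
        double oldv = a;
        double newv = (1 + a) / 2;
        while (std::fabs(oldv - newv) > epsilon) {
            oldv = newv;
            newv = (a + oldv * oldv) / (2 * oldv);
        }
        return newv;
    }

    // Binary chop: largest integer low with low*low <= n (for n >= 0).
    long integer_sqrt(long n) {
        long low = 0, high = n + 1;          // invariant: low <= sqrt(n) < high
        while (low < high - 1) {
            long mid = (low + high) / 2;
            if (mid * mid > n) high = mid;
            else               low = mid;
        }
        return low;
    }

    int main() {
        std::printf("%f\n", iterative_sqrt(2.0, 1e-6));  // about 1.414214
        std::printf("%ld\n", integer_sqrt(10));          // 3
        return 0;
    }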
NO! Take CSCI546 to see what problems are unsolvable and why this is so.
As a quick example: No algorithm can exist for finding the bugs in any given program.
Optimization problems often prove difficult to solve: consider finding the
highest point in Seattle without a map in dense fog...
First they were expressed in natural language: Arabic, Latin, English, etc.
In the 1950s we used flow charts. These were reborn as Activity Diagrams in
the Unified Modeling Language in the 2000s.
Boehm and Jacopini proved in the 1960's that all algorithms can be constructed
using three structures:
- Sequence -- one step after another
- Selection -- if-then-else, switch-case, ...
- Iteration -- while, do-while, ...
From 1964 onward we used "Structured English" or Pseudo-code. Structured
English is English
with "Algol 60" structures. "Algol" is the name of the "Algorithmic Language"
of that decade. I have a page
[ algorithms.html ]
of algorithms written in a C++-based Pseudo-code.
Here is a sample of structured English:
clear space for unmatched socks
FOR each sock in the pile,
   FOR each unmatched sock UNTIL end or a match is found
      IF the unmatched sock matches the sock in your hand THEN
         form a pair and put it in the sock drawer
   IF sock in hand is unmatched THEN
      put it down with the unmatched socks
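Translated into a rough C++ sketch (socks are modelled as strings that match when they compare equal; the names are made up for illustration):

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> pile = {"red", "blue", "red", "black", "blue", "black"};
        std::vector<std::string> unmatched;       // the cleared space

        for (const std::string& sock : pile) {    // FOR each sock in the pile
            bool matched = false;
            for (std::size_t i = 0; i < unmatched.size(); ++i) {
                if (unmatched[i] == sock) {       // IF it matches an unmatched sock
                    std::cout << "pair of " << sock << " socks\n";
                    unmatched.erase(unmatched.begin() + i);
                    matched = true;
                    break;
                }
            }
            if (!matched)                         // IF sock in hand is unmatched
                unmatched.push_back(sock);
        }
        return 0;
    }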
Algorithms often appear inside the methods in a class. However some
algorithms are best expressed in terms of interacting objects. So,
a method may be defined in terms of a traditional algorithm or as
a system of collaborating objects.
You need an algorithm any time there is no simple sequence of steps
to code the solution to a problem inside a function.
It is wise to either write out an algorithm or use the UML to sketch out
the messages passing between the objects.
Algorithms can be encapsulated inside objects. If you
create an inheritance hierarchy, you can organize a set of
objects each knowing a different algorithm (method). You can
then let the program choose its algorithm at run time.
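A minimal C++ sketch of that idea (the class names here are made up for illustration): a base class fixes the interface, each derived class knows one algorithm, and the caller picks one at run time:

    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    class Sorter {                       // interface: "an object that knows how to sort"
    public:
        virtual ~Sorter() = default;
        virtual void sort(std::vector<int>& data) const = 0;
    };

    class SelectionSorter : public Sorter {
    public:
        void sort(std::vector<int>& data) const override {
            for (std::size_t i = 0; i + 1 < data.size(); ++i) {
                std::size_t min = i;                 // find the smallest remaining item
                for (std::size_t j = i + 1; j < data.size(); ++j)
                    if (data[j] < data[min]) min = j;
                std::swap(data[i], data[min]);       // one swap per pass
            }
        }
    };

    void run(const Sorter& s, std::vector<int>& data) {   // knows no algorithm itself
        s.sort(data);                                     // the algorithm arrives as an object
    }

    int main() {
        std::vector<int> v = {3, 1, 2};
        SelectionSorter sel;
        run(sel, v);                     // swap in a different Sorter without changing run()
        for (int x : v) std::cout << x << " ";
        std::cout << "\n";
        return 0;
    }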
Algorithms are used in all upper division computer science classes.
I write my algorithm, in my program, as a sequence of comments.
I then add code that carries out each step -- under the comment
that describes the step.
First there is no algorithm for writing algorithms! So here are some hints.
- The more algorithms you know the easier it is to pick one that fits, and the
more ideas you have to invent a new one.
Take CSci classes and read books.
- Look on the WWW or in the Library
- Try doing several simple cases by hand, and observing what you do.
- Work out a 90% correct solution, and add IF-THEN-ELSEs to fix up the special cases.
- Go to see a CSCI faculty member: this is free when you are a student.
Once in the real world you have to hire us as consultants.
- Often a good algorithm may need a special device or data structure to work.
For example, in my office I have multi-pocket folder with slots labeled with
dates. I put all the papers for a day in its slot, and when the day comes I
have all the paperwork to hand. CSCI330 teaches a lot of useful data
structures and the C++ library has a dozen or so.
- Think! This is hard work but rewarding when you succeed.
First, try doing it by hand. Second, discuss it with a colleague.
Third, try coding and running it in a program.
Fourth, go back and prove that your algorithm must work.
You should use known algorithms whenever you can and always state where they came from.
Put this citation as a comment in your code. First this is honest. Second this
makes it easier to understand what the code is all about.
Check out the text books in the Data Structures and Algorithms classes
in the upper division of our degree programs.
John Mongan and Noah Suojanen
have written "Programming Interviews exposed:
Secrets to landing your next job" (Wiley 2000, ISBN 0-471-38356-2). Most
chapters are about problem solving and the well known algorithms involved.
Donald Knuth's multi-volume "Art of Computer Programming" founded the study of
algorithms and how to code and analyze them. It remains an incredible resource
for computer professionals. All three volumes are in our library.
Jon Bentley's two books of "Programming Pearls" are lighter than Knuth's work
but have lots of good examples and advice for programmers and good discussion of
algorithms. They are in the library.
G H Gonnet and R. Baeza-Yates
wrote a very comprehensive "Handbook of
Algorithms and Data Structures in Pascal and C" (Addison-Wesley 1991)
which still my favorite resource for detailed analysis of known
algorithms. There is a copy in my office.... along with other
The Association for Computing Machinery (ACM)
started publishing algorithms in a special
supplement (CALGO) in the 1960s. These are algorithms involving numbers. So
they tend to be written in FORTRAN.
Yes -- lots. The Wikipedia, alone, has dozens of articles on
particular algorithms, on the theory of algorithms, and on classes of algorithms.
Step by step you translate each line of the algorithm into your target language.
Ideally each line in the algorithm becomes two or three lines of code. I like
to leave the steps in my algorithm visible in my code as comments.
Do this in pencil or in an editor. You will need to make changes in it!
You know that movies are rated by using stars or "thumbs-up". Rating an
algorithm is more complex, more scientific, and more mathematical.
In fact, we label algorithms with a formula, like this, for example:
The linear search algorithm is order big_O of n
or in short
Linear search is O(n)
This means: for very large values of n the search can not take more than
a linear time (plus some small ignored terms) to run.
On the other hand we can show:
A Binary search is O(log n)
The above formulas tell us that if we make n large enough then binary
search will be faster than linear search.
As another example, a linear search is in O(n) and the Sock-Sort (above)
is in O(n squared). This means that the linear search is better than
the sock_search. But in what way? This takes a little explanation.
Originally (in the 40's through to the mid-60's)
we worked out the average number of steps to solve a problem. This
tended to be correlated with the time. However we found (Circa 1970) two
problems with this measure.
First, it is often hard to calculate the averages. Knuth spends pages deriving the
average performance of Euclid's algorithm -- which is only 5 lines long!
Second, the average depends on how the data is
distributed. For example, a linear search is fast if we are likely to find the
target at the start of the search rather than at the end. Worse,
certain sorting algorithms work quickly on average but sometimes are terribly slow. For
example the beginner's Bubble Sort is fast when most of the data
is in sequence. But Quick Sort can be horribly slow on the same data.... but
for other re-orderings of the same data quick sort is much better than
bubble sort. So before you can work out an average you have to know
the frequencies with which different data appear. Typically we
don't know this distribution.
These days a computer scientist judges an algorithm by its worst case behavior.
We are pessimistic, with good reason. Many of us have implemented an algorithm
with a good average performance on random data and had users complain because
their data is not random but in the worst case possible. This happened, for
example, to Jon Bentley of AT&T [Programming Pearls above]. In other words we choose
the algorithm that gives the best guarantee of speed, not the best average
performance. It is also easier to figure out the worst case
performance than the average -- the math is easier.
The second complication for comparing algorithms is how much data to consider.
Nearly all algorithms have different times depending on the amount of data.
Typically the performance on small amounts of data is more erratic than for large
amounts of data. Further, small amounts of data don't make big delays
that the users will notice. So, it is best to consider large amounts of data.
Luckily, mathematicians have a tool kit for working with larger numbers.
It is called
This is the calculus of what functions look like for large
values of their arguments. It simplifies the calculations
because we only need to look at the higher order terms in the formula.
Finally, to get a standard "measure" of an algorithm, independent
of its hardware, we need to eliminate the speed of the processor from our comparison.
We do this by removing all constant factors out of our formula to
give the algorithm its rating.
We talk about the order of an algorithm and use a big letter O to symbolize it.
To summarize: To work out the order of an algorithm, calculate
- the number of simple steps
- in the worst case
- as a formula including the amount of data
- for only large amounts of data
- including only the most important term
- and ignoring constants
For example, when I do a sock-search, with n socks, in the worst case I would
have to lay out n/2 unmatched socks before I found my first match, and finding
the match I would have to look at all n/2 socks. Then I'd have to look
at the remaining n/2 -1 socks to match the next one, and then n/2-2, ...
So the total number of "looks" would be
1 + 2 + 3 + ... + n/2 = (1 + n/2) * (n/4) = n/4 + n^2/8
Simplifying by ignoring the n/4 term and the constant factor (1/8) we get O(n^2).
Here is a listing of typical ratings from best to worst:
Logarithmic    O(log n)        Good search algorithms
Linear         O(n)            Bad search algorithms
n log n        O(n * log n)    Good sort algorithms
n squared      O(n^2)          Simple sort algorithms
Cubic          O(n^3)          Normal matrix multiplication
Polynomial     O(n^p)          Good solutions to bad problems
Exponential    O(2^n)          Not a good solution to a bad problem
Factorial      O(n!)           Checking all permutations of n objects.
Note: I wrote n^2, n^3, n^p, 2^n, etc. to indicate powers/superscripts.
p is some power > 1.
Here is a picture graphing some typical Os:
Notice how the worst functions may start out less than the better
ones, but that they always end up being bigger.
Most algorithms for simple problems have a worst case times that are a power of n
times a power of log n. The lower the powers, the better the algorithm is.
There exist thousands of problems where the best solutions we have found,
however, are exponential -- O(2^n)! Why we have failed to improve on this is one of the
big puzzles of Computer Science.
Please read and study this page:
[ 000957.html ]
A formula like O(n^2.7) names a large family of similar functions that are
smaller than the formula for very large n (if we ignore the scale). n and
n*n are both in O(n^2.7). n^3 and exp(n) are not in O(n^2.7). Now, a clever
divide and conquer matrix multiplication algorithm is O(n^2.7) and so better
than the simple O(n^3) one for large matrices.
Big_O (asymptotic) formulas are simpler and easier to work with than exact timing formulas
because we can ignore so much: instead of 2^n +n^2.7+123.4n we write
2^n or exponential.
Timing formula are expressed in terms of the size n of the data. To simplify
the formulas we remove all constant factors: 123*n*n is replaced by n*n.
We also ignore the lower order terms: n*n + n + 5 becomes n*n. So
123*n^2 +200*n+5 is in O(n^2).
To be very precise and formal, here is the classic text book definition:
f(n) is in O( g(n) ) iff for some constant c>0, some number n0, and all n > n0 ( f(n) <= c * g(n) ).
This means that to show that f is in O(g) then you have to find a constant
multiplier c and an number n0 so that for n bigger than n0, f(n) is less than or equal to c*g(n).
For example 123n is in O(n^2) because for all n> 123, 123n <= n^2.
So, by choosing
n0=123 and c=1 we have 123n <= 1 * n^2.
We say f and g are asymptotically equivalent
if and only if both (1) f(n) is in O(g(n)) and (2) g(n) is in O(f(n)).
So n^2-3 is asymptotically equivalent to 1+2n+123n^2.
There is another way to look at the ordering of these functions. We can look at
the limit L of the ratio f(n)/g(n) of the two functions for large values of n:
- If L=0 then f(n) in O(g(n)).
- If L=∞ then g(n) in O(f(n)).
- If L is a finite non-zero constant then f(n) is asymptotically equivalent to g(n).
In CSCI202 I expect you to take these facts on trust. Proofs will follow in
the upper division courses.
Exercise: Reduce each formula to one of the classic "big_O"s listed.
- log(n) + 3n
- 200 n + n * log(n).
- n log(n)
Classes like CSCI431 and MATH372. Or hit the stacks and Wikipedia.
There are everyday problems that force you to find needles in haystacks.
Problems like this force a computer to do a lot of work to solve them. You
need to learn to spot these, and try to avoid them if possible.
We normally consider polynomial algorithms as efficient and non-polynomial
ones as hard. Computer scientists have discovered a large class of problems
that don't seem to have polynomial solutions, even though we can check the
correctness of the answer efficiently. A common example is to find the
shortest route that visits every city in a country in the shortest time.
This is the famous Traveling Salesperson's Problem. Warning: you can waste
a lot of time trying to find an efficient solution to this problem.
Each discipline (Mathematics, Physics, . . . ) has its own "Classic" algorithms. In
computer science the most famous algorithms fall into two types: sorting
and searching. Other classes of algorithm include those involved in
graphs, optimization, graphics, compiling, operating systems, etc.
Here is my personal selection and classification of Searching and Sorting
We give searching algorithms a collection of data and a key. Their goal is to find an item in the
collection that matches the key. The algorithms that work best depend on the structure of the given
- (Direct Access): The algorithm calculate the address of the data from a given data value. Example:
arrays. Example: getting direct access data from a disk. This avoids the need to look for the data! O(1).
- (Linear Search): The algorithm tries each item in turn until the key matches it. Finds unique items
or can create a set of matching items. O(n).
- (Binary Search): The collection of data must be sorted. Look at the middle one; if it is too big, try the
first half, but if it is too small, try the other half. Repeat. O(log n). (A C++ sketch appears after this list.)
- (Hashing): The algorithm calculates a value from the key that gives the address of a data structure that
has many items of data in it. Then it searches the data structure. Works well (O(1) ) when you have at
least twice as much storage as the data and the rate of increase is small. For very large n, O(log n).
- (Indexes): An Index is a special additional data structure that records where each key value can be
found in the main data structure. It takes space and time to maintain but lets you retrieve the data faster.
Indexes may be direct or need searching. If the index is optimal, the time is O(√ n).
- (Trees): These are special data structures that can speed up searches. A tree has nodes. Each node
has an item of data. Some nodes are leaves, and the rest have branches leading to another tree. The
algorithm chooses one branch to follow to find the data. If the tree is balanced (all branches have nearly
the same length) this gives O(log n) time. If the tree is not balanced, the worst case is O(n). There exist
special forms that use O(log n) to maintain O(log n) search.
- (Data Bases): These are large, complex data structures stored on disk. They have a complex and rigid
structure that takes careful planning. They use all the searching and sorting algorithms and data
structures to help users store and retrieve the data they want. Take CSCI350 and CSCI580? to learn more.
- Combinations: you can design data so that two or three different searches
are used to find data. For example: A hash code gives you the starting
point for a linear search. Direct access to an index gives you the address
that directs you to the block of disk storage that contains the data, in
this block is another index that has the number of the data record, and the
actual byte is then calculated directly.
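To make the linear and binary searches concrete, here is a small C++ sketch; the array, key and function names are my own illustrative choices, not part of these notes.

#include <iostream>

// Linear search: try each item in turn. O(n).
int linearSearch(const int data[], int n, int key) {
    for (int i = 0; i < n; ++i)
        if (data[i] == key) return i;   // found: return its position
    return -1;                          // not found
}

// Binary search: the data must already be sorted. Look at the middle item,
// then repeat on the half that could still hold the key. O(log n).
int binarySearch(const int data[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (data[mid] == key) return mid;
        if (data[mid] < key) low = mid + 1;   // key must be in the upper half
        else high = mid - 1;                  // key must be in the lower half
    }
    return -1;
}

int main() {
    int a[] = {2, 3, 5, 7, 11, 13};           // already sorted
    std::cout << linearSearch(a, 6, 7) << " " << binarySearch(a, 6, 7) << "\n"; // both print 3
    return 0;
}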
We give a sorting algorithm a collection of items. It changes the collection so that the data items are (in
some sense or other) increasing. The algorithm needs a comparison operation that compares two items
and tells whether one is smaller than the other. An example for numbers is "a < b" but in general we
might have to create a Boolean function "compare" to do the comparison.
- (Slow Sort): Throw all the data in the air . . . If it comes down in the right order, stop, else repeat.
O(n!). Example of a very bad algorithm.
- (Bubble Sort): Scan the data from first to last, if two adjacent items are out of order, swap them. Repeat
until a pass makes no swaps. The beginner's favorite sort. Works OK if the data is almost in the right order.
There were many clever optimizations that often slowed this algorithm down! O(n*n)
- (Cocktail shaker Sort): a variation on bubble sort.
- (Insertion Sort): Like a Bridge player sorts a hand of cards. Mentally split the hand into a sorted and
unsorted part. Take each unsorted item and search the sorted items to see where it fits. O(n^2). It is
simple enough that for small amounts of data (n: 1..10) it is fast enough. (A sketch appears after this list.)
- (Selection Sort): Find the maximum item and swap it with the last item. Repeat with all but the last item.
Gives a fixed number of swaps: it always executes n swaps. Easy to understand. Time O(n^2).
- (Shell Sort): Mr. Shell improved on bubble sort by swapping items that are
not adjacent. Complex and clever. Basic idea: For a sequence of
decreasing ps, take every p'th item starting with the first and sort
them, next every p'th item starting with the second, repeat with
3rd..(p-1)th. Then decrease p and do it again. Different speeds for
different sequences of p's. Average speed O(n^p) where 1<p<=2. The
divide-and-conquer algorithms (below) are better.
- (Quick Sort): Tony Hoare's clever idea. Partition the data in two so that every item in one part is
less than any item in the other part. Sort each part (recursively) . . . Good performance on random
data: O(n log n) but has a bad O(n^2) worst case performance.
- (Merge Sort): Divide the data into two equally sized halves. Sort each half. Merge the two halves. Good
performance: worst case is O(n log n). However, needs clever programming and extra storage to handle
the merge. For small amounts of data, this tends to be slower than other algorithms because it copies
data into spare storage and then merges it back where it belongs.
- (Heap Sort): Uses Robert Floyd's clever data structure called a heap. A
heap is a binary tree structure (each node has two children) that has the
big items on top of the small ones (parents > children), and is always
balanced (all branches have nearly equal lengths). It is stored without
pointers. Instead the data is in an array and node a is the parent of
2*a and 2*a+1. Floyd worked out a clever way to insert new data in the
array so that it remains a heap in O(log n). He also found a way to remove
the biggest items and heapify the rest in O(log n). By first inserting all
n items into a heap (O(n log n)) and then extracting them (O(n log n)),
top down, we get a worst-case and average-case O(n log n) algorithm. However, on random data, Quick sort
will often be faster.
- (Radix Sort): The data needs to be expressed as a decimal number or character string. First sort with
respect to the first digit/character. Then sort each part by the second digit/character. This is a neat
algorithm for short keys. But because the size of the key is O(log n), the time is O(n log n).
- Combinations: We often combine different algorithms to suit a particular
circumstance. For example, when I have to manually sort 20 or more pieces
of work (2 or more times a week...), I don't have room to handle the recursion
needed for Merge or Quick sort, or table space for a heap. So I take
each 10 pieces of work and sort using insertion sort, and then merge
the resulting sorted stacks. But sometimes I use a manual sort
based on techniques developed for sorting magnetic tape data, and now
obsolete. Here you take the items and place them into sorted
runs by adding them on the top or bottom of sorted piles... and then
merge the result. You might call this a "deque sort"
because I use a Double-Ended Queue to hold the runs. This is just
a curiosity and not a famous piece of computer science.
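Here is a short C++ sketch of insertion sort, the simplest of the sorts above that is genuinely useful for small n; the array and names are my own illustrative choices.

#include <iostream>

// Insertion sort: keep the front of the array sorted and insert
// each remaining item where it fits. O(n^2), fine for small n.
void insertionSort(int data[], int n) {
    for (int i = 1; i < n; ++i) {
        int item = data[i];
        int j = i - 1;
        while (j >= 0 && data[j] > item) { // shift larger items one place right
            data[j + 1] = data[j];
            --j;
        }
        data[j + 1] = item;                // drop the item into its place
    }
}

int main() {
    int a[] = {5, 2, 9, 1, 3};
    insertionSort(a, 5);
    for (int x : a) std::cout << x << " "; // prints 1 2 3 5 9
    std::cout << "\n";
    return 0;
}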
. . . . . . . . . ( end of section What are the important algorithms of Computer Science) <<Contents | End>>
Go back to the start of this document and look at the list of questions ...
try to write down, from memory, a short, personal answer to each one.
- Define what an algorithm is.
- What is a searching algorithm?
- Name two algorithms often used to search for things.
If you have a choice, which is the faster of your two searching algorithms?
- What is a sorting algorithm?
- Name four algorithms often used for sorting.
If you have a large number of random items which of these
sorting algorithms is likely to be fastest?
- Find the Big_O of 2000+300*n+2*n^2.
- What is the Big_O worst time with n items for a linear search, binary search,
bubble sort, and merge sort?
- Name a sorting algorithm that has a good average behavior on random data
but a slow behavior on some data.
. . . . . . . . . ( end of section Review Questions) <<Contents | End>>
. . . . . . . . . ( end of section Algorithms) <<Contents | End>>
Dr. Botting wants to acknowledge the excellent
help and advice given by Dr. Zemoudeh
on this document. Whatever errors remain are Dr. Botting's mistakes.
accessor::="A Function that accesses information in an object without changing the object in any visible way".
In C++ this is called a "const function".
In the UML it is called a query.
Algorithm::=A precise description of a series of steps to attain a goal,
[ Algorithm ]
class::="A description of a set of similar objects that have similar data plus the functions needed to manipulate the data".
constructor::="A Function in a class that creates new objects in the class".
Data_Structure::=A small data base.
destructor::="A Function that is called when an object is destroyed".
Function::programming=A self-contained and named piece of program that knows how to do something.
Gnu::="Gnu's Not Unix", a long running open source project that supplies a
very popular and free C++ compiler.
mutator::="A Function that changes an object".
object::="A little bit of knowledge -- some data and some know how". An
object is instance of a class.
objects::=plural of object.
Current paradigm for programming.
Semantics::=Rules determining the meaning of correct statements in a language.
a previous paradigm for programming.
STL::="The standard C++ library of classes and functions" -- also called the
"Standard Template Library" because many of the classes and functions will work
with any kind of data.
Syntax::=The rules determining the correctness and structure of statements in a language, grammar.
Q::software="A program I wrote to make software easier to develop",
TBA::="To Be Announced", something I should do.
TBD::="To Be Done", something you have to do.
UML::="Unified Modeling Language".
void::C++Keyword="Indicates a function that has no return".
In mathematics, the concept of sign originates from the property of every non-zero real number to be positive or negative. Zero itself is signless, although in some contexts it makes sense to consider a signed zero. Along with its application to real numbers, "change of sign" is used throughout mathematics and physics to denote the additive inverse (multiplication by −1), even for quantities which are not real numbers (so, which are not prescribed to be either positive, negative, or zero). Also, the word "sign" can indicate aspects of mathematical objects that resemble positivity and negativity, such as the sign of a permutation (see below).
Sign of a number
A real number is said to be positive if it is greater than zero, and negative if it is less than zero. The attribute of being positive or negative is called the sign of the number. Zero itself is not considered to have a sign. Also, signs are not defined for complex numbers, although the argument generalizes it in some sense.
In common numeral notation (which is used in arithmetic and elsewhere), the sign of a number is often denoted by placing a plus sign or a minus sign before the number. For example, +3 would denote a positive 3, and −3 would denote a negative 3. When no plus or minus sign is given, the default interpretation is that a number is positive. Because of this notation, as well as the definition of negative numbers through subtraction, the minus sign is perceived to have a strong association with negative numbers (of the negative sign). Likewise, "+" associates with positivity.
In algebra, a minus sign is usually thought of as representing the operation of additive inverse (sometimes called negation), with the additive inverse of a positive number being negative and the additive inverse of a negative number being positive. In this context, it makes sense to write −(−3) = +3.
Any non-zero number can be changed to a positive one using the absolute value function. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3. In symbols, this would be written |−3| = 3 and |3| = 3.
Sign of zero
The number zero is neither positive nor negative, and therefore has no sign. In arithmetic, +0 and −0 both denote the same number 0, which is the additive inverse of itself.
In some contexts, such as signed number representations in computing, it makes sense to consider signed versions of zero, with positive zero and negative zero being different numbers (see signed zero).
One also sees +0 and −0 in calculus and mathematical analysis when evaluating one-sided limits. This notation refers to the behaviour of a function as the input variable approaches 0 from positive or negative values respectively; these behaviours are not necessarily the same.
Terminology for signs
Because zero is neither positive nor negative, the following phrases are sometimes used to refer to the sign of an unknown number:
- A number is positive if it is greater than zero.
- A number is negative if it is less than zero.
- A number is non-negative if it is greater than or equal to zero.
- A number is non-positive if it is less than or equal to zero.
Thus a non-negative number is either positive or zero, while a non-positive number is either negative or zero. For example, the absolute value of a real number is always non-negative, but is not necessarily positive.
The same terminology is sometimes used for functions that take real or integer values. For example, a function would be called positive if all of its values are positive, or non-negative if all of its values are non-negative.
Sign convention
In many contexts the choice of sign convention (which range of values is considered positive and which negative) is natural, whereas in others the choice is arbitrary subject only to consistency, the latter necessitating an explicit sign convention.
Sign function
The sign function or signum function is sometimes used to extract the sign of a number. This function is usually defined as follows: sgn(x) = −1 if x < 0, sgn(x) = 0 if x = 0, and sgn(x) = 1 if x > 0.
Thus sgn(x) is 1 when x is positive, and sgn(x) is −1 when x is negative. For nonzero values of x, this function can also be defined by the formula
sgn(x) = x / |x| = |x| / x,
where |x| is the absolute value of x.
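The same definition can be written directly in code; here is a minimal C++ sketch, where the function name sgn and the choice of double and int types are illustrative assumptions:

#include <iostream>

// Returns -1, 0, or +1 according to the sign of x.
int sgn(double x) {
    if (x < 0) return -1;
    if (x > 0) return 1;
    return 0;
}

int main() {
    std::cout << sgn(-3.5) << " " << sgn(0.0) << " " << sgn(2.0) << "\n"; // prints -1 0 1
    return 0;
}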
Meanings of sign
Sign of an angle
In many contexts, it is common to associate a sign with the measure of an angle, particularly an oriented angle or an angle of rotation. In such a situation, the sign indicates whether the angle is in the clockwise or counterclockwise direction. Though different conventions can be used, it is common in mathematics to have counterclockwise angles count as positive, and clockwise angles count as negative.
It is also possible to associate a sign to an angle of rotation in three dimensions, assuming the axis of rotation has been oriented. Specifically, a right-handed rotation around an oriented axis typically counts as positive, while a left-handed rotation counts as negative.
Sign of a change
When a quantity x changes over time, the change in the value of x is typically defined by the equation Δx = x(final) − x(initial).
Using this convention, an increase in x counts as positive change, while a decrease of x counts as negative change. In calculus, this same convention is used in the definition of the derivative. As a result, any increasing function has positive derivative, while a decreasing function has negative derivative.
Sign of a direction
In analytic geometry and physics, it is common to label certain directions as positive or negative. For a basic example, the number line is usually drawn with positive numbers to the right, and negative numbers to the left.
On the Cartesian plane, the rightward and upward directions are usually thought of as positive, with rightward being the positive x-direction, and upward being the positive y-direction. If a displacement or velocity vector is separated into its vector components, then the horizontal part will be positive for motion to the right and negative for motion to the left, while the vertical part will be positive for motion upward and negative for motion downward.
Signedness in computing
In computing, a numeric value may be either signed or unsigned, depending on whether the computer is keeping track of a sign for the number. By restricting a variable to non-negative values only, one more bit can be used for storing the value of a number.
Because of the way arithmetic is done within computers, the sign of a signed variable is usually not stored as a single independent bit, but is instead stored using two's complement or some other signed number representation.
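A small C++ illustration of this point (the variable names and the 8-bit types are illustrative choices): the same two's-complement bit pattern reads differently depending on whether the variable is signed or unsigned.

#include <cstdint>
#include <iostream>

int main() {
    std::int8_t s = -1;                            // signed 8-bit value; two's-complement bits 11111111
    std::uint8_t u = static_cast<std::uint8_t>(s); // the same bits reinterpreted as unsigned
    std::cout << static_cast<int>(s) << "\n";      // prints -1
    std::cout << static_cast<int>(u) << "\n";      // prints 255
    return 0;
}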
Other meanings
In addition to the sign of a real number, the word sign is also used in various related ways throughout mathematics and the sciences:
- The words "up to sign" mean that for a quantity q it is known that either q = Q or q = −Q for a certain Q. This is often expressed as q = ±Q. For real numbers, it means that only the absolute value |q| of the quantity is known. For complex numbers and vectors, a quantity known up to sign is a stronger condition than a quantity with known magnitude: aside from Q and −Q, there are many other possible values of q such that |q| = |Q|.
- The sign of a permutation is defined to be positive if the permutation is even, and negative if the permutation is odd.
- In graph theory, a signed graph is a graph in which each edge has been marked with a positive or negative sign.
- In mathematical analysis, a signed measure is a generalization of the concept of measure in which the measure of a set may have positive or negative values.
- In a signed-digit representation, each digit of a number may have a positive or negative sign.
- The ideas of signed area and signed volume are sometimes used when it is convenient for certain areas or volumes to count as negative. This is particularly true in the theory of determinants.
- In physics, any electric charge comes with a sign, either positive or negative. By convention, a positive charge is a charge with the same sign as that of a proton, and a negative charge is a charge with the same sign as that of an electron.
In the following sections, we'll examine the standard library operations used to create and manipulate strings.
The simplest form of declaration for a string simply names a new variable, or names a variable along with the initial value for the string. This form was used extensively in the example graph program given in Section 9.3.2. A copy constructor also permits a string to be declared that takes its value from a previously defined string.
string s1; string s2 ("a string"); string s3 = "initial value"; string s4 (s3);
In these simple cases the capacity is initially exactly the same as the number of characters being stored. Alternative constructors let you explicitly set the initial capacity. Yet another form allows you to set the capacity and initialize the string with repeated copies of a single character value.
string s6 ("small value", 100);// holds 11 values, can hold 100 string s7 (10, '\n'); // holds ten newline charactersInitializing from Iterators
Finally, like all the container classes in the standard library, a string can be initialized using a pair of iterators. The sequence being denoted by the iterators must have the appropriate type of elements.
string s8 (aList.begin(), aList.end());
As with the vector data type, the current size of a string is yielded by the size() member function, while the current capacity is returned by capacity(). The latter can be changed by a call on the reserve() member function, which (if necessary) adjusts the capacity so that the string can hold at least as many elements as specified by the argument. The member function max_size() returns the maximum string size that can be allocated. Usually this value is limited only by the amount of available memory.
cout << s6.size() << endl;
cout << s6.capacity() << endl;
s6.reserve(200); // change capacity to 200
cout << s6.capacity() << endl;
cout << s6.max_size() << endl;
The member function length() is simply a synonym for size(). The member function resize() changes the size of a string, either truncating characters from the end or inserting new characters. The optional second argument for resize() can be used to specify the character inserted into the newly created character positions.
s7.resize(15, '\t'); // add tab characters at end
cout << s7.length() << endl; // size should now be 15
The member function empty() returns true if the string contains no characters, and is generally faster than testing the length against a zero constant.
if (s7.empty()) cout << "string is empty" << endl;
A string variable can be assigned the value of either another string, a literal C-style character array, or an individual character.
s1 = s2; s2 = "a new value"; s3 = 'x';
The operator += can also be used with any of these three forms of argument, and specifies that the value on the right hand side should be appended to the end of the current string value.
s3 += "yz"; // s3 is now xyz
The more general assign() and append() member functions let you specify a subset of the right hand side to be assigned to or appended to the receiver: two arguments, pos and n, indicate that the n values following position pos should be assigned/appended.
s4.assign (s2, 0, 3); // assign first three characters
s4.append (s5, 2, 3); // append characters 2, 3 and 4
The addition operator + is used to form the catenation of two strings. The + operator creates a copy of the left argument, then appends the right argument to this value.
cout << (s2 + s3) << endl; // output catenation of s2 and s3
As with all the containers in the standard library, the contents of two strings can be exchanged using the swap() member function.
s5.swap (s4); // exchange s4 and s5
An individual character from a string can be accessed or assigned using the subscript operator. The member function at() is almost a synonym for this operation except an out_of_range exception will be thrown if the requested location is greater than or equal to size().
cout << s4[2] << endl; // output position 2 of s4
s4[2] = 'x'; // change position 2
cout << s4.at(2) << endl; // output updated value
The member function c_str() returns a pointer to a null terminated character array, whose elements are the same as those contained in the string. This lets you use strings with functions that require a pointer to a conventional C-style character array. The resulting pointer is declared as constant, which means that you cannot use c_str() to modify the string. In addition, the value returned by c_str() might not be valid after any operation that may cause reallocation (such as append() or insert()). The member function data() returns a pointer to the underlying character buffer.
char d[100]; // the array size 100 is illustrative; it must be large enough for s4 and its terminating null
strcpy(d, s4.c_str()); // copy s4 into array d
The member functions begin() and end() return beginning and ending random-access iterators for the string. The values denoted by the iterators will be individual string elements. The functions rbegin() and rend() return backwards iterators.
Invalidating Iterators
The string member functions insert() and erase() are similar to the vector functions insert() and erase(). Like the vector versions, they can take iterators as arguments, and specify the insertion or removal of the ranges specified by the arguments. The function replace() is a combination of erase and insert, in effect replacing the specified range with new values.
s2.insert(s2.begin()+2, aList.begin(), aList.end()); s2.erase(s2.begin()+3, s2.begin()+5); s2.replace(s2.begin()+3, s2.begin()+6, s3.begin(), s3.end());
In addition, the functions also have non-iterator implementations. The insert() member function takes as argument a position and a string, and inserts the string into the given position. The erase function takes two integer arguments, a position and a length, and removes the characters specified. And the replace function takes two similar integer arguments as well as a string and an optional length, and replaces the indicated range with the string (or an initial portion of a string, if the length has been explicitly specified).
s3.insert (3, "abc"); // insert abc after position 3
s3.erase (4, 2); // remove positions 4 and 5
s3.replace (4, 2, "pqr"); // replace positions 4 and 5 with pqr
The member function copy() generates a substring then assigns this substring to the char* target given as the first argument. The range of values for the substring is specified either by an initial position, or a position and a length.
s3.copy (s4, 2); // assign to s4 positions 2 to end of s3
s5.copy (s4, 2, 3); // assign to s4 positions 2 to 4 of s5
The member function substr() returns a string that represents a portion of the current string. The range is specified by either an initial position, or a position and a length.
cout << s4.substr(3) << endl; // output 3 to end
cout << s4.substr(3, 2) << endl; // output positions 3 and 4
The member function compare() is used to perform a lexical comparison between the receiver and an argument string. Optional arguments permit the specification of a different starting position or a starting position and length of the argument string. See Section 13.6.5 for a description of lexical ordering. The function returns a negative value if the receiver is lexicographically smaller than the argument, a zero value if they are equal and a positive value if the receiver is larger than the argument.
The relational and equality operators (<, <=, ==, !=, >= and >) are all defined using the comparison member function. Comparisons can be made either between two strings, or between strings and ordinary C-style character literals.
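For example, in the style of the fragments above (the strings t1 and t2 are new names introduced only for illustration):

string t1 ("apple");
string t2 ("banana");
if (t1 < t2)                      // lexical comparison with the < operator
    cout << t1 << " is less than " << t2 << endl;
if (t1.compare(t2) < 0)           // the same test written with compare()
    cout << "compare() returns a negative value" << endl;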
The member function find() determines the first occurrence of the argument string in the current string. An optional integer argument lets you specify the starting position for the search. (Remember that string index positions begin at zero.) If the function can locate such a match, it returns the starting index of the match in the current string. Otherwise, it returns a value out of the range of the set of legal subscripts for the string. The function rfind() is similar, but scans the string from the end, moving backwards.
s1 = "mississippi"; cout << s1.find("ss") << endl; // returns 2 cout << s1.find("ss", 3) << endl; // returns 5 cout << s1.rfind("ss") << endl; // returns 5 cout << s1.rfind("ss", 4) << endl; // returns 2
The functions find_first_of(), find_last_of(), find_first_not_of(), and find_last_not_of() treat the argument string as a set of characters. As with many of the other functions, one or two optional integer arguments can be used to specify a subset of the current string. These functions find the first (or last) character that is either present (or absent) from the argument set. The position of the given character, if located, is returned. If no such character exists then a value out of the range of any legal subscript is returned.
i = s2.find_first_of ("aeiou"); // find first vowel
j = s2.find_first_not_of ("aeiou", i); // next non-vowel
©Copyright 1996, Rogue Wave Software, Inc.
In classical mechanics, a harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force F proportional to the displacement x, F = −kx, where k is a positive constant.
If F is the only force acting on the system, the system is called a simple harmonic oscillator, and it undergoes simple harmonic motion: sinusoidal oscillations about the equilibrium point, with a constant amplitude and a constant frequency (which does not depend on the amplitude).
If a frictional force (damping) proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can:
- Oscillate with a frequency smaller than in the non-damped case, and an amplitude decreasing with time (underdamped oscillator).
- Decay to the equilibrium position, without oscillations (overdamped oscillator).
The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called "critically damped."
If an external time dependent force is present, the harmonic oscillator is described as a driven oscillator.
Mechanical examples include pendula (with small angles of displacement), masses connected to springs, and acoustical systems. Other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is very important in physics, because any mass subject to a force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many manmade devices, such as clocks and radio circuits. They are the source of virtually all sinusoidal vibrations and waves.
Simple harmonic oscillator
A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x=0 and depends only on the mass's position x and a constant k. Balance of forces (Newton's second law) for the system is
F = ma = m d^2x/dt^2 = −kx.
Solving this differential equation, we find that the motion is described by the function
x(t) = A cos(ωt + φ), with ω = √(k/m).
The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude, A. In addition to its amplitude, the motion of a simple harmonic oscillator is characterized by its period T, the time for a single oscillation or its frequency f = 1⁄T, the number of cycles per unit time. The position at a given time t also depends on the phase, φ, which determines the starting point on the sine wave. The period and frequency are determined by the size of the mass m and the force constant k, while the amplitude and phase are determined by the starting position and velocity.
The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position but with shifted phases. The velocity is maximum for zero displacement, while the acceleration is in the opposite direction as the displacement.
The potential energy stored in a simple harmonic oscillator at position x is U = (1/2) k x^2.
Damped harmonic oscillator
In real oscillators, friction, or damping, slows the motion of the system. Due to the frictional force, the velocity decreases in proportion to the acting frictional force. Whereas simple harmonic motion oscillates with only the restoring force acting on the system, damped harmonic motion also experiences friction. In many vibrating systems the frictional force Ff can be modeled as being proportional to the velocity v of the object: Ff = −cv, where c is called the viscous damping coefficient.
Balance of forces (Newton's second law) for damped harmonic oscillators is then
F = −kx − c dx/dt = m d^2x/dt^2.
This is rewritten into the form
d^2x/dt^2 + 2ζω0 dx/dt + ω0^2 x = 0, where
- ω0 = √(k/m) is called the 'undamped angular frequency of the oscillator' and
- ζ = c / (2√(mk)) is called the 'damping ratio'.
The value of the damping ratio ζ critically determines the behavior of the system. A damped harmonic oscillator can be:
- Overdamped (ζ > 1): The system returns (exponentially decays) to steady state without oscillating. Larger values of the damping ratio ζ return to equilibrium slower.
- Critically damped (ζ = 1): The system returns to steady state as quickly as possible without oscillating. This is often desired for the damping of systems such as doors.
- Underdamped (ζ < 1): The system oscillates (with a slightly different frequency than the undamped case) with the amplitude gradually decreasing to zero. The angular frequency of the underdamped harmonic oscillator is given by ω1 = ω0 √(1 − ζ^2).
The Q factor of a damped oscillator is defined as Q = 2π × (energy stored) / (energy lost per cycle).
Q is related to the damping ratio by the equation Q = 1 / (2ζ).
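To see these behaviors numerically, here is a small C++ sketch that integrates the damped equation d^2x/dt^2 + 2ζω0 dx/dt + ω0^2 x = 0 with a simple semi-implicit Euler step; the parameter values, the step size and the integration scheme are my own illustrative choices.

#include <cstdio>

int main() {
    const double omega0 = 1.0;  // undamped angular frequency (chosen for illustration)
    const double zeta   = 0.1;  // damping ratio; underdamped, since zeta < 1
    const double dt     = 0.001;
    double x = 1.0, v = 0.0;    // initial displacement and velocity
    for (int i = 0; i < 20000; ++i) {
        double a = -2.0 * zeta * omega0 * v - omega0 * omega0 * x; // acceleration from the damped equation
        v += a * dt;            // semi-implicit Euler step
        x += v * dt;
        if (i % 2000 == 0)
            std::printf("t=%6.2f  x=%+.4f\n", i * dt, x); // prints a slowly decaying oscillation
    }
    return 0;
}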
Driven harmonic oscillators
Driven harmonic oscillators are damped oscillators further affected by an externally applied force F(t).
Newton's second law takes the form
F(t) − kx − c dx/dt = m d^2x/dt^2.
It is usually rewritten into the form
d^2x/dt^2 + 2ζω0 dx/dt + ω0^2 x = F(t) / m.
This equation can be solved exactly for any driving force using the solutions z(t) to the unforced equation, which satisfy
d^2z/dt^2 + 2ζω0 dz/dt + ω0^2 z = 0,
and which can be expressed as damped sinusoidal oscillations,
z(t) = A e^(−ζω0 t) sin(√(1 − ζ^2) ω0 t + φ),
in the case where ζ ≤ 1. The amplitude A and phase φ determine the behavior needed to match the initial conditions.
Step input
In the case ζ < 1 and a unit step input with x(0) = 0,
the solution is
x(t) = 1 − e^(−ζω0 t) sin(√(1 − ζ^2) ω0 t + φ) / sin(φ),
with phase φ given by cos φ = ζ.
The time an oscillator needs to adapt to changed external conditions is of the order τ = 1/(ζω0). In physics, the adaptation is called relaxation, and τ is called the relaxation time.
In electrical engineering, a multiple of τ is called the settling time, i.e. the time necessary to ensure the signal is within a fixed departure from final value, typically within 10%. The term overshoot refers to the extent the maximum response exceeds final value, and undershoot refers to the extent the response falls below final value for times following the maximum response.
Sinusoidal driving force
In the case of a sinusoidal driving force:
d^2x/dt^2 + 2ζω0 dx/dt + ω0^2 x = (1/m) F0 sin(ωt),
where F0 is the driving amplitude and ω is the driving frequency for a sinusoidal driving mechanism. This type of system appears in AC driven RLC circuits (resistor-inductor-capacitor) and driven spring systems having internal mechanical resistance or external air resistance.
The general solution is a sum of a transient solution that depends on initial conditions, and a steady state that is independent of initial conditions and depends only on the driving amplitude F0, the driving frequency ω, the undamped angular frequency ω0, and the damping ratio ζ.
The steady-state solution is proportional to the driving force with an induced phase change of φ:
x(t) = [F0/m] / √((ω0^2 − ω^2)^2 + (2ζω0ω)^2) × sin(ωt + φ), where
φ = arctan( 2ζω0ω / (ω^2 − ω0^2) )
is the phase of the oscillation relative to the driving force, if the arctan value is taken to be between -180 degrees and 0 (that is, it represents a phase lag, for both positive and negative values of the arctan's argument).
For a particular driving frequency called the resonance, or resonant frequency ωr = ω0 √(1 − 2ζ^2), the amplitude (for a given F0) is maximum. This resonance effect only occurs when ζ < 1/√2, i.e. for significantly underdamped systems. For strongly underdamped systems the value of the amplitude can become quite large near the resonance frequency.
The transient solutions are the same as the unforced (F0 = 0) damped harmonic oscillator and represent the system's response to other events that occurred previously. The transient solutions typically die out rapidly enough that they can be ignored.
Parametric oscillators
A parametric oscillator is a harmonic oscillator whose parameters oscillate in time. A familiar example of both parametric and driven oscillation is playing on a swing. Rocking back and forth pumps the swing as a driven harmonic oscillator, but once moving, the swing can also be parametrically driven by alternately standing and squatting at key points in the swing. The varying of the parameters drives the system. Examples of parameters that may be varied are its resonance frequency and damping.
Parametric oscillators are used in many applications. The classical varactor parametric oscillator oscillates when the diode's capacitance is varied periodically. The circuit that varies the diode's capacitance is called the "pump" or "driver". In microwave electronics, waveguide/YAG based parametric oscillators operate in the same fashion. The designer varies a parameter periodically to induce oscillations.
Parametric oscillators have been developed as low-noise amplifiers, especially in the radio and microwave frequency range. Thermal noise is minimal, since a reactance (not a resistance) is varied. Another common use is frequency conversion, e.g., conversion from audio to radio frequencies. For example, the Optical parametric oscillator converts an input laser wave into two output waves of lower frequency.
Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at one of its resonant frequencies. Parametric excitation differs from forcing, since the action appears as a time varying modification on a system parameter. This effect is different from regular resonance because it exhibits the instability phenomenon.
Universal oscillator equation
The equation
d^2q/dτ^2 + 2ζ dq/dτ + q = 0
is known as the universal oscillator equation, since all second order linear oscillatory systems can be reduced to this form. This is done through nondimensionalization.
If the forcing function is f(t) = cos(ωt) = cos(ω tc τ) = cos(ω̃τ), where ω̃ = ω tc, the equation becomes
d^2q/dτ^2 + 2ζ dq/dτ + q = cos(ω̃τ).
The solution to this differential equation contains two parts, the "transient" and the "steady state".
Transient solution
The solution based on solving the ordinary differential equation is for arbitrary constants c1 and c2
The transient solution is independent of the forcing function.
Steady-state solution
Supposing the solution is of the form
Its derivatives from zero to 2nd order are
Substituting these quantities into the differential equation gives
Dividing by the exponential term on the left results in
Equating the real and imaginary parts results in two independent equations
Amplitude part
Squaring both equations and adding them together gives
Compare this result with the theory section on resonance, as well as the "magnitude part" of the RLC circuit. This amplitude function is particularly important in the analysis and understanding of the frequency response of second-order systems.
Phase part
To solve for φ, divide both equations to get
This phase function is particularly important in the analysis and understanding of the frequency response of second-order systems.
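For reference, here is a reconstruction of the standard result for the steady state of the universal oscillator equation, written in LaTeX with \tilde{\omega} standing for the scaled driving frequency ω̃ used above; treat the exact notation as my own choice:

A(\tilde{\omega}) = \frac{1}{\sqrt{\left(1-\tilde{\omega}^2\right)^2 + \left(2\zeta\tilde{\omega}\right)^2}},
\qquad
\varphi(\tilde{\omega}) = \arctan\left(\frac{2\zeta\tilde{\omega}}{\tilde{\omega}^2 - 1}\right).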
Full solution
Combining the amplitude and phase portions results in the steady-state solution
The solution of the original universal oscillator equation is a superposition (sum) of the transient and steady-state solutions.
For a more complete description of how to solve the above equation, see linear ODEs with constant coefficients.
Equivalent systems
Harmonic oscillators occurring in a number of areas of engineering are equivalent in the sense that their mathematical models are identical (see universal oscillator equation above). Below is a table showing analogous quantities in four harmonic oscillator systems in mechanics and electronics. If analogous parameters on the same line in the table are given numerically equal values, the behavior of the oscillators—their output waveform, resonant frequency, damping factor, etc.—are the same.
|Translational Mechanical|Torsional Mechanical|Series RLC Circuit|Parallel RLC Circuit|
|Mass m|Moment of inertia I|Inductance L|Capacitance C|
|Spring constant k|Torsion constant κ|Elastance|Susceptance|
|Drive force|Drive torque|Voltage|Current|
|Undamped resonant frequency: √(k/m)|√(κ/I)|1/√(LC)|1/√(LC)|
Application to a conservative force
The problem of the simple harmonic oscillator occurs frequently in physics, because a mass at equilibrium under the influence of any conservative force, in the limit of small motions, behaves as a simple harmonic oscillator.
A conservative force is one that has a potential energy function. The potential energy function of a harmonic oscillator is:
V(x) = (1/2) k x^2.
Given an arbitrary potential energy function V(x), one can do a Taylor expansion in terms of (x − x0) around an energy minimum (x = x0) to model the behavior of small perturbations from equilibrium:
V(x) = V(x0) + V'(x0) (x − x0) + (1/2) V''(x0) (x − x0)^2 + ...
Because x0 is a minimum, the first derivative evaluated at x0 must be zero, so the linear term drops out:
V(x) ≈ V(x0) + (1/2) V''(x0) (x − x0)^2.
The constant term V(x0) is arbitrary and thus may be dropped, and a coordinate transformation allows the form of the simple harmonic oscillator to be retrieved:
V(x) ≈ (1/2) V''(x0) x^2 = (1/2) k x^2, with k = V''(x0).
Thus, given an arbitrary potential energy function with a non-vanishing second derivative, one can use the solution to the simple harmonic oscillator to provide an approximate solution for small perturbations around the equilibrium point.
Simple pendulum
Assuming no damping and small amplitudes, the differential equation governing a simple pendulum is
d^2θ/dt^2 + (g/ℓ) θ = 0, where ℓ is the length of the pendulum and g is the acceleration due to gravity.
The solution to this equation is given by:
θ(t) = θ0 cos(√(g/ℓ) t),
where θ0 is the largest angle attained by the pendulum. The period, the time for one complete oscillation, is given by 2π divided by whatever is multiplying the time in the argument of the cosine (√(g/ℓ) here), so T = 2π √(ℓ/g).
Pendulum swinging over turntable
Simple harmonic motion can in some cases be considered to be the one-dimensional projection of two-dimensional circular motion. Consider a long pendulum swinging over the turntable of a record player. On the edge of the turntable there is an object. If the object is viewed from the same level as the turntable, a projection of the motion of the object seems to be moving backwards and forwards on a straight line orthogonal to the view direction, sinusoidally like the pendulum.
Spring/mass system
When a spring is stretched or compressed by a mass, the spring develops a restoring force. Hooke's law gives the relationship of the force exerted by the spring when the spring is compressed or stretched a certain length:
F(t) = −k x(t),
where F is the force, k is the spring constant, and x is the displacement of the mass with respect to the equilibrium position. This relationship shows that the force of the spring is always opposite in direction to the displacement of the mass.
By using either force balance or an energy method, it can be readily shown that the motion of this system is given by the following differential equation:
F = −k x = m d^2x/dt^2,
...the latter evidently being Newton's second law of motion.
If the initial displacement is A, and there is no initial velocity, the solution of this equation is given by:
x(t) = A cos(√(k/m) t).
Given an ideal massless spring, m is the mass on the end of the spring. If the spring itself has mass, its effective mass must be included in m.
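A short worked example, with made-up numbers chosen only for illustration: for a mass m = 0.50 kg on a spring with k = 200 N/m, the angular frequency is ω = √(k/m) = √(200/0.50) = 20 rad/s, so the frequency is f = ω/(2π) ≈ 3.2 Hz and the period is T = 1/f ≈ 0.31 s.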
Energy variation in the spring–damping system
In terms of energy, all systems have two types of energy, potential energy and kinetic energy. When a spring is stretched or compressed, it stores elastic potential energy, which then is transferred into kinetic energy. The potential energy within a spring is determined by the equation U = (1/2) k x^2.
When the spring is stretched or compressed, kinetic energy of the mass gets converted into potential energy of the spring. By conservation of energy, assuming the datum is defined at the equilibrium position, when the spring reaches its maximum potential energy, the kinetic energy of the mass is zero. When the spring is released, it tries to return to equilibrium, and all its potential energy converts to kinetic energy of the mass.
See also
- Anharmonic oscillator
- Critical speed
- Effective mass (spring-mass system)
- Normal mode
- Parametric oscillator
- Q factor
- Quantum harmonic oscillator
- Radial harmonic oscillator
The Mole and Molar Mass
After completing this unit you should be able to:
We have seen that atoms of different elements have different masses, and that atoms combine in whole number ratios to form compounds. Although as a practical necessity we measure the quantity of matter using mass, it is essential when considering properties and reactions to know the relative number of atoms of each element present in a sample. Even the smallest measurable mass of matter contains trillions of atoms, so chemists use a unit of amount called the mole (abbreviated mol).
By definition, one mole is the number of atoms in 12 g of carbon-12. This number, called Avogadro's number, has been measured to be approximately 6.022 x 10^23 (to 4 s.f.). Avogadro's number is actually known to about nine significant figures, which means the uncertainty is +/- 100 trillion atoms! Although this might seem very imprecise, in practice it is much more precise than our mass measurement.
Chemists use the mole in the same way that grocers use the dozen for groups of 12 and stationers use the ream for groups of 500. By grouping numbers together, we get a smaller number to use in practical situations, 2 gross of paper clips for example, instead of 288 paper clips. Since we are most frequently concerned with relative amounts, we can use the mole without being overly concerned about exactly how many objects it represents, and we can use Avogadro's number to convert it to an actual number if needed.
Since one atom of carbon-12 has a mass of 12 atomic mass units, and one atom of tin-120 has a mass of 120 u, it follows that one mole of tin-120 atoms will have ten times the mass of one mole of carbon-12 atoms, i.e. 120 g. In general, the mass in grams of one mole of atoms of any element will be numerically equivalent to its atomic mass, i.e. the atomic mass unit is equivalent to the unit grams per mole (g/mol).
The number of atoms in a sample of an element can be counted by weighing, in the same way that banks count pennies. As an example, let's determine how many atoms are in a sample of silicon that has a mass of 5.23 g. Consulting a periodic table, we find that the atomic mass of Si is 28.09 u or 28.09 g/mol. We use this as a conversion factor to determine how many moles of silicon are in our sample:
5.23 g Si x (1 mol Si / 28.09 g Si) = 0.186 mol Si
For most purposes we will use this number of moles as the amount of silicon in our sample. If we want to know how many atoms this is, we can use Avogadro's number (6.022 x 10^23 per mol) for the conversion:
0.186 mol Si x (6.022 x 10^23 atoms / 1 mol) = 1.12 x 10^23 Si atoms
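The same two-step conversion can be written as a short program; here is a C++ sketch in which the variable names and output formatting are my own choices and the constants are the values used above.

#include <cstdio>

int main() {
    const double avogadro = 6.022e23;   // atoms per mole
    const double molarMassSi = 28.09;   // g/mol for silicon
    double massSample = 5.23;           // mass of the sample in grams
    double moles = massSample / molarMassSi;  // mass -> moles
    double atoms = moles * avogadro;          // moles -> atoms
    std::printf("%.3f mol, %.3e atoms\n", moles, atoms); // about 0.186 mol, 1.12e23 atoms
    return 0;
}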
The mole is just a number; it can be used for atoms, molecules, ions, electrons, or anything else we wish to refer to. Because we know the formula of water is H2O, for example, then we can say one mole of water molecules contains one mole of oxygen atoms and two moles of hydrogen atoms. One mole of hydrogen atoms has a mass of 1.008 g and 1 mol of oxygen atoms has a mass of 16.00 g, so 1 mol of water has a mass of (2 x 1.008 g) + 16.00 g = 18.02 g. The molar mass of water is 18.02 g/mol.
The formula mass of water is 18.02 u. This is the average mass of one
formula unit (in this case a molecule) of water. In general, the molar mass of any molecular compound is the mass in grams
numerically equivalent to the sum of the atomic masses of the atoms in the formula.
For more complex formulas it is convenient to use a table. The molar mass of glucose, C6H12O6, is:
6 C: 6 x 12.01 g/mol = 72.06 g/mol
12 H: 12 x 1.008 g/mol = 12.10 g/mol
6 O: 6 x 16.00 g/mol = 96.00 g/mol
total = 180.16 g/mol
The molar mass of ionic compounds can be calculated similarly, by adding together the atomic masses of all atoms in the formula to get the formula mass, and expressing the answer in units of g/mol. Calcium chloride, CaCl2, has a molar mass of 40.08 + (2 x 35.45) = 110.98 g/mol. Molar mass is generally calculated with two places after the decimal point, and can be rounded to four significant figures.
Molar mass is used as a conversion factor to relate the amount of a substance to its mass.
For example, suppose you need 5.00 moles of magnesium nitride for an experiment. What mass in grams should you weigh out on your balance? First we need the formula for magnesium nitride. Magnesium, an alkaline earth metal, forms cations with charge 2+, and nitrogen, a group 5A element, forms anions with charge 3-. So the formula for magnesium nitride must be Mg3N2, and we can then determine the molar mass as (3 x 24.31 g/mol) + (2 x 14.01 g/mol) = 100.95 g/mol. Using this as a conversion factor, we can calculate the mass of 5.00 moles of magnesium nitride:
5.00 mol Mg3N2 x (100.95 g / 1 mol) = 505 g Mg3N2
The formula for a compound gives the ratio of the elements in terms of numbers of atoms, which is the same ratio when expressed in moles of atoms. Using the molar masses of the elements and the compound, we can express the composition in terms of mass percentage of the elements. For example, carbon dioxide has a formula weight of 44.01 u, made up of 12.01 u for the average mass of 1 carbon atom and 32.00 u for 2 oxygen atoms. One mole of CO2 has a mass of 44.01 g made up of 12.01 g of carbon and 32.00 g of oxygen. The composition of carbon dioxide is calculated as follows:
% C = (12.01 g / 44.01 g) x 100% = 27.3%
% O = (32.00 g / 44.01 g) x 100% = 72.7%
The formula for a compound, and its composition expressed as percentage by mass, are fixed and unchanging properties of the compound. Any pure sample of carbon dioxide is 72.70 % oxygen by mass.
This property allows us to relate the amount of an element in a compound to the mass of the compound.
As an example, let's calculate the number of moles of iron in 2.98 g of iron(III) oxide. First we need the formula for iron(III) oxide. Since iron(III) is Fe3+ and oxide is O2- the formula must be Fe2O3. We then calculate the molar mass as (2 x 55.85) + (3 x 16.00) = 111.7 + 48.00 = 159.7 g/mol.
The percentage by mass of iron in iron(III) oxide can then be calculated:
(111.7 g Fe / 159.7 g Fe2O3) x 100% = 69.94% Fe
This percentage by mass can be used to calculate the mass of iron in the sample:
2.98 g Fe2O3 x 0.6994 = 2.08 g Fe
Finally, the mass of iron can be converted to an amount in moles using the molar mass:
2.08 g Fe x (1 mol Fe / 55.85 g Fe) = 0.0373 mol Fe
We can arrive at the same answer rather more easily using a mole ratio generated from the formula. The formula tells us that 1 mol of Fe2O3 is composed of 2 mol of iron ions and 3 mol of oxide ions. First we convert our measured mass of iron(III) oxide to moles using the molar mass of the compound, then we convert this to moles of iron using the mole ratio 2 moles of iron per mole of iron(III) oxide:
2.98 g Fe2O3 x (1 mol Fe2O3 / 159.7 g Fe2O3) x (2 mol Fe / 1 mol Fe2O3) = 0.0373 mol Fe
Mole ratios are used in many important calculations in chemistry. Be sure to write in your calculation the complete unit, moles of iron atoms for example, not just mole (remember: mole is just a number). When your units cancel correctly you can be sure you have made the calculation you intended.